
Embeddings for numerical features in tabular deep learning

The ubiquitous transformers

The crucial insight: input representation matters

Piecewise linear encodings
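A minimal numpy sketch of piecewise linear encoding (PLE), assuming the formulation from Gorishniy et al.: a scalar feature is mapped to a T-dimensional vector using T+1 bin boundaries (commonly quantiles of the training distribution). The function name, bin count, and quantile choice here are illustrative, not the paper's reference implementation.

```python
import numpy as np

def piecewise_linear_encode(x, bin_edges):
    """Encode scalars x as vectors with one component per bin.

    Component t is 0 below bin t, 1 above it, and linearly
    interpolated inside it, so the encoding is continuous in x.
    """
    x = np.asarray(x, dtype=float)[:, None]       # (n, 1)
    left = bin_edges[:-1][None, :]                # (1, T)
    right = bin_edges[1:][None, :]                # (1, T)
    return np.clip((x - left) / (right - left), 0.0, 1.0)

# Bin edges from training-set quantiles (an illustrative choice).
train = np.random.default_rng(0).normal(size=1000)
edges = np.quantile(train, np.linspace(0.0, 1.0, 5))  # 4 bins
enc = piecewise_linear_encode(train[:2], edges)       # (2, 4)
```

Because each component saturates at 0 or 1 outside its bin, the encoding degrades gracefully for out-of-range values instead of extrapolating.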

Encoding with periodic activation functions
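A hedged numpy sketch of the periodic encoding, assuming the sin/cos form used in Gorishniy et al. (and related to the random Fourier features of Tancik et al.): a scalar x is mapped to [sin(2πc_i·x), cos(2πc_i·x)] for a set of frequency coefficients c_i. In the paper the coefficients are trainable parameters initialized from N(0, σ²); here they are fixed random draws, and `k` and `sigma` are illustrative hyperparameters.

```python
import numpy as np

def periodic_embedding(x, coeffs):
    """Map scalars x to [sin(2*pi*c_i*x), cos(2*pi*c_i*x)] features.

    Trainable coefficients in the original method; frozen random
    draws here for illustration.
    """
    x = np.asarray(x, dtype=float)[:, None]        # (n, 1)
    angles = 2.0 * np.pi * coeffs[None, :] * x     # (n, k)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

rng = np.random.default_rng(0)
k, sigma = 8, 1.0                                  # illustrative sizes
c = rng.normal(0.0, sigma, size=k)
emb = periodic_embedding([0.3, -1.2], c)           # (2, 2*k)
```

The spread of the frequencies (σ) controls how fine-grained the resulting representation is, which is why it is treated as a tuned hyperparameter.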

Using numerical feature embeddings
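A small sketch of how per-feature embeddings plug into a backbone, under the assumption (as in the cited papers) that each numerical feature gets its own embedding and a Transformer consumes the embeddings as a stack of tokens while an MLP consumes them flattened. The simplest per-feature linear embedding is shown; shapes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 32, 3, 8          # rows, numerical features, embedding dim (illustrative)
X = rng.normal(size=(n, m))

# Per-feature linear embedding x -> x * w + b:
# each of the m features has its own weight and bias vectors.
W = rng.normal(size=(m, d))
b = rng.normal(size=(m, d))
tokens = X[:, :, None] * W[None, :, :] + b[None, :, :]  # (n, m, d)

# A Transformer-style backbone consumes the (n, m, d) token stack;
# an MLP backbone takes the flattened embeddings instead.
mlp_input = tokens.reshape(n, m * d)                    # (n, m*d)
```

Swapping the linear embedding for the piecewise linear or periodic encodings changes only how `tokens` is produced; the backbone side stays the same.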

Results

Summary

References
- Y. Gorishniy, I. Rubachev, A. Babenko. “On embeddings for numerical features in tabular deep learning.” NeurIPS 2022.
- Y. Gorishniy, I. Rubachev, V. Khrulkov, A. Babenko. “Revisiting deep learning models for tabular data.” NeurIPS 2021.
- G. Somepalli et al. “SAINT: Improved neural networks for tabular data via row attention and contrastive pre-training.”
- M. Tancik et al. “Fourier features let networks learn high frequency functions in low dimensional domains.” NeurIPS 2020.