Today I’m speaking with Yury Gorishniy about the state of the competition between deep learning and gradient boosted decision trees on tabular datasets, and about a recent paper he published that aims to improve how deep learning performs on such data.
We discuss whether a performance gap really exists between deep learning and gradient boosted decision trees, how that gap might evolve, and the extent to which embedding numerical features can give deep learning architectures the boost they need.
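For listeners curious what "embedding numerical features" looks like in practice, here is a minimal, illustrative PyTorch sketch. It is my own simplification rather than code from the papers: each scalar feature gets its own learned affine map into a vector before being fed to a deep tabular model. The module name, dimensions, and choice of ReLU nonlinearity are assumptions made for the example.

```python
import torch
import torch.nn as nn


class NumericalFeatureEmbedding(nn.Module):
    """Maps each scalar feature x_i to a d-dimensional vector via its own
    learned affine transform followed by a nonlinearity (illustrative only)."""

    def __init__(self, n_features: int, d_embedding: int):
        super().__init__()
        # One weight/bias vector per feature: shape (n_features, d_embedding)
        self.weight = nn.Parameter(torch.randn(n_features, d_embedding) * 0.01)
        self.bias = nn.Parameter(torch.zeros(n_features, d_embedding))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch_size, n_features) -> (batch_size, n_features, d_embedding)
        return torch.relu(x.unsqueeze(-1) * self.weight + self.bias)


if __name__ == "__main__":
    batch = torch.randn(32, 10)  # 32 rows, 10 numerical features
    embed = NumericalFeatureEmbedding(n_features=10, d_embedding=16)
    print(embed(batch).shape)    # torch.Size([32, 10, 16])
```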
Two of his recent papers are useful in this discussion:
You can find Yury in the following places:
Enjoy!