This chapter explores the limitations of current deep learning systems. Despite their successes in tasks such as image recognition, the way these systems learn differs markedly from how humans learn. Three key weaknesses stand out: the reliance on vast labeled datasets, the inability to handle rare or unexpected situations (the "long tail"), and susceptibility to adversarial attacks. Furthermore, the lack of explainability and the presence of biases
in training data raise concerns about the trustworthiness and
reliability of these systems in real-world applications. The chapter
concludes by questioning whether these systems' limitations will
ultimately hinder their broader adoption.
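To make the adversarial-attack weakness concrete, here is a minimal sketch in the spirit of the fast gradient sign method (FGSM), applied to a toy logistic-regression classifier. All weights, inputs, and the perturbation budget are illustrative values chosen for this sketch, not taken from any real model:

```python
import numpy as np

# Toy logistic-regression "model" with fixed, illustrative weights.
w = np.array([2.0, -3.0])
b = 0.5

def predict_proba(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A clean input the model classifies confidently as class 1.
x = np.array([1.0, -0.5])
p_clean = predict_proba(x)

# FGSM-style perturbation: for a linear logit, d(logit)/dx = w, so
# sign(w) is the direction that increases the class-1 score. Stepping
# the opposite way by a small budget epsilon attacks that prediction.
epsilon = 0.4                      # perturbation budget (illustrative)
x_adv = x - epsilon * np.sign(w)   # small, targeted change to the input
p_adv = predict_proba(x_adv)

print(f"clean confidence:       {p_clean:.3f}")
print(f"adversarial confidence: {p_adv:.3f}")
```

Even in this two-dimensional toy, a small input shift sharply reduces the model's confidence; in high-dimensional image classifiers the same effect can flip predictions with perturbations imperceptible to humans.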