Announcing The Little Learner: A Straight Line to Deep Learning

Daniel P. Friedman and I (Anurag Mendhekar) are pleased to announce that our upcoming book The Little Learner: A Straight Line to Deep Learning just got its release date, complete with a preorder sale (25% off at Barnes and Noble). The book comes out on 2/21/2023.

"The Little Learner" covers all the concepts necessary to develop an intuitive understanding of the workings of deep neural networks: tensors, extended operators, gradient descent algorithms, artificial neurons, dense networks, convolutional networks, residual networks and automatic differentiation.

The authors aim to explain the workings of Deep Learning to readers who may not have the mathematical sophistication necessary to read the existing literature on the subject. Unlike other books in the field, this book assumes very little background knowledge: only high-school mathematics and some familiarity with programming. The authors use a layered approach, constructing advanced concepts from first principles with really small ("little") programs that build on one another. This is one of the things that makes this book unique.

The other is that it introduces these ideas in the conversational question-and-answer style characteristic of the other books in the Little series. The conversational style puts the reader at ease and lets ideas unfold in a frame-by-frame manner, rather than hitting the reader with a wall of text.

It is (of course!) written using elementary Scheme, and the code will be released as a Racket package.
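To give a taste of the style, here is a toy sketch in that spirit: fitting the line y = wx + b to a few points by plain gradient descent, with the derivatives worked out by hand. To be clear, this is just an illustrative sketch for this post, not code from the book; the names `line`, `gradient`, `step`, and `descend` are made up here, and the book's actual definitions differ.

```racket
#lang racket

;; Toy sketch (not the book's code): fit the line y = wx + b to a
;; few points by gradient descent on the squared-error loss, with
;; the partial derivatives worked out by hand.

(define (line x theta)                 ; theta is the list (w b)
  (+ (* (car theta) x) (cadr theta)))

(define (gradient theta xs ys)         ; (d loss / d w, d loss / d b)
  (list (for/sum ([x xs] [y ys]) (* 2 x (- (line x theta) y)))
        (for/sum ([x xs] [y ys]) (* 2 (- (line x theta) y)))))

(define (step theta xs ys alpha)       ; one gradient-descent update
  (map (λ (p g) (- p (* alpha g)))
       theta
       (gradient theta xs ys)))

(define (descend theta xs ys alpha n)  ; n updates in a row
  (if (zero? n)
      theta
      (descend (step theta xs ys alpha) xs ys alpha (sub1 n))))

;; Points on y = 2x + 1; the result lands close to '(2.0 1.0)
(descend '(0.0 0.0) '(1.0 2.0 3.0) '(3.0 5.0 7.0) 0.01 1000)
```

The book builds up this idea far more carefully, one little function at a time.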

Thank you @themetaschemer :racket_heart:

I’ve added it to Books · racket/racket Wiki · GitHub

Is there a publisher page I can link to?

Best regards

Stephen
:beetle:

This is awesome! Are you able to share the table of contents by any chance?

I have to wait until Feb ’23? The cruelty!

VERY excited to read this—I’ve wanted something like this for a long time. If you want an early reader with a mind unspoiled by any particular intelligence or preexisting deep understanding of the domain, I offer up mine. :wink:

Thanks, Stephen! The MIT Press hasn't put up a page for it yet, but both Amazon and BN have.

Laurent, our ToC by itself is a bit cryptic, but it should give you some idea. The sequencing goes like this:

- a minimal Scheme intro for those who don't know it;
- minimal machine learning by hand;
- tensors;
- operator extension (a toy sketch of what this means follows below);
- gradient descent;
- stochastic gradient descent and variations;
- neurons and universal approximation;
- structuring neural networks;
- classification using dense layers;
- signals;
- convolutional layers;
- and two appendices that are entirely dedicated to automatic differentiation.
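Since "operator extension" is probably the most cryptic entry, here is a toy illustration of the idea (again, a made-up sketch for this post rather than the book's definitions): lifting a scalar operator like + so it also works pointwise over nested lists standing in for tensors of any rank.

```racket
#lang racket

;; Toy sketch (not the book's definitions): "operator extension"
;; lifts a binary scalar operator so it applies pointwise to nested
;; lists standing in for tensors, broadcasting scalars as needed.

(define tensor? pair?)

(define (ext2 f)                       ; extend a binary scalar operator f
  (define (g t u)
    (cond [(and (tensor? t) (tensor? u)) (map g t u)]
          [(tensor? t) (map (λ (e) (g e u)) t)]  ; broadcast scalar u
          [(tensor? u) (map (λ (e) (g t e)) u)]  ; broadcast scalar t
          [else (f t u)]))
  g)

(define t+ (ext2 +))

(t+ '(1 2 3) '(10 20 30))  ; => '(11 22 33)
(t+ '((1 2) (3 4)) 10)     ; => '((11 12) (13 14))
```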

Thanks, Pete! Will keep that in mind!

This will definitely be on my bookshelves come winter/spring next year. Wonderful!
