Transfer Learning in TensorFlow Part 2: Fine-tuning
1:07 Building Model 2 (with a data augmentation layer and 10% of training data)
17:44 Creating a ModelCheckpoint to save our model’s weights during training
25:09 Fitting and evaluating Model 2 (and saving its weights using ModelCheckpoint)
32:23 Loading and comparing saved weights to our existing trained Model 2
39:41 Preparing Model 3 (our first fine-tuned model)
1:00:08 Fitting and evaluating Model 3 (our first fine-tuned model)
1:07:54 Comparing our model’s results before and after fine-tuning
1:18:21 Downloading and preparing data for our biggest experiment yet (Model 4)
1:24:46 Preparing our final modelling experiment (Model 4)
1:36:47 Fine-tuning Model 4 on 100% of the training data and evaluating its results
1:47:06 Comparing our modelling experiment results in TensorBoard
1:57:52 How to view and delete previous TensorBoard experiments
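As a recap of this section’s workflow, here is a minimal sketch of saving weights with a ModelCheckpoint callback and then unfreezing part of a base model for fine-tuning. The course uses a pretrained EfficientNetB0 as the base model; a tiny Sequential stack stands in for it here so the sketch runs without downloading weights, and names like `checkpoint_cb` and the file path are illustrative, not the course’s exact code.

```python
import tensorflow as tf

# Stand-in "base model" (the course uses a pretrained EfficientNetB0;
# a tiny Sequential model keeps this sketch self-contained).
base_model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, name="block_1"),
    tf.keras.layers.Dense(8, name="block_2"),
    tf.keras.layers.Dense(8, name="block_3"),
])
base_model.trainable = False  # feature extraction: freeze the whole base

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Save only the weights (not the whole model) during training,
# keeping the best checkpoint as judged by validation loss.
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    "checkpoints/model.weights.h5",  # illustrative path
    save_weights_only=True,
    save_best_only=True,
    monitor="val_loss",
)
# model.fit(..., callbacks=[checkpoint_cb]) would go here, and
# model.load_weights("checkpoints/model.weights.h5") restores the weights.

# Fine-tuning: unfreeze the top of the base model and recompile with a
# ~10x lower learning rate so the pretrained weights change only slightly.
base_model.trainable = True
for layer in base_model.layers[:-1]:
    layer.trainable = False  # keep all but the last block frozen

model.compile(loss="sparse_categorical_crossentropy",
              optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              metrics=["accuracy"])
```

Note that recompiling after changing `trainable` is required for the change to take effect in training.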
Transfer Learning with TensorFlow Part 3: Scaling Up
1:59:57 Introduction to Transfer Learning Part 3: Scaling Up
2:06:17 Getting helper functions ready and downloading data to model
2:19:51 Outlining the model we’re going to build and building a ModelCheckpoint callback
2:25:30 Creating a data augmentation layer to use with our model
2:30:09 Creating a headless EfficientNetB0 model with data augmentation built in
2:39:08 Fitting and evaluating our biggest transfer learning model yet
2:47:05 Unfreezing some layers in our base model to prepare for fine-tuning
2:58:34 Fine-tuning our feature extraction model and evaluating its performance
3:06:58 Saving and loading our trained model
3:13:24 Downloading a pretrained model to make and evaluate predictions with
3:19:58 Making predictions with our trained model on 25,250 test samples
3:32:45 Unravelling our test dataset for comparing ground truth labels to predictions
3:38:50 Confirming our model’s predictions are in the same order as the test labels
3:44:07 Creating a confusion matrix for our model’s 101 different classes
3:56:15 Evaluating every individual class in our dataset
4:10:32 Plotting our model’s F1-scores for each separate class
4:18:08 Creating a function to load and prepare images for making predictions
4:30:17 Making predictions on our test images and evaluating them
4:46:23 Discussing the benefits of finding your model’s most wrong predictions
4:52:33 Writing code to uncover our model’s most wrong predictions
5:03:49 Plotting and visualising the samples our model got most wrong
5:14:26 Making predictions on and plotting our own custom images
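The “most wrong” predictions covered in this section (misclassifications made with high confidence) can be found with a few lines of NumPy. This is a sketch of the idea rather than the course’s exact code; `most_wrong` is an illustrative name.

```python
import numpy as np

def most_wrong(y_true, y_pred_probs, top_k=5):
    """Indices of misclassified samples, most confident mistakes first."""
    y_pred = y_pred_probs.argmax(axis=1)        # predicted class per sample
    pred_conf = y_pred_probs.max(axis=1)        # confidence in that prediction
    wrong = np.where(y_pred != np.asarray(y_true))[0]  # misclassified indices
    # Sort the wrong predictions by confidence, highest first
    return wrong[np.argsort(pred_conf[wrong])[::-1]][:top_k]

# Toy example: samples 1 and 2 are wrong, with 0.8 and 0.6 confidence
probs = np.array([[0.9, 0.1],   # true 0, predicted 0 (correct)
                  [0.8, 0.2],   # true 1, predicted 0 (wrong, conf 0.8)
                  [0.4, 0.6]])  # true 0, predicted 1 (wrong, conf 0.6)
print(most_wrong([0, 1, 0], probs))  # → [1 2]
```

Inspecting these samples often surfaces mislabelled data or genuinely confusable classes, which is why the course highlights this technique.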
Milestone Project 1: Food Vision Big™
5:24:16 Making sure we have access to the right GPU for mixed precision training
5:34:33 Getting helper functions ready
5:37:40 Introduction to TensorFlow Datasets (TFDS)
5:49:43 Exploring and becoming one with the data (Food101 from TensorFlow Datasets)
6:05:39 Creating a preprocessing function to prepare our data for modelling
6:21:29 Batching and preparing our datasets (to make them run fast)
6:35:17 Exploring what happens when we batch and prefetch our data
6:42:06 Creating modelling callbacks for our feature extraction model
6:49:21 Turning on mixed precision training with TensorFlow
6:59:26 Creating a feature extraction model capable of using mixed precision training
7:12:09 Checking to see if our model is using mixed precision training layer by layer
7:20:05 Introducing your Milestone Project 1 challenge: build a model to beat DeepFood
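A minimal sketch of the mixed precision and input-pipeline setup this project uses: set the global dtype policy, batch and prefetch the data, keep the output layer in float32 for numerical stability, then check each layer’s dtype policy. Layer sizes and the `prepare` helper name are my own; on hardware without a compatible GPU (compute capability 7.0+) the policy still applies, just without a speedup.

```python
import tensorflow as tf

# Mixed precision: compute in float16 where possible, keep variables in float32
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# Batch and prefetch so data loading overlaps with GPU compute
def prepare(ds, batch_size=32):
    return ds.batch(batch_size).prefetch(tf.data.AUTOTUNE)

inputs = tf.keras.Input(shape=(4,))
x = tf.keras.layers.Dense(8)(inputs)  # runs under the mixed_float16 policy
# Force the output layer to float32 so the softmax stays numerically stable
outputs = tf.keras.layers.Dense(3, activation="softmax", dtype="float32")(x)
model = tf.keras.Model(inputs, outputs)

# Check mixed precision is active, layer by layer
for layer in model.layers:
    print(layer.name, layer.dtype_policy.name, layer.compute_dtype)
```

Under the `mixed_float16` policy each layer computes in float16 but stores its variables in float32, which is what the layer-by-layer check in this section confirms.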
NLP Fundamentals in TensorFlow
7:27:53 Introduction to Natural Language Processing (NLP) and Sequence Problems
7:40:45 Example NLP inputs and outputs
7:48:08 The typical architecture of a Recurrent Neural Network (RNN)
7:57:11 Preparing a notebook for our first NLP with TensorFlow project
8:06:04 Becoming one with the data and visualising a text dataset
8:22:45 Splitting data into training and validation sets
8:29:12 Converting text data to numbers using tokenisation and embeddings (overview)
8:38:35 Setting up a TensorFlow TextVectorization layer to convert text to numbers
8:55:45 Mapping the TextVectorization layer to text data and turning it into numbers
9:06:48 Creating an Embedding layer to turn tokenised text into embedding vectors
9:19:16 Discussing the various modelling experiments we’re going to run
9:28:13 Model 0: Building a baseline model to try and improve upon
9:37:39 Creating a function to track and evaluate our model’s results
9:49:53 Model 1: Building, fitting and evaluating our first deep model on text data
10:10:45 Visualising our model’s learned word embeddings with TensorFlow’s projector tool
10:31:29 High-level overview of Recurrent Neural Networks (RNNs) and where to learn more
10:41:03 Model 2: Building, fitting and evaluating our first TensorFlow RNN model (LSTM), a GRU-cell powered RNN, and a bidirectional RNN model
11:35:51 Discussing the intuition behind Conv1D neural networks for text and sequences
11:55:23 Model 5: Building, fitting and evaluating a 1D CNN for text
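The TextVectorization → Embedding → recurrent-layer pipeline this section builds up can be sketched end to end as follows. The toy texts, vocabulary size, and layer dimensions are my own placeholders; the course trains on a real text dataset and runs the LSTM, GRU, and bidirectional variants as separate experiments.

```python
import tensorflow as tf

texts = ["a small example sentence", "another example sentence"]  # toy data

# Map raw strings to fixed-length sequences of integer token ids
text_vectorizer = tf.keras.layers.TextVectorization(
    max_tokens=1000, output_sequence_length=8)
text_vectorizer.adapt(texts)  # learn the vocabulary from the text

# text -> token ids -> embedding vectors -> LSTM -> binary prediction
inputs = tf.keras.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs)
x = tf.keras.layers.Embedding(input_dim=1000, output_dim=16)(x)
x = tf.keras.layers.LSTM(32)(x)  # swap for GRU or Bidirectional(LSTM(...))
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)

preds = model(tf.constant([["a new example"]]))  # probability in [0, 1]
```

Because the TextVectorization layer is built into the model, it accepts raw strings directly, which is the pattern these chapters use for all the text-model experiments.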