import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.text import Tokenizer

1. Using Tokenizer

train_data = [
    'I love my dog',
    'I love my cat',
    'You love deep learning?'
]
tokenizer = Tokenizer(num_words=100, oov_token="<OOV>")  # keep the 100 most frequent words; map unseen words to <OOV>
tokenizer.fit_on_texts(train_data)  # build the word index from the training sentences
word_index = tokenizer.word_index
print(word_index)
{'<OOV>': 1, 'love': 2, 'i': 3, 'my': 4, 'dog': 5, 'cat': 6, 'you': 7, 'deep': 8, 'learning': 9}

1b) 'love' is the most common word, so it gets the lowest index among the actual words. Note that the reserved <OOV> token always takes index 1, which is why 'love' appears at index 2.
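
To see what the fitted tokenizer is for, a quick sketch: texts_to_sequences maps each sentence to its list of word indices, and any word not seen during fitting falls back to the <OOV> index 1. The test sentence with the unseen word 'really' is an illustrative addition, not part of the original notebook.

# Convert the training sentences into sequences of word indices
sequences = tokenizer.texts_to_sequences(train_data)
print(sequences)
[[3, 2, 4, 5], [3, 2, 4, 6], [7, 2, 8, 9]]

# 'really' was never seen during fitting, so it maps to the <OOV> index 1
# (hypothetical test sentence, added here for illustration)
test_data = ['I really love my dog']
print(tokenizer.texts_to_sequences(test_data))
[[3, 1, 2, 4, 5]]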