## Hints

- Use the `line_to_tensor` function above inside a list comprehension in order to pad the lines with zeros.
- Keep in mind that the length of the tensor is always 1 + the length of the original line of characters. Keep this in mind when setting the padding of zeros.
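The padding hint above can be sketched in plain NumPy. The `line_to_tensor` below is only a stand-in for the notebook's own function, assumed to map each character to its integer code and append an end-of-sentence marker (which is why each tensor is 1 longer than its line):

```python
import numpy as np

def line_to_tensor(line, EOS_int=1):
    # Stand-in for the notebook's function: character codes plus an
    # end-of-sentence marker, so the tensor is 1 + len(line) long.
    return [ord(c) for c in line] + [EOS_int]

lines = ["hello", "hi"]

# Longest padded length: longest line + 1 for the EOS marker.
max_len = max(len(line) for line in lines) + 1

# Pad each tensor with zeros on the right inside a list comprehension;
# note the "- 1" accounting for the EOS marker already appended.
batch = np.array([line_to_tensor(line) + [0] * (max_len - len(line) - 1)
                  for line in lines])

print(batch.shape)       # (2, 6)
print(batch[1].tolist()) # "hi" -> [104, 105, 1, 0, 0, 0]
```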
## Hints

- To convert the target into the same dimension as the predictions tensor, use `tl.one_hot` with `target` and `preds.shape[-1]`.
- You will also need the `np.equal` function in order to unpad the data and properly compute perplexity.
- Keep in mind while implementing the formula above that $w_i$ represents a letter from our 256-letter alphabet.
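A minimal NumPy sketch of the masked perplexity these hints describe. The `one_hot` helper below only stands in for `tl.one_hot`, the pad id is assumed to be 0, and a 4-symbol alphabet replaces the real 256-letter one to keep the numbers small:

```python
import numpy as np

def one_hot(targets, depth):
    # Stand-in for tl.one_hot: identity-matrix rows indexed by target ids.
    return np.eye(depth)[targets]

# Toy log-probabilities, shape (batch, seq_len, vocab): a uniform model
# that assigns probability 0.25 to every symbol at every position.
preds = np.log(np.full((1, 3, 4), 0.25))
target = np.array([[2, 3, 0]])  # last position is padding (id 0)

# Select log P(w_i) by multiplying with the one-hot target and summing
# over the vocabulary axis.
log_p = np.sum(preds * one_hot(target, preds.shape[-1]), axis=-1)

# np.equal builds the mask that "unpads" the sequence.
non_pad = 1.0 - np.equal(target, 0)

# Average log-probability over real (non-pad) positions only.
log_ppx = np.sum(log_p * non_pad) / np.sum(non_pad)
perplexity = np.exp(-log_ppx)
print(perplexity)  # 4.0: a uniform model over 4 symbols has perplexity 4
```

Without the `np.equal` mask, the zero-padded positions would be averaged in and artificially change the perplexity.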