{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "2397c24f",
   "metadata": {},
   "source": [
    "# Basic Attention Operation: Ungraded Lab\n",
    "\n",
    "As you've learned, attention allows a seq2seq decoder to use information from each encoder step instead of just the final encoder hidden state. In the attention operation, the encoder outputs are weighted based on the decoder hidden state, then combined into one context vector. This vector is then used as input to the decoder to predict the next output step.\n",
    "\n",
    "In this ungraded lab, you'll implement a basic attention operation as described in [Bhadanau, et al (2014)](https://arxiv.org/abs/1409.0473) using Numpy. I'll describe each of the steps which you will be coding."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "c4ac8357",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run this first, a bit of setup for the rest of the lab\n",
    "import numpy as np\n",
    "\n",
    "def softmax(x, axis=0):\n",
    "    \"\"\" Calculate softmax function for an array x along specified axis\n",
    "    \n",
    "        axis=0 calculates softmax across rows which means each column sums to 1 \n",
    "        axis=1 calculates softmax across columns which means each row sums to 1\n",
    "\"\"\"\n",
    "    return np.exp(x) / np.expand_dims(np.sum(np.exp(x), axis=axis), axis)"
   ]
  },
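  {
   "cell_type": "markdown",
   "id": "a3f1c2d0",
   "metadata": {},
   "source": [
    "As a quick sanity check of the helper above, this small sketch (not part of the lab tasks, using a toy array) shows which dimension sums to 1 for each `axis` setting:\n",
    "\n",
    "```python\n",
    "demo = np.array([[1.0, 2.0],\n",
    "                 [3.0, 4.0]])\n",
    "\n",
    "# axis=0: softmax down each column, so the columns sum to 1\n",
    "print(softmax(demo, axis=0).sum(axis=0))  # [1. 1.]\n",
    "\n",
    "# axis=1: softmax across each row, so the rows sum to 1\n",
    "print(softmax(demo, axis=1).sum(axis=1))  # [1. 1.]\n",
    "```"
   ]
  },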
  {
   "cell_type": "markdown",
   "id": "7b5c8948",
   "metadata": {},
   "source": [
    "## 1: Calculating alignment scores\n",
    "\n",
    "The first step is to calculate the alignment scores. This is a measure of similarity between the decoder hidden state and each encoder hidden state. From the paper, this operation looks like\n",
    "\n",
    "$$\n",
    "\\large e_{ij} = v_a^\\top \\tanh{\\left(W_a s_{i-1} + U_a h_j\\right)}\n",
    "$$\n",
    "\n",
    "where $W_a \\in \\mathbb{R}^{n\\times m}$, $U_a \\in \\mathbb{R}^{n \\times m}$, and $v_a \\in \\mathbb{R}^m$\n",
    "are the weight matrices and $n$ is the hidden state size. In practice, this is implemented as a feedforward neural network with two layers, where $m$ is the size of the layers in the alignment network. It looks something like:\n",
    "\n",
    "![alignment model](./images/alignment_model_3.png)\n",
    "\n",
    "Here $h_j$ are the encoder hidden states for each input step $j$ and $s_{i - 1}$ is the decoder hidden state of the previous step. The first layer corresponds to $W_a$ and $U_a$, while the second layer corresponds to $v_a$.\n",
    "\n",
    "To implement this, first concatenate the encoder and decoder hidden states to produce an array with size $K \\times 2n$ where $K$ is the number of encoder states/steps. For this, use `np.concatenate` ([docs](https://numpy.org/doc/stable/reference/generated/numpy.concatenate.html)). Note that there is only one decoder state so you'll need to reshape it to successfully concatenate the arrays. The easiest way is to use `decoder_state.repeat` ([docs](https://numpy.org/doc/stable/reference/generated/numpy.repeat.html#numpy.repeat)) to match the hidden state array size.\n",
    "\n",
    "Then, apply the first layer as a matrix multiplication between the weights and the concatenated input. Use the tanh function to get the activations. Finally, compute the matrix multiplication of the second layer weights and the activations. This returns the alignment scores."
   ]
  },
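  {
   "cell_type": "markdown",
   "id": "b7e2d4a1",
   "metadata": {},
   "source": [
    "To see the shapes involved in the concatenation step, here is a minimal sketch with toy arrays (the sizes here are hypothetical, separate from the lab variables below):\n",
    "\n",
    "```python\n",
    "toy_encoder = np.ones((3, 4))  # K=3 encoder states, hidden size n=4\n",
    "toy_decoder = np.ones((1, 4))  # a single decoder state\n",
    "\n",
    "# Repeat the decoder state K times so the row counts match, then concatenate\n",
    "stacked = np.concatenate([toy_encoder, toy_decoder.repeat(3, axis=0)], axis=1)\n",
    "print(stacked.shape)  # (3, 8), i.e. K x 2n\n",
    "```"
   ]
  },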
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "f277c4a9",
   "metadata": {},
   "outputs": [],
   "source": [
    "hidden_size = 16 # K\n",
    "attention_size = 10 # n\n",
    "input_length = 5 # K\n",
    "\n",
    "np.random.seed(42)\n",
    "\n",
    "# Synthetic vectors used to test\n",
    "encoder_states = np.random.randn(input_length, hidden_size)\n",
    "decoder_state = np.random.randn(1, hidden_size)\n",
    "\n",
    "# Weights for the neural network, these are typically learned through training\n",
    "# Use these in the alignment function below as the layer weights\n",
    "layer_1 = np.random.randn(2*hidden_size, attention_size)\n",
    "layer_2 = np.random.randn(attention_size, 1)\n",
    "\n",
    "# Implement this function. Replace None with your code. Solution at the bottom of the notebook\n",
    "def alignment(encoder_states, decoder_state):\n",
    "    # First, concatenate the encoder states and the decoder state\n",
    "    inputs = np.hstack([encoder_states, decoder_state.repeat(input_length, axis=0)])\n",
    "    assert inputs.shape == (input_length, 2*hidden_size)\n",
    "    \n",
    "    # Matrix multiplication of the concatenated inputs and layer_1, with tanh activation\n",
    "    activations = np.tanh(inputs@layer_1)\n",
    "    assert activations.shape == (input_length, attention_size)\n",
    "    \n",
    "    # Matrix multiplication of the activations with layer_2. Remember that you don't need tanh here\n",
    "    scores = activations@layer_2\n",
    "    assert scores.shape == (input_length, 1)\n",
    "    \n",
    "    return scores\n",
    "# decoder_state.repeat(input_length, axis=0).shape\n",
    "# encoder_states.shape\n",
    "\n",
    "# np.hstack([encoder_states, decoder_state.repeat(input_length, axis=0)]).shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "bc70dc46",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[4.35790943]\n",
      " [5.92373433]\n",
      " [4.18673175]\n",
      " [2.11437202]\n",
      " [0.95767155]]\n"
     ]
    }
   ],
   "source": [
    "# Run this to test your alignment function\n",
    "scores = alignment(encoder_states, decoder_state)\n",
    "print(scores)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8664084e",
   "metadata": {},
   "source": [
    "If you implemented the function correctly, you should get these scores:\n",
    "\n",
    "```python\n",
    "[[4.35790943]\n",
    " [5.92373433]\n",
    " [4.18673175]\n",
    " [2.11437202]\n",
    " [0.95767155]]\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d597b5ee",
   "metadata": {},
   "source": [
    "## 2: Turning alignment into weights\n",
    "\n",
    "The next step is to calculate the weights from the alignment scores. These weights determine the encoder outputs that are the most important for the decoder output. These weights should be between 0 and 1, and add up to 1. You can use the softmax function (which I've already implemented above) to get these weights from the attention scores. Pass the attention scores vector to the softmax function to get the weights. Mathematically,\n",
    "\n",
    "$$\n",
    "\\large \\alpha_{ij} = \\frac{\\exp{\\left(e_{ij}\\right)}}{\\sum_{k=1}^K \\exp{\\left(e_{ik}\\right)}}\n",
    "$$\n",
    "\n",
    "\n",
    "\n",
    "## 3: Weight the encoder output vectors and sum\n",
    "\n",
    "The weights tell you the importance of each input word with respect to the decoder state. In this step, you use the weights to modulate the magnitude of the encoder vectors. Words with little importance will be scaled down relative to important words. Multiply each encoder vector by its respective weight to get the alignment vectors, then sum up the weighted alignment vectors to get the context vector. Mathematically,\n",
    "\n",
    "$$\n",
    "\\large c_i = \\sum_{j=1}^K\\alpha_{ij} h_{j}\n",
    "$$\n",
    "\n",
    "Implement these steps in the `attention` function below."
   ]
  },
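  {
   "cell_type": "markdown",
   "id": "c9d3e5f2",
   "metadata": {},
   "source": [
    "Before you implement `attention`, here is a minimal sanity check of the softmax step (assuming you've run the test cell above, so `scores` exists):\n",
    "\n",
    "```python\n",
    "weights = softmax(scores)\n",
    "print(weights.shape)   # (input_length, 1), one weight per encoder step\n",
    "print(np.sum(weights)) # 1.0, up to floating point error\n",
    "```"
   ]
  },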
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "3a459fc2",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[-0.63514569  0.04917298 -0.43930867 -0.9268003   1.01903919 -0.43181409\n",
      "  0.13365099 -0.84746874 -0.37572203  0.18279832 -0.90452701  0.17872958\n",
      " -0.58015282 -0.58294027 -0.75457577  1.32985756]\n"
     ]
    }
   ],
   "source": [
    "# Implement this function. Replace None with your code.\n",
    "def attention(encoder_states, decoder_state):\n",
    "    \"\"\" Example function that calculates attention, returns the context vector \n",
    "    \n",
    "        Arguments:\n",
    "        encoder_vectors: NxM numpy array, where N is the number of vectors and M is the vector length\n",
    "        decoder_vector: 1xM numpy array, M is the vector length, much be the same M as encoder_vectors\n",
    "    \"\"\" \n",
    "    \n",
    "    # First, calculate the alignment scores\n",
    "    scores = alignment(encoder_states, decoder_state)\n",
    "    \n",
    "    # Then take the softmax of the alignment scores to get a weight distribution\n",
    "    weights = softmax(scores)\n",
    "    \n",
    "    # Multiply each encoder state by its respective weight\n",
    "    weighted_scores = encoder_states*weights\n",
    "    \n",
    "    # Sum up weighted alignment vectors to get the context vector and return it\n",
    "    context = np.sum(weighted_scores, axis=0)\n",
    "    return context\n",
    "\n",
    "context_vector = attention(encoder_states, decoder_state)\n",
    "print(context_vector)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5fc2f990",
   "metadata": {},
   "source": [
    "If you implemented the `attention` function correctly, the context vector should be\n",
    "\n",
    "```python\n",
    "[-0.63514569  0.04917298 -0.43930867 -0.9268003   1.01903919 -0.43181409\n",
    "  0.13365099 -0.84746874 -0.37572203  0.18279832 -0.90452701  0.17872958\n",
    " -0.58015282 -0.58294027 -0.75457577  1.32985756]\n",
    "```\n",
    "\n"
   ]
  },
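  {
   "cell_type": "markdown",
   "id": "d1f4a6b3",
   "metadata": {},
   "source": [
    "If you'd rather check programmatically than eyeball the numbers, a sketch along these lines (using the expected values above) works:\n",
    "\n",
    "```python\n",
    "expected = np.array([-0.63514569,  0.04917298, -0.43930867, -0.9268003,\n",
    "                      1.01903919, -0.43181409,  0.13365099, -0.84746874,\n",
    "                     -0.37572203,  0.18279832, -0.90452701,  0.17872958,\n",
    "                     -0.58015282, -0.58294027, -0.75457577,  1.32985756])\n",
    "assert np.allclose(context_vector, expected, atol=1e-6)\n",
    "```"
   ]
  },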
  {
   "cell_type": "markdown",
   "id": "5b73a23f",
   "metadata": {},
   "source": [
    "## See below for solutions"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0184d87d",
   "metadata": {},
   "source": [
    "```python\n",
    "# Solution\n",
    "def alignment(encoder_states, decoder_state):\n",
    "    # First, concatenate the encoder states and the decoder state.\n",
    "    inputs = np.concatenate((encoder_states, decoder_state.repeat(input_length, axis=0)), axis=1)\n",
    "    assert inputs.shape == (input_length, 2*hidden_size)\n",
    "    \n",
    "    # Matrix multiplication of the concatenated inputs and the first layer, with tanh activation\n",
    "    activations = np.tanh(np.matmul(inputs, layer_1))\n",
    "    assert activations.shape == (input_length, attention_size)\n",
    "    \n",
    "    # Matrix multiplication of the activations with the second layer. Remember that you don't need tanh here\n",
    "    scores = np.matmul(activations, layer_2)\n",
    "    assert scores.shape == (input_length, 1)\n",
    "    \n",
    "    return scores\n",
    "\n",
    "# Run this to test your alignment function\n",
    "scores = alignment(encoder_states, decoder_state)\n",
    "print(scores)\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8cbe10b0",
   "metadata": {},
   "source": [
    "```python\n",
    "# Solution\n",
    "def attention(encoder_states, decoder_state):\n",
    "    \"\"\" Example function that calculates attention, returns the context vector \n",
    "    \n",
    "        Arguments:\n",
    "        encoder_vectors: NxM numpy array, where N is the number of vectors and M is the vector length\n",
    "        decoder_vector: 1xM numpy array, M is the vector length, much be the same M as encoder_vectors\n",
    "    \"\"\" \n",
    "    \n",
    "    # First, calculate the dot product of each encoder vector with the decoder vector\n",
    "    scores = alignment(encoder_states, decoder_state)\n",
    "    \n",
    "    # Then take the softmax of those scores to get a weight distribution\n",
    "    weights = softmax(scores)\n",
    "    \n",
    "    # Multiply each encoder state by its respective weight\n",
    "    weighted_scores = encoder_states * weights\n",
    "    \n",
    "    # Sum up the weights encoder states\n",
    "    context = np.sum(weighted_scores, axis=0)\n",
    "    \n",
    "    return context\n",
    "\n",
    "context_vector = attention(encoder_states, decoder_state)\n",
    "print(context_vector)\n",
    "```"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}