Hello Everyone,
In this three-post series, we are going to see how to deploy a deep learning model that performs image classification on AWS and expose its API to the world, so others can interact with it.
Go to Post 1: Link
Go to Post 2: You are exactly where you should be.
Go to Post 3: Link
1. AWS Lambda (Part 2)

In our first post, we addressed the limitation of AWS Lambda, namely that it cannot handle uncompressed deployment packages larger than 250 MB. Therefore we attached EFS, which is like the Google Drive of AWS: it scales horizontally and can hold our deployment packages.
So in this post, we are going to add deployment packages to AWS Lambda using both Lambda Layers and EFS.
1. Let’s get started.
An IAM role grants permissions for one AWS service to access other services. In our case, the Lambda function is going to access EFS (for deployment packages) and S3 (to store model artifacts and the final output results).
Navigate to the IAM console, click Create Role, choose Lambda as the use case, and click Next: Permissions.


Now attach the below-listed policies and then add tags of your choice.

Finally, your console should look like this; then click Create Role.


2. Now let's navigate to AWS Lambda and click Create function.

3. Choose the Author from scratch template and provide a function name of your choice; for ease, I am selecting Python 3.6 as the runtime. (You can choose another runtime such as Python 3.7/3.8 based on how you trained your deep learning model.)

4. Under the Permissions tab, select the execution role we created in the previous IAM step, then click Create function.

Now we are on the home page of our newly created function. Here we will tweak some parameters step by step.
Import Packages using Lambda Layers

5. First, we will create a Lambda Layer to demonstrate how to load lightweight packages into our function code. In this example, I will show how to load the scikit-image package into the Lambda function using Lambda Layers. Layers act as a backend for the function, where we can place dependencies.
For Lambda Layers to work, we need to zip our package in the format below (you can choose any other package of interest):
scikit-image.zip → python/lib/python3.6/site-packages
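For reference, here is one way you might build such a zip locally (a minimal sketch, assuming pip3 and Python are installed on your machine; the folder and archive names are just examples):

import shutil
import subprocess

# Install the package into the folder layout Lambda Layers expect
target = "layer/python/lib/python3.6/site-packages"
subprocess.run(["pip3", "install", "scikit-image", "-t", target], check=True)

# Zip the "layer" folder so that "python/..." sits at the root of the archive
shutil.make_archive("scikit-image", "zip", root_dir="layer")  # creates scikit-image.zip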

6. After creating the zip file of your choice, upload it to one of your existing S3 buckets, or create a new bucket and upload it there. In my case, I have two zip files: one containing scikit-image and another containing some other dependencies. Then copy the object URL.

7. Go to the Lambda home page, click the Layers tab, and click Create layer. Provide a name and description, paste the object URL of the zip file in your S3 bucket under the S3 link URL, and choose the appropriate runtime.



8. After creating the layer, go to your Lambda function and add it.



Now we can see one of the layers attached to our function. Let's try to import this module in our function code.

When we click the Test button, it prompts us to configure a test event. Just provide an event name and click Save.
9. Modify your Lambda function accordingly, click Save, and then click Test to run it. The output prints the skimage version as 0.16.2. This is how we import packages using Lambda Layers.
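The exact code of my function is only visible in the screenshot, so here is a minimal sketch of what the test handler looks like (skimage is resolved from the attached layer):

import json
import skimage  # resolved from the attached Lambda Layer

def lambda_handler(event, context):
    # Print and return the scikit-image version pulled in from the layer
    print("skimage version:", skimage.__version__)
    return {
        "statusCode": 200,
        "body": json.dumps({"skimage_version": skimage.__version__})
    }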

Import Packages using EFS
If we have more (or heavier) packages to import, Lambda Layers cannot handle them, so we configure EFS as the Python backend instead.
Let’s get started.
Under your Lambda function, if you scroll down you will see the VPC and File system sections; let's configure each of them by clicking Edit.

1. Under VPC, choose the default VPC (you can create a custom VPC, but it gets expensive once you add a NAT gateway, etc.).
2. Select all three subnets so that the Lambda function is available across them.
3. Select the security group that we created in Post 1 of this series and click Save.

4. Now let’s configure the Elastic File system.

5. For the file system, select the EFS we created in Post 1 and choose the access point we created for it (in our case, /demo).
Finally, add the local mount path; this is the path where we store our packages and access them from the Lambda function.
By AWS convention it has to start with /mnt/ (here, "/mnt/demo"); then click Save.



Now we have successfully configured both the VPC and the EFS mount point. Note our local mount path: "/mnt/demo".
We will load our (heavy) deployment packages under this path and import them in the Lambda function.
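A quick way to confirm the mount is visible from inside the function is to list its contents (a small sketch; /mnt/demo is the local mount path we just configured):

import os

def lambda_handler(event, context):
    # List whatever has been installed under the EFS mount point
    contents = os.listdir("/mnt/demo")
    print("Contents of /mnt/demo:", contents)
    return {"statusCode": 200, "body": str(contents)}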
6. To load packages onto EFS, we can follow two methods.
I. If you already have all the packages on your local computer, simply copy and paste them into the mount location using WinSCP.
II. Connect to EFS via an EC2 instance and run the command:
pip3 install "Package_name" -t /efs/demo/ --system
Now we can see our EFS file system via both PuTTY and WinSCP, with the demo folder we mounted.

7. Now let's add the PyTorch package to our EFS and try to import it via the Lambda function.
Command: pip3 install torch==1.5.1+cpu torchvision==0.6.1+cpu -f https://download.pytorch.org/whl/torch_stable.html -t /efs/demo/ --system

We can also confirm via WinSCP that all torch-based packages are loaded. Now let's try to import them in the Lambda function.


Also, under Basic settings, increase the timeout depending on your requirement. The maximum is 15 minutes (that's the hard limit), and you can increase the memory up to about 3 GB of RAM for faster processing. Since AWS Lambda pricing is based on both execution time and the number of invocations, configure these based on your needs.
Timeout and memory work together: more memory usually means faster execution and a shorter required timeout, so there's a trade-off between them.
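If you prefer to script these settings instead of clicking through the console, the same change can be made with boto3 (a sketch run from your own machine; "my-efs-lambda" is a placeholder function name):

import boto3

client = boto3.client("lambda")
client.update_function_configuration(
    FunctionName="my-efs-lambda",  # replace with your function name
    Timeout=900,                   # 15 minutes, the hard limit
    MemorySize=3008,               # close to the ~3 GB ceiling mentioned above
)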
8. Great, now we need to add the path of the torch directory mounted on EFS to Python's module search path inside the Lambda function, so that when Python executes it takes that directory into account and loads the torch packages.
import sys
sys.path.insert(0, "/mnt/demo/")  # for Python 3.6
In the output, we can see the versions of the torch packages that were loaded, and the mount path has been added to sys.path.
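Putting it together, the test handler looks roughly like this (a sketch; the versions simply reflect what we installed to EFS above):

import sys
sys.path.insert(0, "/mnt/demo/")  # make packages on the EFS mount importable

import torch        # loaded from /mnt/demo
import torchvision  # loaded from /mnt/demo

def lambda_handler(event, context):
    # Confirm the packages and the mount path are picked up
    print("torch:", torch.__version__, "torchvision:", torchvision.__version__)
    print("sys.path:", sys.path)
    return {"statusCode": 200, "body": torch.__version__}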

9. So far I have demonstrated the torch packages; later you can add any other packages you need, such as six, requests, Pillow, OpenCV, open3d, NumPy, matplotlib, fastai, statistics, etc., using the same pip3 command in the directory "/efs/demo" via EC2 and import them through the mount point "/mnt/demo" in AWS Lambda.
Also, it is good practice to keep an eye on the Metered size of your EFS; since EFS is costlier than S3, make sure you utilize it properly. EFS typically charges based on the amount of storage used.

Endpoint Creation
10. Since we connected EFS to our Lambda through the VPC, by default there is no internet access available to your function (you can create a new VPC and add a NAT gateway for internet access, but it costs about $0.056 per NAT gateway hour), and you also cannot establish connections to other AWS services using the boto3 library. More is explained here.
Fortunately, we can connect our Lambda function to two services through free VPC gateway endpoints and then access them with the boto3 library:
1. Connect to S3 (via Endpoint)
2. Connect to DynamoDb (via Endpoint)
In order to connect to an S3 bucket for data transfer, do the following configuration.
Under the VPC console, navigate to Endpoints and click Create Endpoint.
Then search for S3 under AWS services, select your VPC and the route tables associated with your subnets, click Create endpoint, and wait until the status turns Available.
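Once the endpoint is available, the function can reach S3 with boto3 as usual, even without internet access. Here is a sketch (the bucket name, key, and local file path are placeholders):

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Upload a result file produced by the function; the request is routed
    # through the VPC gateway endpoint rather than the public internet
    s3.upload_file("/tmp/result.json", "my-output-bucket", "results/result.json")
    return {"statusCode": 200, "body": "uploaded"}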





Do the same and search for DynamoDB if you wish to connect a DynamoDB table to your Lambda function.
We have now successfully covered how to deploy heavy packages to AWS Lambda.
Continue to Post 3, where we deploy our deep learning model and call it via API Gateway.
It is best practice to check AWS Lambda pricing: https://aws.amazon.com/lambda/pricing/
Until then, see you next time.
Categories: Machine Learning, Deep Learning , Tags: #AI, #deepscopy