Hello Everyone,

In this three-post series, we are going to see how to deploy a deep learning model that performs image classification on AWS and expose it as an API to the world, so that others can interact with it.

Go to Post 1: You are exactly where you should be.

Go to Post 2: Link

Go to Post 3: Link

As stated before, we are going to use three AWS services.



1. AWS Elastic File System (Part - 1)

First of all, EFS is a cloud storage service provided by Amazon Web Services, designed to offer scalable, elastic, concurrent (with some restrictions), and encrypted file storage for use with both AWS cloud services and on-premises resources.

The reason we include EFS in our implementation is the limitation imposed by AWS Lambda: simply running a deep learning model requires a lot of dependencies, along with your preferred DL library such as Keras, TensorFlow, or PyTorch.

AWS Lambda currently enforces a hard limit of 50 MB for the compressed deployment package and 250 MB for the uncompressed package.

For a real-world deployment we might need at least 1 GB of dependencies, so to overcome this limit we make use of AWS Elastic File System: we load our dependencies onto EFS, add that path to Python's import path, and the packages can then be imported without any issues.
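To make this concrete, here is a minimal sketch of the idea (assuming the EFS is already mounted and the packages live under /efs/demo, the directory we create later in this post):

# Install the heavy dependencies onto the EFS mount instead of packaging them with Lambda
pip3 install --target /efs/demo tensorflow pillow

# Add the EFS directory to Python's import path, then import as usual
export PYTHONPATH=/efs/demo:$PYTHONPATH
python3 -c "import tensorflow as tf; print(tf.__version__)"

(Inside the Lambda function itself, the equivalent is appending the mount path to sys.path before the imports.)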

Here I will show both practices: we shall import some libraries from AWS Lambda Layers and some from the AWS EFS.

2. AWS Lambda (Part - 2)

AWS Lambda is completely serverless: we pay only for the compute time and resources we actually use.

We are implementing a serverless approach for our deep learning model using AWS Lambda, where the model performs the actual computation for a given image.

3. AWS API Gateway (Part - 3)

To make our deep learning model available to the outside world, we can use an API gateway to create a connection through which others can interact with our model.




1. AWS Elastic File System (Part - 1)

Amazon Elastic File System (EFS)

Alright, let's get started with the creation of the AWS EFS and its mount points.

1. Before creating an EFS, we need to configure a new security group in AWS to access our EFS via AWS Lambda & EC2. I hope you have already logged in to your AWS account; navigate to Security Groups under the EC2 tab.


2. Create a new security group.





Here I have added two rules.

Rule 1: To connect to our AWS EFS using the IPv4 & IPv6 protocols.

Rule 2: To mount our EFS on an EC2 instance so we can add/replace files (Python packages) on it. Under the Source drop-down, choose the security group attached to the EC2 instance you wish to connect.

Please note that any other running EC2 instances attached to different security groups will not be able to mount your EFS drive unless you add their security groups to the inbound rules, as we did above.

In case you don't have an EC2 instance to add here, there is nothing to worry about. Check my post on Step by step creation of an EC2 instance in AWS and access it via Putty & WinSCP and create one, or do it later and come back to the security groups to add a new entry to the inbound rules here.


Leave the outbound traffic as it is and click Create Security Group.
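If you prefer the command line, here is a rough AWS CLI equivalent of the two rules above (the VPC ID and security group IDs below are placeholders; adjust them to your setup). NFS traffic uses TCP port 2049:

aws ec2 create-security-group --group-name EFS_Lambda_Demo --description "NFS access to EFS" --vpc-id vpc-0123456789abcdef0

# Rule 1: allow inbound NFS over IPv4 and IPv6
aws ec2 authorize-security-group-ingress --group-id sg-0aaaaaaaaaaaaaaaa --protocol tcp --port 2049 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0aaaaaaaaaaaaaaaa --ip-permissions 'IpProtocol=tcp,FromPort=2049,ToPort=2049,Ipv6Ranges=[{CidrIpv6=::/0}]'

# Rule 2: allow inbound NFS from the security group attached to your EC2 instance
aws ec2 authorize-security-group-ingress --group-id sg-0aaaaaaaaaaaaaaaa --protocol tcp --port 2049 --source-group sg-0bbbbbbbbbbbbbbbb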


3. Great, now let's create an Elastic File System (you can think of EFS as something like Google Drive), where the volume scales without any preset limit and you pay for what you use.




By default, the mount targets are attached to the default security group; remove it and replace it with the newly created security group (in our case, EFS_Lambda_Demo), then click the next step.


Add a new tag by creating a key & value pair, as we will be identifying our EFS using this tag later; leave the rest of the options at their defaults and go to the next step.


Here in step 3, configuring client access, use the above values to fill in the access point, and feel free to choose the values for Name and Directory. I have chosen /demo as the directory, so I will later add all the dependencies under this folder on the EFS.


And we are done now; click Create File System.
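For reference, the same console steps as a hedged AWS CLI sketch (the file system, subnet, and security group IDs are placeholders; the tag and the /demo path mirror what we chose above):

# Create the file system with the identifying tag
aws efs create-file-system --creation-token efs-lambda-demo --tags Key=Name,Value=EFS_Lambda_Demo

# Attach the new security group to a mount target in one of your subnets
aws efs create-mount-target --file-system-id fs-0123456789abcdef0 --subnet-id subnet-0123456789abcdef0 --security-groups sg-0aaaaaaaaaaaaaaaa

# Create the access point rooted at /demo
aws efs create-access-point --file-system-id fs-0123456789abcdef0 --posix-user Uid=1000,Gid=1000 --root-directory 'Path=/demo,CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=755}'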


Wait until the Mount target state moves from Creating to Available. Finally, we are done.
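If you would rather poll from the CLI than refresh the console, something like this works (the file system ID is a placeholder):

aws efs describe-mount-targets --file-system-id fs-0123456789abcdef0 --query 'MountTargets[].LifeCycleState'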



4. Create an EC2 instance, mount our EFS using the mount points, and add the Python packages.

I am skipping the EC2 instance creation part here; if you need a refresher, refer to my post on Step by step creation of an EC2 instance in AWS and access it via Putty & WinSCP and come back here.

Now let's go to the EFS page and look at the instructions for mounting our EFS drive on our EC2 instance. Click the link "Amazon EC2 mount instructions (from local VPC)".


It opens a new pop-up window, and since we are using an Ubuntu-based EC2 instance, we stick to the commands meant for an Ubuntu-based OS.

Connect to your instance via Putty and run the highlighted commands.



Typically we need to run these three commands. Please note that the third command is unique to each user, as it contains your own file system ID.

C1 : sudo apt-get install nfs-common


C2 : sudo mkdir /efs (Note that I am creating the mount directory at the root here, hence /efs)

Follow the commands listed in Putty to create a mount point in the root directory.


Subcommands:
SC1 : sudo su
SC2 : cd
SC3 : ls

C2 : sudo mkdir /efs


Checking with the WinSCP tool as well, you can see a new folder "efs" in the root directory.


C3 : sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-74048ea5.efs.ap-south-1.amazonaws.com:/ /efs

(Also note that I am using /efs at the end, instead of efs, to match the directory we created.)
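A quick sanity check that the mount succeeded: the EFS endpoint should now show up as an nfs4 file system on /efs.

df -h /efs
mount | grep nfs4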


Great, now we have successfully mounted the EFS on the EC2 file system. Now let's add some files to our EFS and check the change in size on the AWS EFS tab.


Here I have downloaded the NumPy Python package into the efs folder, and we can see an increase in the Metered Size in the EFS console.
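In case it helps, this is roughly how a package can be placed on the EFS from the instance (assuming python3-pip is available; the /efs/demo path corresponds to the /demo access point directory we chose in step 3):

# Install pip if the instance does not have it yet
sudo apt-get install -y python3-pip

# Install NumPy directly into the dependency folder on the EFS mount
sudo pip3 install --target /efs/demo numpy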


6. While experimenting, I recently found out that when we stop and start an instance, the EFS drive gets unmounted, and I had to run C3 every time to reconnect my EFS. One workaround for this is to add C3 to the user data field present under the instance settings.
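One caveat: by default, a plain shell script in the user data runs only on the instance's very first boot, not on every stop/start (AWS documents a cloud-init multipart trick to change this). A simpler alternative worth considering is an /etc/fstab entry, which remounts the file system on every boot; a sketch using the same file system DNS name as C3 (replace it with your own):

# /etc/fstab entry: remount the EFS automatically at boot
fs-74048ea5.efs.ap-south-1.amazonaws.com:/ /efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev 0 0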



So hereafter we don’t have to worry about connecting our EFS every time.

Alright, once you have successfully implemented this part, you can follow Posts 2 & 3.

It is best practice to check the EFS pricing: https://aws.amazon.com/efs/pricing/

Until then, see you next time.