Run shell commands on an EC2 instance from a Lambda function

Today I was stuck finding a solution to a very specific problem: how to execute one or more shell commands on a Linux EC2 instance on AWS in response to a particular event, let's say a file upload to an S3 bucket.

As you probably already know, you can create triggers in response to events on several AWS services, S3 included (of course :) ). Usually you can bind these events to the execution of a Lambda function or to a notification message on an SNS topic or SQS queue. Unfortunately, you can't directly bind these triggers to an EC2 resource. So, how can you solve this problem?

The typical messaging pattern suggests using an SQS queue, so you can benefit from all the features of a real message queuing system. Of course, for complex operations and/or in contexts where reliability is key, this is the correct choice. But what about a simpler situation?
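If you went down that road, a small worker process on the EC2 instance would poll the queue and run the command itself. Here is a minimal sketch using the aws-sdk for Node.js; the queue URL and the echoed command are hypothetical placeholders, not part of the original setup:

var AWS = require('aws-sdk');
var exec = require('child_process').exec;

var sqs = new AWS.SQS({ region: 'us-east-1' });
var queueUrl = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-upload-queue'; // hypothetical queue

function poll() {
    // long-poll the queue for S3 notification messages
    sqs.receiveMessage({ QueueUrl: queueUrl, MaxNumberOfMessages: 1, WaitTimeSeconds: 20 }, function(err, data) {
        if (err) { console.log(err); return poll(); }
        (data.Messages || []).forEach(function(msg) {
            // run a shell command locally on the instance
            exec('echo "received: ' + msg.Body + '"', function() {
                // delete the message once it has been handled
                sqs.deleteMessage({ QueueUrl: queueUrl, ReceiptHandle: msg.ReceiptHandle }, function() {});
            });
        });
        poll();
    });
}

poll();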

Enter Lambda functions

For a much simpler context, you can use a Lambda function that connects to your EC2 instance through SSH and executes the command. We can choose the Node.js environment and use a simple package called "simple-ssh", which enables our function to connect and run a sequence of commands. Check this example:

var SSH = require('simple-ssh');
var ssh = new SSH({
    host: 'localhost',
    user: 'username',
    pass: 'password'
});

ssh.exec('echo $PATH', {
    out: function(stdout) {
        console.log(stdout);
    }
}).start();

So, going back to my example, we should create a Lambda function, triggered by an S3 event, that uses simple-ssh. Let's start by creating the Lambda function (a scripted alternative to the console steps is sketched after the screenshots below):

  • first of all, go to your AWS console and select Lambda from the services list
  • click on "Create a Lambda function" and select the "Blank Function" blueprint
  • now you can choose a trigger for your function: choose S3 (see Image 1 below)
  • select your bucket, the event type (e.g. Object Created (All)), and an optional prefix and suffix
  • don't forget to check the "enable trigger" checkbox

Image 1: select S3 from the AWS service list to define a trigger

Image 2: select one of your buckets from the list and the "event type". In this example I chose "Object Created (All)".

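If you prefer to script this setup instead of clicking through the console, the same trigger can be configured with the AWS SDK. The sketch below assumes a function named "myS3Handler" and the "sourcebucket" bucket; the names, account ID and the prefix/suffix filters are illustrative placeholders:

var AWS = require('aws-sdk');
var lambda = new AWS.Lambda({ region: 'us-east-1' });
var s3 = new AWS.S3({ region: 'us-east-1' });

// 1. allow S3 to invoke the function (hypothetical function and bucket names)
lambda.addPermission({
    FunctionName: 'myS3Handler',
    StatementId: 's3-invoke-permission',
    Action: 'lambda:InvokeFunction',
    Principal: 's3.amazonaws.com',
    SourceArn: 'arn:aws:s3:::sourcebucket'
}, function(err) {
    if (err) return console.log(err);
    // 2. attach the "Object Created (All)" notification, with optional prefix/suffix filters
    s3.putBucketNotificationConfiguration({
        Bucket: 'sourcebucket',
        NotificationConfiguration: {
            LambdaFunctionConfigurations: [{
                LambdaFunctionArn: 'arn:aws:lambda:us-east-1:123456789012:function:myS3Handler',
                Events: ['s3:ObjectCreated:*'],
                Filter: { Key: { FilterRules: [
                    { Name: 'prefix', Value: 'source/' },
                    { Name: 'suffix', Value: '.jpg' }
                ] } }
            }]
        }
    }, function(err2) {
        if (err2) console.log(err2); else console.log('trigger configured');
    });
});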

Ok, now that we have defined our trigger, let's move on with the Lambda. Our function will receive, as its event, a list of Records containing data related to the S3 object. You can see an example of the JSON you'll receive by configuring a test event for your Lambda function and selecting the "S3 Put" sample. Here's a snippet:

{
    "Records": [
        {
            "eventVersion": "2.0",
            "eventTime": "1970-01-01T00:00:00.000Z",
            "requestParameters": {
                "sourceIPAddress": "127.0.0.1"
            },
            "s3": {
                "configurationId": "testConfigRule",
                "object": {
                    "eTag": "0123456789abcdef0123456789abcdef",
                    "sequencer": "0A1B2C3D4E5F678901",
                    "key": "source/domus/HappyFace.jpg",
                    "size": 1024
                },
                "bucket": {
                    "arn": "arn:aws:s3:::mybucket",
                    "name": "sourcebucket",
                    "ownerIdentity": {
                        "principalId": "EXAMPLE"
                    }
                },
                "s3SchemaVersion": "1.0"
            },
            "responseElements": {
                "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH",
                "x-amz-request-id": "EXAMPLE123456789"
            },
            "awsRegion": "us-east-1",
            "eventName": "ObjectCreated:Put",
            "userIdentity": {
                "principalId": "EXAMPLE"
            },
            "eventSource": "aws:s3"
        }
    ]
}

As you can see, our Lambda will receive, among other things, the S3 object's key and size and the bucket's name.

We can now write our handler to manage the event:

var SSH = require('simple-ssh');
var fs = require('fs');

exports.handler = function(event, context) {
    const bucket = event.Records[0].s3.bucket.name;
    const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
    const region = event.Records[0].awsRegion;

    /* -- the s3 aws cli command to launch via ssh in your EC2 -- */
    var s3FileCommand = 'aws s3 cp s3://' + bucket + '/' + key + ' ./' + key + ' --region ' + region;

    /* -- create SSH object with the credentials that you need to connect to your EC2 instance -- */
    var ssh = new SSH({
        host: 'YOUR_EC2_IP_ADDRESS',
        user: 'YOUR_EC2_USER(ubuntu/ec2-user)',
        key: fs.readFileSync("your_ec2_keypair.pem")
    });

    /* -- execute SSH commands in sequence -- */
    ssh.exec('cd /myfolder/mysubfolder').exec('ls -al', {
        out: function(stdout) {
            console.log('ls -al got:');
            console.log(stdout);
            console.log('now launching command');
            console.log(s3FileCommand);
        }
    }).exec('' + s3FileCommand, {
        out: console.log.bind(console),
        exit: function(code, stdout, stderr) {
            console.log('operation exited with code: ' + code);
            console.log('STDOUT from EC2:\n' + stdout);
            console.log('STDERR from EC2:\n' + stderr);
            context.succeed('Success!');
        }
    }).start();
};

In our handler we start by retrieving the object's key and the bucket's name; we also retrieve the AWS region we need to pass to the AWS CLI S3 command we'll launch on our EC2 instance.
Then, we build the S3 command to launch. You can find more info on the AWS CLI commands for S3 at this page.

Before executing the command we need to set up the SSH object, passing as parameters all the credentials needed to connect to the EC2 instance. As you can see, you'll need to pass the .pem keypair file data, and that's why you'll need to put this file in the Lambda's ZIP package. Also, please remember to set the correct file permissions to allow the Lambda container to access and read the file.
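If you want to fail fast when the packaged key is missing or unreadable, you can wrap the read in a small helper. This is just a defensive sketch around the same fs.readFileSync call, with a hypothetical file name:

var fs = require('fs');

// hypothetical helper: load the bundled keypair and give a clearer error if it can't be read
function loadKeyPair(path) {
    try {
        return fs.readFileSync(path);
    } catch (err) {
        console.log('Cannot read ' + path + ': make sure it was added to the ZIP with read permissions');
        throw err;
    }
}

var keyData = loadKeyPair('your_ec2_keypair.pem');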

As the last operation, we launch a chain of commands via SSH:

  1. we go to a specific path with the cd command
  2. we perform an ls command, in order to see in the console the list of files at that specific path
  3. we launch the S3 command we built earlier in the code

As you can see, you can log the output of all the performed steps to the Lambda console, and you will be able to see those logs in CloudWatch.

Security

Ok, cool! But... what about security and secure access to your EC2 instance from a Lambda?

First of all, you must configure your Lambda to be executed in the same VPC as your EC2 instance. You can do this by opening the "Advanced Settings" section in the configuration page, as shown in the image below.

Image 3: selecting the appropriate VPC, subnets and SecurityGroups for your Lambda


On the same page, remember to select at least two subnets in your VPC in order to have the correct capacity. Also, you must assign the function to a SecurityGroup that is allowed to access your EC2 instance via SSH. As a minimal configuration, the Lambda's SecurityGroup should allow outbound connections towards the EC2 instance's security group on port 22, and the EC2 instance's security group must allow incoming connections from the Lambda's security group on port 22. Last, but not least, ensure your routing configuration allows the EC2 instance to be reached from the subnets you selected to host the Lambda function.
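As a reference for the security group side, the SSH ingress rule can also be added with the AWS SDK; the two group IDs below are hypothetical placeholders for the EC2 instance's and the Lambda's security groups:

var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({ region: 'us-east-1' });

// allow SSH (port 22) into the EC2 instance's security group from the Lambda's security group
ec2.authorizeSecurityGroupIngress({
    GroupId: 'sg-ec2instance',          // hypothetical: the EC2 instance's security group
    IpPermissions: [{
        IpProtocol: 'tcp',
        FromPort: 22,
        ToPort: 22,
        UserIdGroupPairs: [{ GroupId: 'sg-lambdafunction' }]   // hypothetical: the Lambda's security group
    }]
}, function(err, data) {
    if (err) console.log(err); else console.log('SSH ingress rule added');
});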


Ok, that's it. I hope you found this case useful! See you next time.
Ste
