Last time I was running a bare Lambda function. It was fun, but no persistence was involved. Now, if we are going to track real expenses, we have to persist data somewhere. One solution built for exactly that is a DynamoDB table.
The following steps will help you repeat the process of creating the table. For the sake of simplicity I decided to create a single table called operation.
The only constant you need to provide up front is the future ID (primary key) of the table; I have chosen what follows.
Since DynamoDB is a NoSQL database, all other fields can be added at runtime.
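If you prefer to script the table creation instead of clicking through the console, the AWS CLI can do it. A minimal sketch, assuming OperationId (a string) is the primary key and on-demand billing is acceptable:

```shell
# Create the 'operation' table with OperationId as the partition key.
# PAY_PER_REQUEST avoids having to guess read/write capacity up front.
aws dynamodb create-table \
    --table-name operation \
    --attribute-definitions AttributeName=OperationId,AttributeType=S \
    --key-schema AttributeName=OperationId,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST
```

Only the key attribute has to be declared here; any other fields are simply included in the items you write later.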
Lambda can write to DynamoDB
The main goal is to give the Lambda function the ability to read and write "rows" (items) in the table. In Python this is as easy as using the boto3 library, which provides the I/O operations. Below is an example of writing an item:
import uuid
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('operation')

# new_record holds the expense fields collected by the function
new_record['OperationId'] = str(uuid.uuid4())[0:6]  # short random ID
table.put_item(Item=new_record)
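The snippet above truncates a random UUID to six characters to get a short, human-friendly ID. A minimal sketch of that ID generation in isolation (the six-character length is just this post's choice, not a requirement):

```python
import uuid

def short_operation_id(length: int = 6) -> str:
    """Return the first `length` characters of a random UUID4 string.

    The first 8 characters of a UUID4 string are hex digits, so any
    length <= 8 yields a purely hexadecimal ID like 'a3f9c1'.
    """
    return str(uuid.uuid4())[:length]

op_id = short_operation_id()
```

Note that truncating the UUID sacrifices uniqueness guarantees: with 16^6 (about 16.7 million) possible values, collisions become likely after a few thousand records, which is fine for a personal expense tracker but not for larger datasets.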
Because AWS is such a security-aware environment, don't forget that our function is not yet authorized to modify the real table. If we run it now, we will receive an access-denied error.
What we need to do is assign the correct policies to the role (or user) invoking the function. Here is what I added: the built-in AmazonDynamoDBFullAccess managed policy is enough to grant your role/user full access.
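Attaching the managed policy can also be done from the CLI. A sketch, where the role name is an assumption (use your Lambda's actual execution role):

```shell
# Attach the AWS-managed DynamoDB full-access policy to the
# Lambda execution role (replace my-lambda-role with yours).
aws iam attach-role-policy \
    --role-name my-lambda-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess
```

For production use, a custom policy scoped to dynamodb:PutItem and dynamodb:GetItem on the single table would be a tighter fit than full access.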
Let us run all of this machinery. Every time I change my code, I help myself with a couple of scripts I have prepared.
redeploy.sh moves the code to AWS.
invoke_lambda.sh runs the function. After invoking it with parameters, I can see new entries in the DynamoDB table.
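The scripts themselves are not shown here; a minimal sketch of what such helpers might contain, where the function name, file names, and sample payload are all assumptions:

```shell
# redeploy.sh -- zip the source and push it to the existing function
# (function and file names below are hypothetical)
zip -r function.zip lambda_function.py
aws lambda update-function-code \
    --function-name my-expense-tracker \
    --zip-file fileb://function.zip

# invoke_lambda.sh -- run the function with a sample expense payload
aws lambda invoke \
    --function-name my-expense-tracker \
    --cli-binary-format raw-in-base64-out \
    --payload '{"description": "coffee", "amount": 12}' \
    response.json
cat response.json
```

Keeping these two commands in scripts makes the edit/deploy/invoke loop a matter of seconds rather than console clicks.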
Lambda, together with Python and the boto3 library, lets me deal easily with AWS persistence: DynamoDB!