AWS Serverless with AWS Lambda, API Gateway & EventBridge
source: https://www.youtube.com/watch?v=5rG-YgTHMC8
What is Serverless
Hello, and welcome to AWS Serverless for Beginners. In this course, you're going to learn all about serverless services on AWS. Now, serverless basically means that the compute services, the application integration services, database services, and a whole series of different functionality is delivered to you without you ever having to worry about managing underlying servers, like Amazon EC2 instances, operating systems, patching — all those things we used to have to do. So, serverless has become extremely popular. Most of the big companies in the world, and many of the fastest-growing startups, are leveraging serverless because it has a very low cost base, it's very fast to implement, and it enables rapid innovation at scale with minimal cost. So, in this course, I want to introduce you to some of the core services in the serverless category on AWS. We're going to look at compute services like AWS Lambda. I'll cover some of the core application integration services like Amazon SQS — the Simple Queue Service, the Simple Notification Service (SNS), and Amazon EventBridge. These services enable you to build distributed, decoupled applications and bind them together with a variety of different functions for storing information or sending information between components of your application. I'll also cover Amazon API Gateway and show you how to build REST APIs in the cloud. Now, it's going to be very practical. I'm going to show you how to actually deploy AWS serverless services. I'm going to provide you all the code you need, and I'm going to show you so that you can actually follow along and start building on AWS. The great thing about serverless is we can get going super fast. We can start building and deploying applications with great functionality in a really short time frame. So, I think you'll have a lot of fun in this course, and I hope it will be very educational for you and get you excited about deploying serverless applications on AWS.
Serverless Services

Hey guys, in this lesson I'm going to cover serverless services and an architectural pattern known as event-driven architecture. With serverless, we don't need to manage instances, so we've moved away from the model where we need to launch EC2 instances and then manage them. And now, we can spend more time on innovation. You don't need to provision hardware — of course, that's all done for you. AWS are managing the underlying infrastructure, and there's no management of operating systems or software. So, with serverless, both the operating system and the application are deployed for you, and you're not worrying about things like the scalability of the underlying infrastructure or high availability. All of that is taken care of for you. Capacity provisioning and patching is also handled automatically. All of that becomes abstracted away, so you just need to bring your code or deploy the service and configure it in some way. Automatic scaling is provided for you with high availability, and it can be very cheap as well. Services like Lambda are extremely cost-effective to use.

So, let's have a look at serverless with event-driven architecture. That's a pattern where we can utilize services in such a way that an event in one is going to trigger an action in another service.

Here, we have a static website, and a user is accessing that static website running on S3 (Simple Storage Service) and performing some kind of action. In this case, the user is uploading a file through the static website, so the file ends up in an S3 bucket. Using a bucket configuration known as an event notification, we can configure it such that when an object is uploaded, S3 is automatically going to notify a Lambda function. The Lambda function is then able to take that file that's been uploaded to the S3 bucket and process it in some way. So here, the serverless function is processing the file and then maybe it's storing the results in another S3 bucket. Also, that Lambda function may send a message to an SQS queue. Remember, queues are used for decoupling. So now we have a message in a queue, and that message can be picked up by another service. That can happen in a couple of ways: one is that a service — a function or an instance, for example — can poll the queue. Or, through event-driven architecture, the queue can trigger the function to say, "A message has been received, and I know that I need to send it to a particular Lambda function." That function might then process the information in that message and store the results in a DynamoDB table. So here, we have an event-driven architecture. The Lambda function might also use an SNS topic to send a notification via email. All of this happens automatically. The user simply uploaded an object to a website. Everything else happened automatically — it was event-driven. So, an event that happened in one service triggered an action in another service, and here we have a whole chain of those actions taking place.
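To make that chain a little more concrete, here is a minimal sketch (not taken from the course files) of what the first Lambda function in such a chain might look like: it reads the uploaded object's details from the S3 event notification and forwards them to an SQS queue. The queue URL and message fields are placeholders for illustration.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URL, for illustration only
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/uploads-queue"

def lambda_handler(event, context):
    # S3 event notifications deliver one or more records describing the upload
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Forward the details to a queue so a downstream function can process them
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"bucket": bucket, "key": key}),
        )

    return {"statusCode": 200, "body": "event forwarded"}
```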
AWS Lambda

Okay, so in this lesson, we're going to cover AWS Lambda.
Lambda is perhaps the most well-known serverless service. With Lambda, we have functions. A Lambda function runs some code in response to some kind of trigger.

Here, we have a developer. That developer has packaged up some code in a zip file. We can use a zip archive for the deployment package (container images are also supported). The code is then uploaded to the function. In this case, it's a simple Python script that runs through and logs an event to CloudWatch Logs. So, we've got some Python code. Now, the Python code has been added to the function. At this point in time, nothing is happening, and we're not spending any money. We've created a function, but it's not actually running, so we're not paying anything. Then, Lambda functions are invoked based on events, and then the code is executed. So, in this case, some kind of event, some kind of trigger, is causing the Lambda function to execute the code. Pricing is based on the memory assigned and the duration of the function execution. So, depending on the amount of memory you assign to your function, it allocates a proportional amount of compute power — of CPU — and you pay based on the amount of memory you assigned and how long the execution of the function lasted. In this case, this function is simply writing an event to CloudWatch Logs.
Lambda supports many different languages, such as Java, Go, PowerShell, Node.js, C#, Python, and Ruby. The execution role is an IAM role that provides the function with permissions to access other AWS services and resources. So, for example, in the previous diagram, the Python code in the function was writing an event to CloudWatch Logs. That's only going to be possible if the function has permissions to CloudWatch Logs. So, it's really important to make sure that the execution role for your function has the relevant permissions it needs. For example, if you want to write a message to a queue, or you want to add an item to a DynamoDB table — all of that is going to require permissions that are assigned through the execution role. For monitoring and logging, AWS Lambda integrates with CloudWatch, so you can see performance metrics as well as logs. For memory and timeout, you can specify the amount of memory allocated to a function — which, as I mentioned before, dictates the price — and the maximum execution time. The maximum execution time is 15 minutes for a function, so your code might run for just a few seconds, or it can run right up to 15 minutes.
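As a rough illustration of those settings, here's a hedged sketch of creating a function programmatically with boto3, specifying the runtime, handler, execution role, memory, and timeout. The function name, role ARN, and zip file are placeholders; in the labs that follow we'll do all of this through the console instead.

```python
import boto3

lambda_client = boto3.client("lambda")

# Read a deployment package: a zip archive containing handler.py
with open("function.zip", "rb") as f:
    zip_bytes = f.read()

# All names and ARNs below are placeholders for illustration
response = lambda_client.create_function(
    FunctionName="writeToCloudWatch",
    Runtime="python3.9",
    Role="arn:aws:iam::123456789012:role/lambda-basic-execution",  # execution role
    Handler="handler.lambda_handler",   # file name . function name
    Code={"ZipFile": zip_bytes},
    MemorySize=128,                      # memory also determines the CPU share and the price
    Timeout=15,                          # seconds; the maximum allowed is 900 (15 minutes)
)
print(response["FunctionArn"])
```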
Lambda Function Invocation

Let's have a look at function invocation. The function runs in response to events from various AWS services or direct invocation from the AWS SDKs or the API. Functions can be invoked either synchronously or asynchronously. With synchronous invocation, applications actually wait for the function to process the event and return a response, so you see a success or failure response. With asynchronous invocation, Lambda accepts the event for processing and returns a response immediately, so the caller doesn't really know the outcome of that invocation. Lambda scales horizontally by running multiple instances of a function in parallel, up to the concurrency limit. So here, if a function is invoked, it's going to execute. We can see that it's already got two invocations that have come in and are running in parallel to each other, and then more can occur as well. So, here we have lots of different invocations of the function running in parallel to each other, and we can run many of these right up to the concurrency limit.
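A quick sketch of the difference using boto3 (the function name is a placeholder): the InvocationType parameter selects synchronous or asynchronous invocation.

```python
import json
import boto3

lambda_client = boto3.client("lambda")
payload = json.dumps({"message": "hello"}).encode("utf-8")

# Synchronous invocation: the call waits for the function and returns its result
sync = lambda_client.invoke(
    FunctionName="writeToCloudWatch",    # placeholder name
    InvocationType="RequestResponse",
    Payload=payload,
)
print(sync["StatusCode"], sync["Payload"].read())

# Asynchronous invocation: Lambda queues the event and returns immediately
async_resp = lambda_client.invoke(
    FunctionName="writeToCloudWatch",
    InvocationType="Event",
    Payload=payload,
)
print(async_resp["StatusCode"])  # 202 Accepted; the outcome is not returned here
```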

If we want our Lambda function to access resources within a VPC, we can connect the Lambda function to the VPC. Remember that functions are regional, and they don't have access to a VPC by default. So, what we need to do is connect to the VPC, and what will happen here is we'll define the subnets we want to connect to, and Lambda is going to create an Elastic Network Interface (ENI) in those subnets. Now, we do of course need the function to have the relevant EC2 permissions to create the ENIs, so those have to be in the function's execution role. You must select the VPC, the subnets, and the security group that those ENIs are going to utilize. The function role needs to have the relevant permissions, as I mentioned before, and a NAT Gateway is required for internet connectivity. So, maybe your Lambda function needs to access applications in the private subnet, but maybe it also needs to reach out to the internet, call an API, download something, or perform some other operation. In that case, you need a NAT Gateway for the function to be able to route out of the VPC and back out to the internet. So here, we've deployed a NAT Gateway in a public subnet, and now our Lambda function is able to access the internet via the NAT Gateway and the internet gateway.
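For reference, connecting an existing function to a VPC can also be done programmatically; a minimal sketch with boto3 is below, with placeholder subnet and security group IDs.

```python
import boto3

lambda_client = boto3.client("lambda")

# Subnet and security group IDs are placeholders; pick private subnets in your VPC
lambda_client.update_function_configuration(
    FunctionName="writeToCloudWatch",
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)
# The execution role must allow ec2:CreateNetworkInterface, ec2:DescribeNetworkInterfaces,
# and ec2:DeleteNetworkInterface (the AWSLambdaVPCAccessExecutionRole managed policy covers this).
```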
Create an AWS Lambda Function
Hey guys, in this lab, we're going to work with the serverless service AWS Lambda.
We're going to create a function, and we're going to see how we can trigger executions of that function in a couple of different ways. In the course download, you will find the AWS Lambda folder and this working-with-lambda.md file, so make sure you've got this code ready.
What we're going to do first is we're going to go and create a Lambda function that logs a message to Amazon CloudWatch Logs, and we have some Python code here. So, what you'll want to do is copy all of this Python code from line 7 to 24. Now what this code does is it simply writes an event to CloudWatch Logs with a message, and so we can write a customized message, and it's going to appear in CloudWatch Logs in one of the log files.
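If you don't have the course download handy, the function is along these lines. This is only a sketch of the idea (the actual file in the download may differ slightly): it prints a message taken from the incoming event, which lands in CloudWatch Logs, and returns a confirmation.

```python
import json

def lambda_handler(event, context):
    # Anything printed (or logged) by a Lambda function ends up in CloudWatch Logs
    message = event.get("message", "no message supplied")
    print(f"Received message: {message}")

    return {
        "statusCode": 200,
        "body": json.dumps("message logged successfully"),
    }
```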
So, back in the console, I'm simply going to search for Lambda, and there it is
— Lambda, run code without thinking about servers. Great little tagline. So,
let's create a function in Lambda, and here what we can do is we can author
from scratch, choose a blueprint, or you can even use a container image. Well,
we're going to author from scratch, and I'm going to give this one a name, and
I'm going to call this writeToCloudWatch. Next, we need to choose the
runtime. We've got several different options, as you can see. So, what we're
going to choose is the latest Python runtime. The latest version appears at the top of the list; it may be newer by the time you're watching this, but that shouldn't matter with this particular code sample. So, I'm just going to choose one of these
runtimes. Next, you've got the option for the architecture. We leave that on
x86_64. Permissions are really important to understand with Lambda. If you want
your Lambda function to be able to interact with any other AWS services, you
will need to supply permissions. Now, by default, it's going to create a new
role with basic Lambda permissions, and those permissions will provide access
to CloudWatch Logs. So, I can just leave it as it is. Alternatively, you can
use an existing role or choose from various policy templates, which you can
find down here. So, I'll just leave it on the default to create a new role with
basic Lambda permissions. We then have advanced settings. There's a few options
here — we don't need to change any of those for now, so we'll just leave those
selected or deselected at the defaults, and let's create the function. Once the
function's created, you're going to end up on this page here. What we can see
is a function overview. We can see our triggers, if there are any. We can see
destinations — we're going to go over these in just a moment. Down a little bit
lower, we can see the code editor, and it's just written some default Lambda
code in here — the "hello from Lambda." What I want to do is delete that, paste
in the code which we got from the document in Visual Studio Code, and then
click on Deploy. So, that's going to deploy my update. Now, along the top,
here's a couple of really important places. One is Configuration. If we click
on Configuration, we can see there's lots of options here. For example, the
default timeout for our function is 3 seconds. We can increase that if we wish
to. We've got memory here — 128 megabytes by default. What I want to show you
is down in Permissions. So, Permissions — you can see there's a role here, and
that role has CloudWatch Logs permissions by default. If you need to add
permissions to another service, you can always go ahead and edit the role.
Also, on the Monitor tab — this is where we can click on CloudWatch Logs once
we've executed our function. I'm going to use the test utility here, and what
we're going to do is we're going to create a test event.
So, I'm going to copy this JSON, and back in the event JSON here, let's paste this in. Just call this "my event," and then I'm going to save and then run a test event. What we see at the top here is a nice green banner which is saying that it succeeded, and we've got a response here which says "message logged successfully." A bit lower, we can see some more log output, and there's some details here in the summary as well — how long it took to run, because you get charged per millisecond based on this, and which function version was used. $LATEST is the editable version of the code: the version we entered our code into a few moments ago. You can then publish different versions as you make changes as well, and we can see the log output here. This should be in CloudWatch Logs, so you can click here, or remember, you can come up to the Monitor tab and then click on View CloudWatch Logs. So now we can see this log stream. I'm going to click on the log stream, open it up, and we can see our message: "hello CloudWatch." And there are a few events related to that individual execution, telling us when it started, when it finished, we've got a report, and then we have the information in the middle there. So that all worked nicely.
If I come back to the file here, what we want to do next is see how we can do the same thing from the command line interface. What we're going to do is use this test data. We need to create a file called payload.json, and then we're going to run this command to invoke our function. So first, I'm going to copy this JSON data. Let's open AWS CloudShell, and I'm going to create a file called payload.json, paste this data in, and then simply save that file.
So now I have my file. Next, we want to run this command, but we need to update the function name in the command to match the name you gave your function earlier. Then you'll want to copy the entire command, from where it says aws all the way through to the end. Now, the --cli-binary-format raw-in-base64-out flag tells the CLI how to handle the payload (raw JSON in, base64-encoded out), which is what we need here, so it's not a typo — don't edit it out. Paste the command in, and let's try and invoke our function. Now, we have a status code 200 — that is a success code — and it does say that $LATEST was executed. That's the latest version of our function. So what I want to do is come back to CloudWatch here, and let's just see if it's created a new log stream. No, it's the same log stream. So let's just expand, and what we should see now is an additional event where it says "hello from CLI." So that's worked — we've now executed the function both using a test event in the console and using the command line interface with the aws lambda invoke command.
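Equivalently, you can invoke the function from Python with boto3 instead of the CLI; this sketch assumes the same payload.json file and uses a placeholder function name.

```python
import boto3

lambda_client = boto3.client("lambda")

# The same JSON payload we saved as payload.json, e.g. {"message": "hello from CLI"}
with open("payload.json", "rb") as f:
    payload = f.read()

response = lambda_client.invoke(
    FunctionName="writeToCloudWatch",       # use the name you gave your function
    InvocationType="RequestResponse",        # wait for the result
    Payload=payload,
)

print(response["StatusCode"])                # 200 on success
print(response["Payload"].read().decode())   # the function's return value
```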
Next, I'll show you how we can trigger the function using an event notification from Amazon S3. So what we're going to do now is upload objects — files — into an S3 bucket. And when we do that, an event notification in S3 is going to let Lambda know that something's happened, and Lambda is going to pick up the name of the file and the associated metadata, and it's going to put that into CloudWatch Logs. So we have a slightly updated version of our code here. What we want to do is copy all of this code, and back in Lambda, let's go over to the Code page, delete all of this code, paste the new code in, and then deploy. And so now we've updated our code.

Now, we will need permissions to Amazon S3. So I'm going to go over to Configuration > Permissions, click on the role name, and for the permissions policies here, all we have is basically access to CloudWatch Logs. So what we're going to do is attach a policy. If I type S3, I need read-only access, and then let's add the permissions. Okay, so now the function has permissions to read from Amazon S3.

So what we can do — we've got the code deployed — let's add a trigger. If we add a trigger and choose S3 as the source, we now need a bucket. So, make sure you've got a bucket. Let's have a look at what's in my buckets. If I head over to S3, I've got a few from previous exercises. Any one will do, maybe this one, "my-bucket." Okay, that's got a couple of documents in it. It doesn't matter if there are things in there already, and it doesn't have to be publicly accessible. You could just create a new bucket. In fact, let's do that so everyone's doing exactly the same thing. So, this is my event notification test. Bit of a long name to make it unique, and we don't need to make any changes whatsoever. So, we'll create the bucket, and then we can come back to triggers here. I'll give it a refresh so it picks up my new bucket. Next, we're going to leave the default event types here because this is what we want: object create events. I don't need to set any other settings here. I'm just going to acknowledge this warning and click on Add. So, now we can see that we have the trigger. If we go back to the bucket, click on the bucket, go to Properties, and scroll down a little way, we'll see that an event notification has been created. So, you can create this configuration through the S3 console as well.

So, now what should happen is if we upload objects to the bucket, it should execute the code automatically via the trigger, and then we should see an event in CloudWatch Logs that includes the name of the file we uploaded. So, I'm just going to go and upload a file. Let's choose one of these ones, "my-pineapple," for example, and click on Upload. I'm going to close. Let's do the same again. Add a file. This time, I'm going to add my "papaya" file. You can add anything you want; it doesn't matter what type of document. So, what we need to do now is go and check if this worked correctly. I'm back in Lambda. Let's go to Monitor. In fact, I've already got CloudWatch Logs open, but don't forget, you can always come and find it from the Monitor tab. Now, in CloudWatch Logs, I'm going to go back up a level because I suspected it would create a new log stream, which it has. So, if I click on this log stream and then expand, we can see there's a whole bunch of metadata associated with the file and the event. It's an object-created PUT API action. We've got a source IP address. We've got information, including the bucket name here and the key.
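The updated code is roughly along these lines (a sketch only; the actual file in the download may differ): it walks the records in the S3 event notification and prints the bucket name and object key.

```python
def lambda_handler(event, context):
    # An S3 event notification can contain several records, one per object
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object uploaded: s3://{bucket}/{key}")

    return {"statusCode": 200, "body": "event logged"}
```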
The key is, of course, the file name, and now our code specifically prints out the file name as well, so we can see that right here. And you can see a couple of different entries where two different objects were uploaded to Amazon S3. So, now we're not just manually triggering the function through the CLI or a test event; we're also having it automatically triggered through an event notification as well. So, that's it for this lab. Feel free to clean up those resources. You can simply delete your function and your bucket, but you won't be paying for Lambda functions when they're not being used, and you're not going to go through the free tier with a few test events like this anyway. And S3 has quite a good storage limit for the free tier as well.

Application Integration Services

Welcome back! In this lesson, I'm going to cover application integration services. As the name suggests, these are a set of services that are used for integrating different components of our applications, whether they're server-based or serverless. So, let's have a look at an event-driven architecture where we utilize some application integration services. Here, we have an e-commerce website. The user places an order through the website. The website, in this case, is running on EC2. Now, this event — the order being placed on the website — causes a couple of things to happen. Firstly, EventBridge is triggered. So, a rule in EventBridge is going to route some information through to the Simple Notification Service to notify the customer that the order has been submitted. So, here, SNS is sending the email out. Meanwhile, the information about the order is placed into an SQS queue. So, the Simple Queue Service is being used here. A message goes into the queue with some information about the order. A couple of things can happen here. Either the act of the message being placed in the queue can trigger Lambda, or you can configure Lambda to constantly poll the queue and check for new messages. In this case, the SQS queue is configured to trigger Lambda, and Lambda will then store some results of the order in a DynamoDB table. So, that's an event-driven architecture where we're connecting things together with application integration services, in this case, EventBridge, SNS, and SQS. Once the order has been completely fulfilled, Lambda might also use SNS to send another email to the customer notifying them of the order fulfillment.

So, let's get started with a bit more detail about SQS. Here, we have a direct integration between two components of an application. Now, this is fine for some use cases, but there is a problem with this, and the problem is that we have these two auto-scaling components of the application. What if one of them can't keep up with the workload? So, maybe the web tier is receiving a large number of orders, and the auto-scaling group on the back end, the app tier, needs to scale. That takes some minutes. During that process, there could be some orders that get lost if you have this sudden spike in activity. That's a problem. So, what we can do instead is put an SQS queue in the middle, as in the sketch below. Now, the EC2 instance is polling the queue. So, that's the app tier checking for messages that have been placed by the web tier. Now, it doesn't matter if the web tier suddenly gets a huge spike in orders; that information just gets put into the queue.
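Here's a small sketch of that decoupled pattern with boto3 (queue URL and message fields are placeholders): the web tier drops each order onto the queue and returns immediately, while the app tier polls for messages and deletes each one after processing it.

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue"  # placeholder

# Web tier: drop the order onto the queue and return immediately
def submit_order(order):
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(order))

# App tier: poll the queue, process what's there, then delete each message
def process_orders():
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=10,   # long polling
    )
    for message in response.get("Messages", []):
        order = json.loads(message["Body"])
        print("Processing order:", order)
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```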
With the queue in place, the app tier can process those orders when it's ready to, and if that requires some scaling and a bit more time, that's better than losing those orders completely. So, here we have what we would call a decoupled integration.

Let's have a look at SNS. This is the Simple Notification Service. The difference is, SNS is more of a push-based service. It's a publisher-subscriber model. We create something called a topic. We then have subscribers. So, for example, for an email, it could be something like a newsletter. You could have one or thousands of people subscribing to your newsletter, and when you send it via the topic, that email goes out to all of those different subscribers. But likewise, it could also be Lambda, could be SQS, could be some other web application. There's a variety of different targets who can be subscribers for a topic. So, the information is then sent to those subscribers every time a message gets placed into the topic. Again, we can pass information from one component of an application to another, and it supports various transport protocols like HTTP, email (and email-JSON), and SMS, to name just a few. So, this is another example of decoupling. This is more of a publisher-subscriber model, and it's more push-based. SQS is more pull-based. So, typically, instances or Lambda functions are polling the queue and checking for messages. And with SQS, it's more one-to-one. One function or one instance is processing a message, and that's done. With SNS, one message goes out to many subscribers.

EventBridge supports many different event sources. They can be AWS services, custom applications, or SaaS applications. Some event happens in those sources, and that information is then sent through to EventBridge. The event bus, based on rules that we create, is then going to forward that information to various different targets. They can be, for example, Lambda functions, Kinesis data streams, SNS topics, and so on. So, there we have targets. EventBridge is a really useful tool. There's so much we can do with EventBridge, and it has lots of built-in integrations for AWS services and rules that we can create very easily through a graphical wizard.

So, let's just compare some of these application integration services. SQS — I've just gone into a fair amount of detail there. That's a message queue, so it's what we call a store-and-forward pattern, and it's used for building distributed and decoupled applications. SNS is a notification service. So, this is pushing notifications like emails, or just information, from one application component to another, but the endpoints are the subscribers, and there can be many of them. Step Functions is useful for coordinating and orchestrating workflows. So, for example, you can pull together lots of different Lambda functions into an orchestrated workflow with some logic about how those functions are executed. Amazon MQ is similar to SQS in that it is a message bus, but it supports the open-source Apache ActiveMQ and RabbitMQ engines. So, if you're using those industry-standard APIs and protocols — many companies use these on-premises — then migrating into the cloud is easier if you want to keep using those same tools for your message buses. That's where Amazon MQ comes in. And then lastly, EventBridge — a serverless event bus for connecting lots of different application components together. Now, these are some of the core application integration services. There are others, of course, on AWS.

Build a Serverless Application with a REST API (Part 1)
Hey guys, welcome to this Hands-On lesson. In this lesson, we're going to start building a serverless application. This serverless application is going to have a REST API, and that's going to be part of the web front end so that we can actually access our application from the internet. Now, this is part one of two parts, because we're going to break it up as we need to go and learn a few more concepts before we finish this lab off. So, basically, what we're going to end up with when it's complete is a static website, and we can submit orders through our browser. So, the static website is a simple order submission. Imagine it's an e-commerce application. That order will then be forwarded through to a REST API running on Amazon API Gateway. The API will proxy the request to a Lambda function. That means it just forwards it through without actually modifying it in any way. The Lambda function will then take that order information that's been submitted and place it in an SQS queue, so that a different function—and that function is the processing function—can process the message and place it in a DynamoDB table. So now, the order information ends up in the DynamoDB table. So here, we've separated with a queue, so of course, we're decoupling. That means if the processing, for example, takes a lot longer, then it doesn't impact the orders coming in and getting placed into the queue. So the final result is we have a record of the order added to the table, and we'll be able to see that. Now, we haven't learned API Gateway yet, so what I want to do is just show you the pieces that you have learned. So, we're going to build this part of the application, and rather than have the API front end, we're actually going to submit a request via the CLI and via the Lambda console as well, so that we still end up with a message added to the table. The whole application works except the web front end, and the API will be added on later on. So, we're simply going to be submitting our test events via the console and invoking our Lambda function via the CLI. Okay, let's go and build this out. In the course download, in the serverless app directory, you'll find a few files: the serverless app instructions markdown file, the index.html, and then two Python files, which are the submit order function code and the process order function code. So, we're going to use the instructions here. The first thing we need to do is create a Lambda function. We're going to call it submit order function. We're going to use the Python 3.9 runtime, and we're going to add the code from the submit order function.py file. We also need to provide SQS full access permissions to the execution role. So, I'm going to head over to AWS Lambda, let's create a new function, provide the name submit order function, and it is important that you follow the naming. And I'm going to choose Python 3.9 and create that function. Now that the function is created, let's go and grab our code. So, we're going to go to the submit order function.py file, and I'm going to select all and copy this function code. Now, what we have here, we've got some Python code. We can see that on line five, we have the SQS URL. So, once we create the queue, we're going to come back and edit this here to add in the URL of our SQS queue. What this is basically going to do is this function is going to receive some order information in a specific format, and then it's going to place that information into the queue. 
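As a rough idea of what that submit order code does (this is a sketch, not the exact file from the course download), the handler takes the order JSON and sends it to the queue; the queue URL placeholder is what we'll replace in a moment.

```python
import json
import boto3

sqs = boto3.client("sqs")
SQS_QUEUE_URL = "YOUR_SQS_QUEUE_URL"  # replaced with the real queue URL later in the lab

def lambda_handler(event, context):
    # A test event is the order itself, e.g. {"product": "test product", "quantity": 1};
    # via API Gateway proxy integration the order would instead arrive in event["body"]
    order = json.loads(event["body"]) if "body" in event else event

    # Place the order on the queue for the processing function to pick up
    sqs.send_message(QueueUrl=SQS_QUEUE_URL, MessageBody=json.dumps(order))

    return {
        "statusCode": 200,
        "body": json.dumps("Order submitted to queue successfully"),
    }
```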
So, back in my function, I'm going to delete all of this code and then paste in my code. I also need to go ahead—now, I haven't deployed yet, by the way, because we're going to add our queue URL in a moment. So, let's go to Configuration, Permissions, and I'm going to click on the execution role name here. That will open up in the IAM console. In here, I can click on Add Permissions, attach policies, and type SQS. We're going to provide SQS full access and then add those permissions. So, now the function has the relevant permissions. Let's go ahead and create our queue. That's going to be the next step. So, I'm going to open the Simple Queue Service. Back in the instructions file, we do have a name for the queue here: product orders queue, and we're going to copy this into line five of our submit order function code. That's all we need to do with the queue in SQS. Let's go and create our queue. We're going to use Standard, we're going to supply the name for the queue, and that's it. We don't need to change any other settings. All the defaults are fine for this particular use case. So, once you've clicked on Create, you'll then see that we have the URL here. I'll copy the URL, come back to my submit order function, head across to the Code tab, and then here, we're going to just paste in this URL and then deploy the function code. So now we can actually test that this works. What I want to do before I have the next function processing messages straight from the queue, I want to show you that the message ends up in the queue. So, what we're going to do is we're going to submit a test event with this data. You can change the test product to something else if you like. So, I'm just going to copy this JSON code and back in my function here, click on Test, create a new test event—Test Order—and then I'm going to paste in my JSON code. And then all I need to do is save and then run test. Now, it looks like it's done the job. We can see here, we get some responses, and it says, "Order submitted to queue successfully." Now, let's head across to the queue, and I'm going to click on Send and Receive Messages. And in here, we can click on Poll for messages. You can see straight away it found a message. If I click on the message, we can actually see the body. Okay, so there might be some more information on the details there, but this is the information. And our second Lambda function is going to pick up information in this format and place it into a DynamoDB table, and this message will be processed as soon as we set that piece up. So, let's go back to our instructions and see what we need to do next. We've run our successful test, that all worked. Now, we're going to create our processing function. We're going to provide this name, it's going to be the 3.9 Python runtime again, and we're going to use the code from the process order function file. This time, we need both SQS and DynamoDB access. So, coming back to Lambda, let's head back up to Functions, Create a function. I'm going to supply the name, make sure that I choose the correct runtime—Python 3.9—and then create the function. While that's creating, I'll come back, go to my process order function here, and I'm going to copy all of this information, noting that I need to update my DynamoDB table once I've created it. Back in Lambda, I can remove all this code from the Lambda function here, add the new code in, and then let's go across to Configuration, Permissions, click on the role name, and we're going to add the permissions we need. 
So, what we need is SQS, because we need to pick up the message from the queue, and DynamoDB, because this function is going to place the information into the DynamoDB table. So, let's click on Add permissions. We now have our policies. Next, we come back to the instructions, and we're just going to check what the table information is going to be. So here we have "ProductOrders" and "orderId." This is the name of the table, and this is the primary key. Make sure you get the capitalization correct, because it is very important. So, I'm just going to copy-paste. We're going to come back to the console and search for DynamoDB. It's very simple to create a table in DynamoDB. We simply click on the Create table button, paste in the table name, copy the primary key, paste in the primary key, and again, we can leave all of the defaults here. Create the table, and very quickly, the table should be ready for us. So there it's creating, and within a few seconds, it's become active. Okay, so now we have our DynamoDB table ready. We do need to copy the name of the table here, and what I want to do is come back to my function — the process order function — click on Code, go to where it says "your DynamoDB table," paste in the name, and then deploy.

Now, at this point, we have all the high-level components: we have the functions, we have the DynamoDB table, we have the queue. However, the only thing we haven't done yet is integrate the queue with the function. What we want is for the queue to trigger the Lambda function. So, back in SQS, I'm going to click on the queue to come back up to this level, choose Lambda triggers, and we're going to configure a Lambda function trigger. We're going to choose a function, and it's going to be the process order function, and then click on Save. Okay, so that's going to create. It says about 1 minute; it's usually a lot faster than that. Once that's ready, we have the integration, so that every time a message goes into the queue, it's going to automatically trigger the function. The function doesn't even need to poll the queue itself; behind the scenes, Lambda's event source mapping polls the queue on the function's behalf and invokes it with the messages it finds. So now the function will automatically process the message from the queue. So, there we go, that's enabled. Let's see if it actually took the information that was already placed into the queue, and it did. So there it is — my test product has actually been added to the table.

We can also go ahead and test using the invoke function command from the command line as well. So, what we're going to do here is run a second test, and this one is going to be using the CLI in CloudShell. We're going to create a file called input.json, and it's going to have test product 2 here, quantity 2. So, I'm going to open up the CloudShell console. I'm going to use Nano to create a file called input.json, paste in my input there, and then we're going to come back, and what we need to do is run this invoke command. And of course, I need my function name, so I need to invoke the first function — that is, the submit order function. So, let's copy the function name, paste this in, and then we're going to invoke the function. The payload is read from the file (fileb://input.json), and the output file at the end of the command records the result.
So, let's just copy the entire command, and back here in CloudShell, I'll run this command to invoke the function, and we get a status 200, which is always good news. Come back, refresh, and now we have the second product added to our e-commerce table. So, at this point, we have the submit order function, we have the queue, we have the process order function, and the DynamoDB table. And we've submitted events from the CloudShell CLI as well as test events through the Lambda function console, and those have both ended up in the table. So, that's it for this lesson. When we pick it up again for part two, we're going to add the static website front end and the API Gateway so that we can publish this out onto the internet.

Amazon API Gateway

Hey guys, in this lesson, I'm going to cover Amazon API Gateway. API Gateway is a really useful service, and we've learned a bit about APIs already. Of course, they're a fairly complex topic for many new users, but I'm going to try and help you to understand API Gateway by showing you a few diagrams. And of course, we'll do a hands-on lesson as well.

So, let's have a look at an example of what we call a REST API with Amazon API Gateway. Imagine that in a company, you have built some kind of application that is based on microservices, and so we have multiple components of this application. Here we can see we've got a Lambda function for the booking service, we've got a Lambda function for the payment service, and we've got ECS containers for the account service. So, we've got these three different microservices. On the left-hand side, we have a mobile application. Users on the internet access it via their phones, and that mobile application needs to do things like view existing bookings for the customer, submit new bookings, use the payment service to accept the credit card and pay for the service, and use the account service, where they might be able to look at previous orders, get some help, things like that. So, we've got these different microservices. Now, how does the application running on a mobile phone, connected via the internet, connect to these different microservices? Well, what we want is a single endpoint. We want a public URL that we can direct the traffic to — that's Amazon API Gateway. It provides the entry point, but it also knows how to forward the requests to the various different microservices depending on what the request is, looking at things like the URL path and the HTTP method: is it a GET? A PUT? A POST? And so on. So, it's a REST API over HTTPS, a secure connection over the internet, and the mobile application can be coded to speak to the API endpoint. The API endpoint can then proxy those requests, or the actual information in the requests, through to the various different microservices.

So, how is it structured? In API Gateway, we have a graphical interface, which is nice and useful for creating our configuration, and it looks something like this. Here, we have something called a resource. I've highlighted in the orange boxes what the resources are. Essentially, these are a path in the API. So, if you imagine that the API endpoint is a URL — a publicly accessible URL — then we can configure paths that we add onto that URL. So, /booking for the booking service, /inventory for the inventory service, /payment for the payment service.
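To make the integration side concrete before we carry on, here's a minimal sketch of a Lambda handler sitting behind one of these methods with proxy integration. The /booking path and field names are illustrative only: API Gateway passes the HTTP request details in the event, and the function returns a statusCode, headers, and body.

```python
import json

def lambda_handler(event, context):
    # With Lambda proxy integration, API Gateway passes the raw HTTP request through
    method = event.get("httpMethod")          # e.g. "GET"
    path = event.get("path")                  # e.g. "/booking"
    params = event.get("queryStringParameters") or {}

    # A made-up response body for illustration
    bookings = [{"bookingId": "123", "customer": params.get("customer", "unknown")}]

    # The response must include statusCode, headers, and a string body
    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},  # CORS header for browser clients
        "body": json.dumps(bookings),
    }
```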
Within each of those resources, we then create the methods. So, you can see those here, like GET and PUT and POST and DELETE. These are HTTP methods. So now, when we want to retrieve booking information, we — or rather the application — will issue a GET request to the booking resource. And of course, we can then integrate each of these methods with our backend microservices, databases, and so on. It supports a variety of different endpoints. We've got HTTP endpoints here, we've got Lambda, we've got ECS, we've got DynamoDB, and of course, it does support more than this.

Let's have a look at another example to put this together. Here we have an S3 bucket — let's imagine it's hosting a static website. We then have two different Lambda functions, one for sending support emails, one for logging support tickets, and then we have an API Gateway endpoint in front. So, here we've created the resources. One is /sendemail, one is /submitticket, and in both cases, there is a POST method. We can see now that when the customer submits a support request, the information can get emailed through to the support team, or they can submit a support ticket, and that gets logged into a DynamoDB table by the Lambda function. So, there's an example where we have a single endpoint, and we then have the resources and the methods directing the request through to the different backend microservices.

Build a Serverless Application with a REST API (Part 2)

Welcome back. In this lesson, which is part two of building our serverless application with a REST API, we're going to complete what we started in part one. In part one, we ended here. We used the Lambda function console to submit a test event, which went through our submit order function into the queue, which triggered our process order function, which placed the product into the table. So, it looks like we have an order from our e-commerce store. We also used the CLI. What we're going to do now is add a static website running on S3, and then we're going to add an Amazon API Gateway REST API. This time, from the internet using a browser, we can submit an order, and the static website will forward the request to the REST API, which then proxies the request through to the Lambda function. So again, proxying just means that the API is not performing any kind of transformation of the incoming information into the format that's required for the backend. In this case, the backend — the Lambda function — is able to pick up the information in the format that comes straight through from the static website, from the form, and process that information. So, we just proxy it straight through.

Okay, let's head back over and finish this lab. Back in the instructions file, I'm now down on line 68, the part two. What we're going to do first is create the API. The API is going to be called Product Orders API. So, let's go ahead and create it, and then we'll come back and look at some of the configuration. Let's go to the API Gateway console. From the main screen here, just scroll down until you see REST API — not the private one, this is the public one — and click on Build. Provide a name. By default, the endpoint type is Regional. We don't want Edge Optimized or Private, so let's create the API. Now, it puts us into the API configuration. What we need to do here is create resources and methods. The first thing we're going to do is create a resource, and that resource is going to be orders. So, the path is slash, and then we have /orders.
Click on the CORS option, which is Cross-Origin Resource Sharing. We're coming from another website — a static website running on S3. That's where the request is going to come from to the API, so we need to enable Cross-Origin Resource Sharing to allow that to happen. Then we can simply create the resource. So, where we are now is we've done this step here: Step 2 — create the orders resource with CORS enabled. Then, we need to create a POST method for /orders that's integrated with the submit order function. Once we've done that, we need to enable CORS again, so we have to apply some additional settings, select all the options, and then deploy to a new stage. When we deploy our API, we actually deploy it to a stage, and the stage provides a name that's part of the URL, and we need the API endpoint that includes the stage name. You'll see that when we get there.

So, firstly, let's come back. We have /orders selected, so we're under Resources, /orders. I need to click Create Method on the right-hand side. We're going to choose POST. These are all the various different HTTP methods. We're going to choose POST because we're posting a request — it's essentially uploading some information. Lambda function is the integration. It is going to be a proxy integration, as I've explained before, because we just send the information in its current form straight through to our submit order function. So, make sure you choose the right function here. That's all you need to do. Here, we can simply create the method. Okay, so now the method has been created. All we need to do is head back up to /orders, click on Enable CORS, select all of these different options, and then Save. So again, we're just making sure that we have the CORS settings enabled because the request is coming from another website.

Once we've done that, we can deploy our API. We deploy the API to a stage. We're going to create a new stage, and the stage is going to be called prod, all lowercase, and then click on Deploy. Now, we have an Invoke URL. Note that the Invoke URL has a specific and unique identifier for your API, and on the end it has /prod. That was the stage that we deployed to. All of that's important information. So, we want to copy the Invoke URL. Now, back in the instructions, it's telling us to update the Invoke URL in the index file. And the note here is that it should end up looking something like this, where we have /prod and then /orders on the end as well. So, make sure you do that. Let's go to line 32 of our code here, your API endpoint. Paste it in. We've got the /prod, so then I just need to type /orders on the end. Remember, that's the resource. The resource is part of the URL path, so we have to add it on here for the request to work. Once you've done that, don't forget to save your index.html.

What we want to do now is come back and create our static website. I've already got a couple, but I'll do it from scratch. So, let's call this my-api-static-website- and then just make it unique. I do want to enable public access for this one, so let's acknowledge that, and just create the bucket. We will choose our bucket, my-api-static-website. I'm going to go to Properties, all the way to the bottom — Static website hosting — Enable. The index document is index.html, and then, of course, we want to scroll down and Save changes. I've got the bucket policy code that you need here.
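That policy is a standard public-read bucket policy, something along these lines (shown here applied with boto3 and a placeholder bucket name, in case you prefer to script it):

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-api-static-website-example"   # placeholder; use your own bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",   # note the /* so it applies to objects
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```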
So, what we want to do is just copy this code to allow public read of the objects. Back in our bucket, we can go to Permissions, then Bucket Policy, click Edit, paste the code in, and we need to, of course, update our bucket ARN here, making sure the /* is still in there, and then Save the changes. Now, lastly, let's come back, and we need to upload our file. So, I've uploaded my index.html. There it is. I can close out of here. I've got the file. Let's go to Properties, and I can click on the static website endpoint. So, now I can enter a product name — let's say it's going to be an iPhone — and I want one of them. And then let's submit, and it says, "Order submitted successfully." If I now come back to my DynamoDB table, let's refresh to scan the table, and there is my iPhone. So, that is it working correctly.

Now, I'll show you a couple of places where things can go wrong. Firstly, the index.html — I already mentioned it — but make sure you've updated your endpoint and that you've got /orders in there on the end of your endpoint. So, it should look very much like this one. The other place things can go wrong is back in the API. So, again, I keep mentioning CORS — it can be something that you have to come back in and just do again. If for any reason it's not working, come back and enable CORS again. Once you've done that, go and deploy the API again. When you deploy it for a second time, you can actually choose the stage. So, deploy, and wait a minute or two — sometimes the deployment of an API stage does take a minute or two to actually take effect. So, that can be an issue as well. And then, lastly, it's useful to use Developer Tools in your browser if you want to troubleshoot what's going on. For this, I prefer to use Google Chrome. With Google Chrome, you go to View, Developer Tools, and then go to the Network tab. Then, when we submit orders, we can see a bunch of information happening on the right-hand side there. So, for example, if this time I put in "cheese" and quantity 2 and then Submit, we will see things happening. In this case, obviously, everything is good. If you have any issues, then it should show you some red Xs on the right-hand side here, and you can troubleshoot. It might be a CORS error, for example, or it might be a path not found or something like that. So, the Developer Tools are very useful. So, that's it for this lab. Our configuration is fully working now, with a static website, an API, and our serverless application running in the cloud. It's very easy, obviously, to delete all of these resources once you've finished. Everything is within the free tier, so it shouldn't cost you anything anyway, but it's always a good idea to clean things up once you've finished.

Amazon EventBridge

Hey guys, in this lesson, I'm going to go through Amazon EventBridge. EventBridge is a serverless event bus that allows us to build event-driven applications. EventBridge supports various different event sources. There's lots of integration for AWS services out of the box, so it's very easy to integrate with various different AWS services. But you can also integrate custom applications and SaaS applications as well. So, the event sources generate events — things that are happening in those services. For example, an EC2 instance might be terminated. That would be an event. Then the events get routed through the event bus into the rules engine, where we can configure the rules that we require.
And those rules will direct the traffic to various different targets, like Lambda and Kinesis and SQS and others. So, EventBridge helps us to ingest, filter, transform, and deliver events to build loosely coupled applications. Event sources can be, as I mentioned before, various different sources such as AWS services themselves — there's a wide variety of support there for many AWS services — but we can also use custom applications and SaaS applications as sources, where the events are being generated from. It supports a default event bus, which is there for AWS events, and you can build custom event buses as well for third-party applications. The events get routed and filtered, and we have various different parameters that we can configure in the event rules in order to determine how we want our events to be routed and any customizations that we want to make. For scalability and reliability, it automatically scales as a serverless service based on the number of events, and it can handle millions of events per second. So, it's highly scalable and very reliable as well.

Let's have a look at an example to help you understand how this works. Here we have EC2. We have an event that's been generated. In this case, it's a termination event. Now, this is an example of a very easy rule to create. There is actually a wizard that helps you to do this. You can choose EC2 as the event source, and you can literally specify the EC2 instance terminated event. Then, what we can do is apply those rules and route through to a target. In this case, it could be an SNS notification to let someone know that an instance has been terminated.

Let's have a look at another example with EventBridge and AWS CloudTrail. CloudTrail is an auditing service. It logs all of the API actions in our account, so it keeps a record of who did what, at what time, and on what resource. So, it generates events. Those events can then be put through to the event bus. In this case, the S3 PutBucketPolicy API was used. That means somebody has applied a bucket policy to an S3 bucket. Maybe we want to know about that, so we create a rule in the event bus, and it's going to pick up these PutBucketPolicy API actions, and then it's going to send them through to a target. In this case, Lambda. What we might want to do here is secure the S3 bucket. Maybe we don't want somebody putting on a custom bucket policy. That, of course, is the logic that we would then have in our Lambda function to secure the S3 bucket. Likewise, we could easily just use an SNS notification again and notify someone that this has happened, or keep a log of it in a DynamoDB table. Lots of power to do whatever we need to do. In this case, the bucket has now been secured. So, those are a couple of examples of working with EventBridge.

Create an Amazon EventBridge Rule

Hey guys, in this lesson I'll show you how to create a simple Amazon EventBridge rule that is going to automatically shut down any EC2 instances that aren't the T2 Micro instance type. For this lab, we do need to create a Lambda function. In the Amazon EventBridge folder within the course download, you will find the function code. All we need to do is simply copy this code, and we're going to head to AWS Lambda. I'm going to create a function. It's going to be a Python function, and let's call this stop EC2. It's going to be the Python 3.9 runtime, and then simply create the function. Now, what I want to do is simply delete the default function code, add in my code, and then click on Deploy. And that code is now deployed.
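The function code is roughly along these lines (a sketch; the actual file in the course download may differ): it reads the instance ID from the EventBridge event detail, looks up the instance type, and stops the instance if it isn't a t2.micro.

```python
import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    # EventBridge sends the EC2 state-change event; the instance ID is in the detail
    instance_id = event["detail"]["instance-id"]

    reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
    instance_type = reservations[0]["Instances"][0]["InstanceType"]

    if instance_type != "t2.micro":
        ec2.stop_instances(InstanceIds=[instance_id])
        print(f"Stopped {instance_id} because it is not a t2.micro")
    else:
        print(f"{instance_id} is a t2.micro, leaving it running")
```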
So, what this code will do is it's going to be triggered by Amazon EventBridge, and it's going to check any EC2 instance that has entered the running state—that's what the trigger will cause—and it's going to check if it's a T2 Micro. If it's not a T2 Micro instance type, it's going to stop the instance. Okay, so what we do need to do, of course, is provide permissions. So, on Configuration > Permission, click on the role name here to be taken over to IAM, and we need some permissions for EC2. So, I'm going to attach a policy, type EC2, and then just use the Amazon EC2 Full Access, and then add those permissions. So, that's the function ready. Now we can head over to Amazon EventBridge. So, let's find EventBridge—here it is under Services. In EventBridge, we are going to create a rule. We'll give the rule a name: stop EC2, and leave the rule with an event pattern selected. Click on Next. I'm going to just close down this message here, and we're going to leave the default here: AWS events or EventBridge partner events. Going to go straight past the sample there, come straight the way down to where it says Creation Method. Okay, skip the sample piece. It says Use Pattern Form—we're going to leave that selected. Event source is going to be AWS Services. AWS service is going to be EC2. So, we can type EC2 and then choose the first option, simply EC2. And then we're going to choose the event type. I'm just going to type EC2 instance, so I can just filter a little bit—there's quite a few events there—and we want to use the EC2 Instance State-change Notification. Okay, so now it's starting to create the event pattern in JSON format. Now, we do want to choose a specific state. So, rather than any state, we want to choose a specific state and then select Running. So what this means is every time an instance enters the running state, this rule is going to trigger an action. So, on any instance—by the way, you could choose specific instance IDs if you wanted to—of course, we want this to be a general rule. So now it's filled out the event pattern for me. I can click on Next, choose AWS Service as the target, and then what we're going to do in here is search for Lambda, choose Lambda function, and then choose my stop EC2 function. Notice that you can add additional targets, so you might want a notification as well, for example with SNS. So here, what we'll do is click on Next, click on Next again, and then simply create this rule. So, the rule's created, and it should work. So, let's head over to the EC2 console, and what we're going to do is we're going to launch a couple of instances. The first one will be a T2 Micro, so I'll just call this T2.Micro so that we can recognize it even more easily in the console. And then we're going to leave T2 Micro. I don't need a key pair, I'll just select any existing security group, just so that it doesn't create one for me, and then launch that instance. So, that's my first instance launching. This one should enter the running state, and it should remain in the running state. Let's launch another one, call this the non-T2, and choose any other instance type. Okay, so I'm going to choose a T2 Medium, proceed without a key pair, select a security group—just going to choose one here, I don't really care too much which one it is—and let's try and launch this instance and see what happens. So, once they enter the running state, at that point in time, the function should be triggered. So, the T2 Micro here is certainly running. That's not a problem, it's staying up. 
If I refresh—yep, seems to be fine. So, let's see what happens when our non-T2 Micro, the T2 Medium, enters the running state. There we go, it's entered the running state, and straight away—I just gave it a refresh—almost instantly, it changed straight into the stopping state. If we come back to our Lambda function, go to Monitor, click on View CloudWatch Logs, and now we can see there's a log stream. Let's click on the log stream, expand the events, and we can see in here quite a few messages. And this one here is informative—it says, "Stopped because it is not a T2 Micro." And if you go back and try and start it, it's just going to keep doing the same thing. So, our T2 Micro is fine, but the non-T2 Micro is not. It's been stopped. So, that's it—basically showing you how to use an EventBridge rule. I'm going to terminate my EC2 instance, and of course, you can go and clean up your EventBridge and your Lambda function as well.