New – Use Amazon S3 Event Notifications with Amazon EventBridge

We launched Amazon EventBridge in mid-2019 to make it easy for you to build powerful, event-driven applications at any scale. Since that launch we have added several key features, including a schema registry, the ability to archive and replay events, support for cross-Region event bus targets, and API destinations so that you can send events to any HTTP API. With support for a very long list of targets and the ability to do pattern matching, filtering, and routing of events, EventBridge is a powerful and flexible architectural component.

S3 Event Notifications
Today we are making it even easier for you to use EventBridge to build applications that react quickly and efficiently to changes in your S3 objects. This is a new, directly wired model that is faster, more reliable, and more developer-friendly than ever. You no longer need to make additional copies of your objects or write specialized, single-purpose code to process events.

At this point you might be thinking that you already have the ability to respond to changes in your S3 objects, and wondering what's going on here. Back in 2014 we launched S3 Event Notifications to SNS topics, SQS queues, and Lambda functions. This was (and still is) a very powerful feature, but using it at enterprise scale can require coordination between otherwise-independent teams and applications that share an interest in the same objects and events. EventBridge can also already extract S3 API calls from CloudTrail logs and use them for pattern matching and filtering. Again, very powerful and great for many kinds of applications (with a focus on auditing and logging), but we knew that we could do even better.

Net-net, you can now configure S3 Event Notifications to deliver directly to EventBridge! This new model gives you several benefits, including:

Advanced filtering – You can filter on many additional metadata fields, including object size, key name, and time range. This is more efficient than using Lambda functions that need to make calls back to S3 to get additional metadata in order to decide on the proper course of action. Only the events that match a rule are published, so you save money by paying only for the events that are of interest to you.

Multiple destinations – You can route the same event notification to your choice of 18 AWS services, including Step Functions, Kinesis Firehose, Kinesis Data Streams, and HTTP targets via API destinations. This is much easier than setting up your own fan-out mechanism, and it will also help you to handle the enterprise-scale situations where independent teams want to do their own event handling.

Fast, reliable invocation – Patterns are matched (and targets are invoked) quickly and directly. Because S3 provides at-least-once delivery of events to EventBridge, your applications will be more reliable.

You can also take advantage of other EventBridge features, including the ability to archive and then replay events. This gives you the ability to reprocess events in case of an error or if you add a new target to an event bus.

Get started
I can get started in minutes. I start by enabling EventBridge notifications on one of my S3 buckets (jbarr-public in this case). I open the S3 Console, find my bucket, open the Properties tab, scroll down to Event notifications, and click Edit:

I choose On, click Save Changes, and I’m ready to roll:
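The same toggle can be flipped programmatically. Here's a hedged sketch using boto3 (assumes the library is installed and AWS credentials are configured; `enable_eventbridge` is an illustrative helper name, and the call is only defined, not executed below):

```python
def eventbridge_notification_config():
    # Turning the integration on just means sending an empty
    # EventBridgeConfiguration block in the bucket's notification config.
    return {"EventBridgeConfiguration": {}}

def enable_eventbridge(bucket_name):
    # Requires boto3 and AWS credentials; shown for illustration only.
    import boto3
    s3 = boto3.client("s3")
    s3.put_bucket_notification_configuration(
        Bucket=bucket_name,
        NotificationConfiguration=eventbridge_notification_config(),
    )

print(eventbridge_notification_config())
```

Note that, unlike the older notification types, there's nothing else to configure on the S3 side; all filtering and routing happens in EventBridge rules.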

Now I use the EventBridge console to create a rule. I start as usual by entering a name and a description:

Then I define a pattern that matches the bucket and the events of interest:

A pattern can match one or more buckets and one or more events; the following events are supported:

  • Object Created
  • Object Deleted
  • Object Restore Initiated
  • Object Restore Completed
  • Object Restore Expired
  • Object Tags Added
  • Object Tags Deleted
  • Object ACL Updated
  • Object Storage Class Changed
  • Object Access Tier Changed

Then I select the default event bus, and set the target to an SNS topic (BucketAction) that publishes the messages to my email address:
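The console steps above map to two EventBridge API calls. This is a sketch under assumptions: the rule name and SNS topic ARN below are placeholders, and the boto3 calls are defined but not executed (they would need AWS credentials):

```python
import json

RULE_NAME = "jbarr-s3-object-events"  # hypothetical rule name
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:BucketAction"  # placeholder ARN

# The same pattern defined in the console, expressed as JSON.
EVENT_PATTERN = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {"bucket": {"name": ["jbarr-public"]}},
}

def create_rule_and_target():
    # Requires boto3 and AWS credentials; shown for illustration only.
    import boto3
    events = boto3.client("events")
    events.put_rule(
        Name=RULE_NAME,
        EventBusName="default",
        EventPattern=json.dumps(EVENT_PATTERN),
    )
    events.put_targets(
        Rule=RULE_NAME,
        Targets=[{"Id": "sns-target", "Arn": TOPIC_ARN}],
    )

print(json.dumps(EVENT_PATTERN, indent=2))
```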

I click Create, and my rule is ready. To test it out, I upload some files to my bucket and wait for the messages:

The message contains all of the interesting and relevant information about the event, and (after a bit of unquoting and formatting) looks like this:

{
    "version": "0",
    "id": "2d4eba74-fd51-3966-4bfa-b013c9da8ff1",
    "detail-type": "Object Created",
    "source": "aws.s3",
    "account": "348414629041",
    "time": "2021-11-13T00:00:59Z",
    "region": "us-east-1",
    "resources": [
        "arn:aws:s3:::jbarr-public"
    ],
    "detail": {
        "version": "0",
        "bucket": {
            "name": "jbarr-public"
        },
        "object": {
            "key": "eb_create_rule_mid_1.png",
            "size": 99797,
            "etag": "7a72374e1238761aca7778318b363232",
            "version-id": "a7diKodKIlW3mHIvhGvVphz5N_ZcL3RG",
            "sequencer": "00618F003B7286F496"
        },
        "request-id": "4Z2S00BKW2P1AQK8",
        "requester": "348414629041",
        "source-ip-address": "72.21.198.68",
        "reason": "PutObject"
    }
}

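A consumer of these events (a Lambda function, for example) typically needs just a few of these fields. This is a minimal sketch, with field names taken from the sample payload above and `summarize_s3_event` as an illustrative helper name:

```python
def summarize_s3_event(event):
    # Pull out the fields most applications act on: what happened, where,
    # and to which object.
    detail = event["detail"]
    return {
        "action": event["detail-type"],
        "bucket": detail["bucket"]["name"],
        "key": detail["object"]["key"],
        "size": detail["object"]["size"],
    }

sample = {
    "detail-type": "Object Created",
    "detail": {
        "bucket": {"name": "jbarr-public"},
        "object": {"key": "eb_create_rule_mid_1.png", "size": 99797},
    },
}
print(summarize_s3_event(sample))
```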
My first event pattern was very simple and only matched the bucket name. I can use content-based filtering to write more complex and interesting patterns. For example, I could use numeric matching to set up a pattern that matches events for objects smaller than 1 megabyte:

{
    "source": [
        "aws.s3"
    ],
    "detail-type": [
        "Object Created",
        "Object Deleted",
        "Object Tags Added",
        "Object Tags Deleted"
    ],

    "detail": {
        "bucket": {
            "name": [
                "jbarr-public"
            ]
        },
        "object" : {
            "size": [{"numeric" :["<=", 1048576 ] }]
        }
    }
}
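To see what a numeric condition like `{"numeric": ["<=", 1048576]}` means in practice, here is a simplified local evaluator. This is a sketch of the matching semantics, not EventBridge's actual matcher; `numeric_match` is an illustrative helper name:

```python
import operator

# Comparison operators supported in EventBridge numeric conditions.
OPS = {"<": operator.lt, "<=": operator.le,
       ">": operator.gt, ">=": operator.ge, "=": operator.eq}

def numeric_match(condition, value):
    # Conditions are flat lists of operator/operand pairs, e.g.
    # ["<=", 1048576] or [">", 0, "<=", 1048576] for a range.
    parts = condition["numeric"]
    return all(OPS[op](value, bound)
               for op, bound in zip(parts[::2], parts[1::2]))

print(numeric_match({"numeric": ["<=", 1048576]}, 99797))      # True
print(numeric_match({"numeric": ["<=", 1048576]}, 5_000_000))  # False
```

With the pattern above, the 99,797-byte object from the earlier sample event would match, while objects over 1 MiB would be filtered out before they reach any target.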

Or I could use prefix matching to set up a pattern looking for objects uploaded to a “subfolder” (which does not really exist) in a bucket:

"object": {
    "key": [{ "prefix": "uploads/" }]
}
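The prefix condition behaves like a simple starts-with test on the object key. A local sketch of that semantics (again, not the service's matcher; `prefix_match` is an illustrative name):

```python
def prefix_match(condition, value):
    # An EventBridge {"prefix": ...} condition matches when the string
    # starts with the given value.
    return value.startswith(condition["prefix"])

print(prefix_match({"prefix": "uploads/"}, "uploads/photo.png"))  # True
print(prefix_match({"prefix": "uploads/"}, "archive/photo.png"))  # False
```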

You can use all of this in conjunction with all of the existing EventBridge features, including the ability to archive and replay events. You can also access CloudWatch metrics for each of your rules:

Available now
This feature is available now and you can start using it today in all commercial AWS Regions. You pay $1.00 for every 1 million events that match a rule; check out the EventBridge Pricing page for more information.

Jeff;

