This project presents a secure, serverless application for personal file management. Users can sign up, log in, and manage their documents in the cloud. The system is designed with a tiered access model where authenticated users gain private storage while guests can browse a limited interface. The entire architecture is built on a suite of integrated AWS services, ensuring high availability and scalability without the overhead of traditional servers.
The application's core functionality relies on several decoupled AWS services that communicate through events and permissions. The flow is initiated from a web-based frontend and extends through several backend pipelines for data persistence and real-time notifications.
A conceptual overview of the system's components is illustrated below.
```
+-------------------+                           +-------------------------+
|  User's Browser   |                           |     Amazon Cognito      |
+---------+---------+                           | (User & Identity Pools) |
          |                                     +------------+------------+
          |              Identity/Auth                       |
          |                                                  |
          v                                                  v
+-------------------+                           +-------------------------+
| S3 Static Website |                           |        IAM Roles        |
|  (Frontend Code)  |                           |   (For Auth & Unauth)   |
|   + CloudFront    |                           +------------+------------+
+---------+---------+                                        |
          |                                                  |
          | File Upload/Delete                               | S3 Permissions (based on User ID)
          v                                                  v
+-------------------+                           +-------------------------+
|     S3 Bucket     |  (1) Object Create/Remove |        S3 Bucket        |
|    (Frontend)     | ------------------------> |       (User Files)      |
+-------------------+                           +------------+------------+
                                                             |
                                                             | (2) S3 Event Notification
                                                             v
                                                +-------------------------+
                                                |        SQS Queue        |
                                                |    (S3 Events) + DLQ    |
                                                +------------+------------+
                                                             |
                                                             | (3) SQS Trigger
                                                             v
                                                +-------------------------+
                                                |         Lambda          |
                                                |   (Process S3 Event)    |
                                                +------------+------------+
                                                             |
                                                             | (4) DynamoDB Write
                                                             v
                                                +-------------------------+
                                                |        DynamoDB         |
                                                |     (File Metadata)     |
                                                +------------+------------+
                                                             |
                                                             | (5) DynamoDB Stream
                                                             v
                                                +-------------------------+
                                                |         Lambda          |
                                                |   (Stream Processor)    |
                                                +------------+------------+
                                                             |
                                                             | (6) SNS Publish (Filtered)
                                                             v
                                                +-------------------------+
                                                |           SNS           |
                                                |  (Notification Topic)   |
                                                +------------+------------+
                                                             |
                                                             | (7) Filtered Notification
                                                             v
                                                +-------------------------+
                                                |      User's Email       |
                                                +-------------------------+
```
Workflow Breakdown:
- A user uploads a file, which triggers an S3 `ObjectCreated` event.
- The S3 bucket's event notification configuration sends a message to an SQS queue.
- An SQS-triggered Lambda function processes the message.
- This Lambda extracts metadata from the S3 event and creates a new record in a DynamoDB table.
- A DynamoDB Stream captures the change and triggers a second Lambda function.
- This Lambda constructs a user-friendly message and publishes it to an SNS topic, applying a filter policy to ensure only the relevant user receives the notification.
- The user, who has subscribed to the topic with the correct filter, receives an email notification.
- Private User Storage: Files are stored in a private S3 bucket, with access restricted to the user's specific `cognito-identity-id` prefix via fine-grained IAM policies.
- Guest Access: The frontend allows unauthenticated users to view the application UI but restricts them from performing file operations.
- Comprehensive File Management: The application supports file upload, secure pre-signed URL downloads (see the sketch after this list), and "soft deletion", where files are marked as inactive in the database while remaining in S3.
- Robust Event-Driven Backend: S3 events are routed through a dedicated SQS queue to a Lambda processor. This design pattern ensures durability and handles high-volume uploads without data loss. A Dead-Letter Queue (DLQ) is also configured to capture failed messages.
- Real-time User Notifications: Changes to a user's file status (uploads, deletions) trigger a notification pipeline. The system uses a DynamoDB stream to capture database changes and sends targeted email notifications via an SNS topic with a filter policy.
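
The pre-signed download URLs mentioned above can be produced with a small helper. This is a minimal sketch using boto3 rather than the browser-side AWS SDK that `script.js` actually uses; the bucket name and the `<cognito-identity-id>/<filename>` key layout are assumptions consistent with the storage model described here.

```python
import boto3

s3 = boto3.client("s3")

# Assumed bucket name; the key layout mirrors the per-user prefix model above.
CONTENT_BUCKET = "my-content-bucket-456"

def presigned_download_url(identity_id: str, filename: str, expires_in: int = 300) -> str:
    """Return a short-lived HTTPS URL for downloading one of the user's files."""
    key = f"{identity_id}/{filename}"
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": CONTENT_BUCKET, "Key": key},
        ExpiresIn=expires_in,
    )
```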
Before setting up the project, ensure you have the following:
- An active AWS account.
- The AWS CLI installed and configured with administrative permissions.
- Basic familiarity with AWS services (S3, Lambda, IAM, Cognito, DynamoDB, SQS, SNS).
Follow these steps to deploy and configure the entire system. Note: S3 bucket names must be globally unique.
- Configure Amazon Cognito:
  - Create a User Pool to manage users. Note the `User Pool ID` and `App client ID`.
  - Create an Identity Pool to grant AWS credentials, and link it to the User Pool. Note the `Identity Pool ID`.
  - Modify the IAM role created for authenticated identities. The policy for this role must grant `s3:GetObject`, `s3:PutObject`, and `s3:DeleteObject` permissions on `arn:aws:s3:::YOUR_CONTENT_BUCKET_NAME/${cognito-identity.amazonaws.com:sub}/*`. The `${cognito-identity.amazonaws.com:sub}` policy variable enforces user-specific access.
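
  As an illustration, the same policy can be attached with boto3. The role and policy names here are placeholders; only the actions and the `${cognito-identity.amazonaws.com:sub}` resource pattern come from the step above.

  ```python
  import json
  import boto3

  iam = boto3.client("iam")

  # Placeholder names; use the authenticated role created by your Identity Pool.
  AUTH_ROLE_NAME = "Cognito_MyAppAuth_Role"
  CONTENT_BUCKET = "YOUR_CONTENT_BUCKET_NAME"

  policy = {
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
          # The policy variable resolves to the caller's Cognito identity ID,
          # limiting each user to their own prefix in the content bucket.
          "Resource": f"arn:aws:s3:::{CONTENT_BUCKET}/${{cognito-identity.amazonaws.com:sub}}/*",
      }],
  }

  iam.put_role_policy(
      RoleName=AUTH_ROLE_NAME,
      PolicyName="UserScopedS3Access",
      PolicyDocument=json.dumps(policy),
  )
  ```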
- Create S3 Buckets:
  - Create a bucket for the frontend website (e.g., `my-frontend-bucket-123`). Enable static website hosting on it and add a public read policy.
  - Create a separate, private bucket for user file content (e.g., `my-content-bucket-456`). Ensure "Block all public access" is enabled.
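
  A rough boto3 equivalent of this step is sketched below. The bucket names are the examples above and must be unique; outside `us-east-1`, `create_bucket` also needs a `CreateBucketConfiguration` with your region.

  ```python
  import boto3

  s3 = boto3.client("s3")

  FRONTEND_BUCKET = "my-frontend-bucket-123"   # example name from this step
  CONTENT_BUCKET = "my-content-bucket-456"     # example name from this step

  # Frontend bucket: serves index.html, script.js and style.css.
  s3.create_bucket(Bucket=FRONTEND_BUCKET)
  s3.put_bucket_website(
      Bucket=FRONTEND_BUCKET,
      WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
  )

  # Content bucket: private, with all public access blocked.
  s3.create_bucket(Bucket=CONTENT_BUCKET)
  s3.put_public_access_block(
      Bucket=CONTENT_BUCKET,
      PublicAccessBlockConfiguration={
          "BlockPublicAcls": True,
          "IgnorePublicAcls": True,
          "BlockPublicPolicy": True,
          "RestrictPublicBuckets": True,
      },
  )
  ```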
- Update `script.js`:
  - Modify the constant variables at the top of the file with the IDs and names from your AWS resources (`USER_POOL_ID`, `CLIENT_ID`, `IDENTITY_POOL_ID`, `S3_BUCKET_NAME`, etc.).
- Upload Files:
  - Upload your `index.html`, `script.js`, and `style.css` to the public frontend S3 bucket.
- Enable HTTPS with CloudFront:
  - To prevent "insecure download" browser warnings, create an Amazon CloudFront distribution.
  - Set the origin to your S3 static website hosting endpoint.
  - Configure the viewer protocol policy to "Redirect HTTP to HTTPS".
  - Attach a public SSL/TLS certificate from AWS Certificate Manager (ACM) to the distribution.
- Set up DynamoDB:
  - Create a table named `filesystem-DB`.
  - Set the Partition Key to `user-id` (String) and the Sort Key to `filename` (String).
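
  The equivalent boto3 call, as a sketch; on-demand billing is an assumption, not a requirement of the project.

  ```python
  import boto3

  dynamodb = boto3.client("dynamodb")

  dynamodb.create_table(
      TableName="filesystem-DB",
      AttributeDefinitions=[
          {"AttributeName": "user-id", "AttributeType": "S"},
          {"AttributeName": "filename", "AttributeType": "S"},
      ],
      KeySchema=[
          {"AttributeName": "user-id", "KeyType": "HASH"},    # partition key
          {"AttributeName": "filename", "KeyType": "RANGE"},  # sort key
      ],
      BillingMode="PAY_PER_REQUEST",  # assumption; provisioned capacity also works
  )
  ```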
- Create SQS Queue:
  - Create a Standard SQS queue named `S3FileEventQueue`.
  - Enable a Dead-Letter Queue (DLQ) for it, named `S3FileEventQueueDLQ`, to handle failed messages.
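
  A sketch of creating both queues with boto3; the `maxReceiveCount` of 5 is an assumed value.

  ```python
  import json
  import boto3

  sqs = boto3.client("sqs")

  # Create the DLQ first so its ARN can be referenced in the redrive policy.
  dlq_url = sqs.create_queue(QueueName="S3FileEventQueueDLQ")["QueueUrl"]
  dlq_arn = sqs.get_queue_attributes(
      QueueUrl=dlq_url, AttributeNames=["QueueArn"]
  )["Attributes"]["QueueArn"]

  # Main queue: messages that fail processing 5 times move to the DLQ.
  sqs.create_queue(
      QueueName="S3FileEventQueue",
      Attributes={
          "RedrivePolicy": json.dumps(
              {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
          )
      },
  )
  ```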
- Create S3 Event Processing Lambda:
  - Create a new Lambda function (e.g., `ProcessS3EventsToDynamoDB`) using Python.
  - Grant its IAM role permissions to read from SQS (`sqs:ReceiveMessage`, `sqs:DeleteMessage`, `sqs:GetQueueAttributes`) and write to DynamoDB (`dynamodb:PutItem`).
  - Configure the Lambda to be triggered by the `S3FileEventQueue` SQS queue.
  - The Lambda's code should parse the S3 event notification from the SQS message and write the user's file metadata (ID, filename, size, creation date) to the DynamoDB table.
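
  A minimal handler sketch for this function. The item attribute names (`created-at`, `deleted`) and the `<cognito-identity-id>/<filename>` key layout are assumptions consistent with the rest of this guide.

  ```python
  import json
  import urllib.parse
  import boto3

  dynamodb = boto3.resource("dynamodb")
  table = dynamodb.Table("filesystem-DB")

  def lambda_handler(event, context):
      """Triggered by S3FileEventQueue; each SQS record wraps one S3 notification."""
      for sqs_record in event["Records"]:
          s3_event = json.loads(sqs_record["body"])
          # Test events from S3 have no "Records" key, so default to an empty list.
          for s3_record in s3_event.get("Records", []):
              key = urllib.parse.unquote_plus(s3_record["s3"]["object"]["key"])
              # Objects are stored under "<cognito-identity-id>/<filename>".
              user_id, _, filename = key.partition("/")
              table.put_item(
                  Item={
                      "user-id": user_id,
                      "filename": filename,
                      "size": s3_record["s3"]["object"].get("size", 0),
                      "created-at": s3_record["eventTime"],
                      "deleted": False,
                  }
              )
      return {"statusCode": 200}
  ```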
- Configure S3 Event Notifications:
  - In the S3 console, go to your content bucket.
  - Add an event notification rule.
  - Set the event types to "All object create events".
  - Set the destination to your `S3FileEventQueue` SQS queue.
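
  The same configuration can also be applied with boto3, as a sketch. The queue ARN is a placeholder; note that the queue's access policy must allow `s3.amazonaws.com` to send messages from this bucket, or the call is rejected.

  ```python
  import boto3

  s3 = boto3.client("s3")

  # Placeholder ARN for the S3FileEventQueue created earlier.
  QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:S3FileEventQueue"

  s3.put_bucket_notification_configuration(
      Bucket="my-content-bucket-456",
      NotificationConfiguration={
          "QueueConfigurations": [
              {
                  "QueueArn": QUEUE_ARN,
                  "Events": ["s3:ObjectCreated:*"],  # "All object create events"
              }
          ]
      },
  )
  ```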
- Enable DynamoDB Stream:
  - In the DynamoDB console, select your `filesystem-DB` table.
  - Go to the "Exports and streams" tab and enable the stream with the "New and old images" view.
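
  A sketch of applying the same setting with boto3:

  ```python
  import boto3

  dynamodb = boto3.client("dynamodb")

  dynamodb.update_table(
      TableName="filesystem-DB",
      StreamSpecification={
          "StreamEnabled": True,
          "StreamViewType": "NEW_AND_OLD_IMAGES",  # the "New and old images" view
      },
  )
  ```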
- Create SNS Topic:
  - Create a Standard SNS topic named `FileChangeNotificationTopic`. Note its ARN.
- Create DynamoDB Stream Processor Lambda:
  - Create a new Lambda function (e.g., `ProcessDynamoDBStreamToSNS`) using Python.
  - Grant its IAM role permissions to read from the DynamoDB stream (`dynamodb:GetRecords`, `dynamodb:GetShardIterator`, etc.) and to publish to SNS (`sns:Publish`).
  - Configure the Lambda to be triggered by the `filesystem-DB` DynamoDB stream.
  - The Lambda code should check the event type (`INSERT`, `MODIFY`, `REMOVE`).
  - For `INSERT` events, it should construct a message about a new file upload. For `MODIFY` events, it should check whether the `deleted` attribute has changed and, if so, generate a "file deleted" message.
  - The publish call to SNS must include message attributes that subscription filter policies can match against. The `user-id` from the DynamoDB image should be included in the message attributes (as `user_id`), like so:

    ```python
    sns.publish(
        TopicArn=topic_arn,
        Message=notification_message_text,
        Subject=notification_subject,
        MessageAttributes={
            "user_id": {"DataType": "String", "StringValue": user_id}
        },
    )
    ```
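
  A minimal sketch of the full handler follows. The topic ARN is read from an assumed environment variable, and the `deleted` flag is assumed to be stored as a DynamoDB boolean, matching the processing Lambda sketched earlier.

  ```python
  import os
  import boto3

  sns = boto3.client("sns")
  # Assumed environment variable holding the FileChangeNotificationTopic ARN.
  TOPIC_ARN = os.environ["FILE_CHANGE_TOPIC_ARN"]

  def lambda_handler(event, context):
      """Triggered by the filesystem-DB stream; publishes per-user notifications."""
      for record in event["Records"]:
          event_name = record["eventName"]  # INSERT, MODIFY or REMOVE
          new_image = record["dynamodb"].get("NewImage", {})
          old_image = record["dynamodb"].get("OldImage", {})

          user_id = (new_image.get("user-id") or old_image.get("user-id") or {}).get("S")
          filename = (new_image.get("filename") or old_image.get("filename") or {}).get("S")
          if not user_id or not filename:
              continue

          was_deleted = old_image.get("deleted", {}).get("BOOL", False)
          is_deleted = new_image.get("deleted", {}).get("BOOL", False)

          if event_name == "INSERT":
              message = f"New file uploaded: {filename}"
          elif event_name == "MODIFY" and is_deleted and not was_deleted:
              message = f"File deleted: {filename}"
          else:
              continue  # nothing notification-worthy changed

          sns.publish(
              TopicArn=TOPIC_ARN,
              Subject="File change notification",
              Message=message,
              # Matched against each subscription's FilterPolicy.
              MessageAttributes={
                  "user_id": {"DataType": "String", "StringValue": user_id}
              },
          )
  ```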
- Frontend SNS Subscription:
  - The frontend `script.js` needs to be updated to subscribe the user to the SNS topic.
  - When a user subscribes, the `subscribe` call to SNS must include a `FilterPolicy` that uses the user's `cognito-identity-id` to filter notifications. This ensures the user only receives messages intended for them.
  - The subscription's filter policy should look like this:

    ```json
    {
      "user_id": ["YOUR_COGNITO_IDENTITY_ID_FOR_THIS_USER"]
    }
    ```
- Access the application via your CloudFront HTTPS URL.
- Register a new user account.
- Log in and start uploading files.
- Observe the file list and try downloading or deleting files.
- Click the "Subscribe to Events" button and check your email for a confirmation request. After confirming, you will receive notifications for all your future file changes.
- CORS Errors: Ensure the CORS configuration on your content S3 bucket allows requests from your frontend's CloudFront origin (a sketch of applying one is at the end of this section).
- 403 Forbidden on Upload/Delete: Double-check the IAM role for your authenticated users. The S3 policy's `Resource` ARN must be correctly formatted to include the `${cognito-identity.amazonaws.com:sub}` variable.
- No SNS Notifications:
  - Verify your Lambda's CloudWatch logs to see if the function executed successfully and made the `sns.publish` call.
  - Check if the SNS subscription for your email is confirmed.
  - Confirm that the SNS publish call includes `MessageAttributes` with `user_id` as a string.
  - Check the subscription filter policy to ensure it matches the `user_id` from the published message attributes.
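
As an example of the CORS fix mentioned above, a configuration can be applied to the content bucket like this; the CloudFront domain is a placeholder and should be replaced with your own distribution's URL.

```python
import boto3

s3 = boto3.client("s3")

# Placeholder origin; use your CloudFront distribution's HTTPS domain.
ALLOWED_ORIGIN = "https://YOUR_DISTRIBUTION_ID.cloudfront.net"

s3.put_bucket_cors(
    Bucket="my-content-bucket-456",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": [ALLOWED_ORIGIN],
                "AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
                "AllowedHeaders": ["*"],
                "ExposeHeaders": ["ETag"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)
```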