Upload a file to S3 using Lambda and Node.js

In web and mobile applications, it's common to provide users with the ability to upload data. Your application may allow users to upload PDFs and documents, or media such as photos or videos. Every modern web server technology has mechanisms to allow this functionality. Typically, in the server-based environment, the process follows this flow:

Application server upload process

  1. The user uploads the file to the application server.
  2. The application server saves the upload to a temporary space for processing.
  3. The application transfers the file to a database, file server, or object store for persistent storage (a sketch of this pattern follows the list).
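
To make this flow concrete, here is a minimal Node.js sketch of the server-proxied pattern, using Express, the multer middleware, and the AWS SDK. The route, field, and bucket names are illustrative assumptions, not part of any sample in this post:

    // Sketch of the traditional server-proxied upload (assumptions: Express,
    // multer, AWS SDK v2; 'my-example-bucket' and '/upload' are illustrative).
    const express = require('express')
    const multer = require('multer')
    const fs = require('fs')
    const AWS = require('aws-sdk')

    const s3 = new AWS.S3()
    const upload = multer({ dest: '/tmp/uploads/' })  // step 2: temporary space
    const app = express()

    // Step 1: the user uploads the file to the application server
    app.post('/upload', upload.single('file'), async (req, res) => {
      // Step 3: the server transfers the file to persistent storage in S3
      await s3.upload({
        Bucket: 'my-example-bucket',
        Key: req.file.originalname,
        Body: fs.createReadStream(req.file.path)
      }).promise()
      fs.unlinkSync(req.file.path)  // clean up the temporary copy
      res.sendStatus(200)
    })

    app.listen(3000)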

While the process is simple, it can have significant side effects on the performance of the web server in busier applications. Media uploads are typically large, so transferring these can represent a large share of network I/O and server CPU time. You must also manage the state of the transfer to ensure that the entire object is successfully uploaded, and manage retries and errors.

This is challenging for applications with spiky traffic patterns. For example, a web application that specializes in sending holiday greetings may experience most of its traffic only around holidays. If thousands of users attempt to upload media around the same time, this requires you to scale out the application server and ensure that there is sufficient network bandwidth available.

By directly uploading these files to Amazon S3, you can avoid proxying these requests through your application server. This can significantly reduce network traffic and server CPU usage, and enable your application server to handle other requests during busy periods. S3 is also highly available and durable, making it an ideal persistent store for user uploads.

In this blog post, I walk through how to implement serverless uploads and show the benefits of this approach. This pattern is used in the Happy Path web application. You can download the code from this blog post in this GitHub repo.

Overview of serverless uploading to S3

When you upload directly to an S3 bucket, you must first request a signed URL from the Amazon S3 service. You can then upload directly using the signed URL. This is a two-step process for your application front end:

Serverless uploading to S3

  1. Call an Amazon API Gateway endpoint, which invokes the getSignedURL Lambda function. This gets a signed URL from the S3 bucket.
  2. Directly upload the file from the application to the S3 bucket, as sketched below.
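
As a rough sketch, these two steps look like this in browser JavaScript. The endpoint URL is the example value from the deployment section, and blobData is assumed to be a JPEG Blob, built as shown later in this post:

    // Step 1: request a signed URL from the API endpoint
    const res = await fetch('https://ab123345677.execute-api.us-west-2.amazonaws.com/uploads')
    const { uploadURL, Key } = await res.json()

    // Step 2: PUT the file directly to S3 using the signed URL
    // (blobData is a Blob of type image/jpeg; see the upload code later in this post)
    await fetch(uploadURL, { method: 'PUT', body: blobData })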

To deploy the S3 uploader example in your AWS account:

  1. Navigate to the S3 uploader repo and install the prerequisites listed in the README.md.
  2. In a terminal window, run:
    git clone https://github.com/aws-samples/amazon-s3-presigned-urls-aws-sam
    cd amazon-s3-presigned-urls-aws-sam
    sam deploy --guided
  3. At the prompts, enter s3uploader for Stack Name and select your preferred Region. Once the deployment is complete, note the APIendpoint output. The API endpoint value is the base URL. The upload URL is the API endpoint with /uploads appended. For example: https://ab123345677.execute-api.us-west-2.amazonaws.com/uploads.

CloudFormation stack outputs

Testing the application

I show two ways to test this application. The first is with Postman, which allows you to directly call the API and upload a binary file with the signed URL. The second is with a basic frontend application that demonstrates how to integrate the API.

To test using Postman:

  1. First, copy the API endpoint from the output of the deployment.
  2. In the Postman interface, paste the API endpoint into the box labeled Enter request URL.
  3. Choose Send.
  4. After the request is complete, the Body section shows a JSON response. The uploadURL attribute contains the signed URL. Copy this attribute to the clipboard.
  5. Select the + icon next to the tabs to create a new request.
  6. Using the dropdown, change the method from GET to PUT. Paste the URL into the Enter request URL box.
  7. Choose the Body tab, then select the binary radio button.
  8. Choose Select file and choose a JPG file to upload.
    Choose Send. You see a 200 OK response after the file is uploaded.
  9. Navigate to the S3 console, and open the S3 bucket created by the deployment. In the bucket, you see the JPG file uploaded via Postman. (A scripted alternative to these steps follows this list.)
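
If you prefer a terminal to Postman, the same two requests can be scripted in Node.js. This is a sketch under assumptions: Node 18+ for the built-in fetch, a photo.jpg in the working directory, and your own APIendpoint value:

    // Sketch: scripted version of the Postman test (assumes Node 18+ and a
    // local photo.jpg; replace API_ENDPOINT with your APIendpoint output).
    const fs = require('fs')
    const API_ENDPOINT = 'https://ab123345677.execute-api.us-west-2.amazonaws.com/uploads'

    async function main() {
      // Step 1: request the signed URL and key
      const { uploadURL, Key } = await (await fetch(API_ENDPOINT)).json()
      console.log('Uploading as key:', Key)

      // Step 2: PUT the binary data directly to S3; the content type must
      // match the image/jpeg value the URL was signed for
      const result = await fetch(uploadURL, {
        method: 'PUT',
        headers: { 'Content-Type': 'image/jpeg' },
        body: fs.readFileSync('./photo.jpg')
      })
      console.log('S3 responded with status:', result.status)  // expect 200
    }

    main().catch(console.error)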

To test with the sample frontend application:

  1. Copy index.html from the example's repo to an S3 bucket.
  2. Update the object's permissions to make it publicly readable.
  3. In a browser, navigate to the public URL of the index.html file.
  4. Select Choose file and then select a JPG file to upload in the file picker. Choose Upload image. When the upload completes, a confirmation message is displayed.
  5. Navigate to the S3 console, and open the S3 bucket created by the deployment. In the bucket, you see the second JPG file you uploaded from the browser.

Understanding the S3 uploading process

When uploading objects to S3 from a web application, you must configure S3 for Cross-Origin Resource Sharing (CORS). CORS rules are defined as an XML document on the bucket. Using AWS SAM, you can configure CORS as part of the resource definition in the AWS SAM template:

    S3UploadBucket:
      Type: AWS::S3::Bucket
      Properties:
        CorsConfiguration:
          CorsRules:
          - AllowedHeaders:
              - "*"
            AllowedMethods:
              - GET
              - PUT
              - HEAD
            AllowedOrigins:
              - "*"

The preceding policy allows all headers and origins – it's recommended that you use a more restrictive policy for production workloads.
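
For example, you could limit the allowed origin to your application's domain and the methods to PUT. Below is a rough sketch of applying such a rule with the SDK's putBucketCors call; the same restriction can equally be expressed in the SAM template. The bucket name and origin here are illustrative assumptions:

    // Sketch: applying a tighter CORS rule with the AWS SDK instead of the
    // wildcard rule above. Bucket name and origin are illustrative assumptions.
    const AWS = require('aws-sdk')
    const s3 = new AWS.S3()

    s3.putBucketCors({
      Bucket: 'my-upload-bucket',
      CORSConfiguration: {
        CORSRules: [{
          AllowedHeaders: ['Content-Type'],
          AllowedMethods: ['PUT'],
          AllowedOrigins: ['https://www.example.com'],  // your frontend's origin
          MaxAgeSeconds: 3600
        }]
      }
    }).promise().then(() => console.log('CORS rule applied'))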

In the first step of the process, the API endpoint invokes the Lambda function to make the signed URL request. The Lambda function contains the following code:

    const AWS = require('aws-sdk')
    AWS.config.update({ region: process.env.AWS_REGION })
    const s3 = new AWS.S3()
    const URL_EXPIRATION_SECONDS = 300

    // Main Lambda entry point
    exports.handler = async (event) => {
      return await getUploadURL(event)
    }

    const getUploadURL = async function(event) {
      const randomID = parseInt(Math.random() * 10000000)
      const Key = `${randomID}.jpg`

      // Get signed URL from S3
      const s3Params = {
        Bucket: process.env.UploadBucket,
        Key,
        Expires: URL_EXPIRATION_SECONDS,
        ContentType: 'image/jpeg'
      }
      const uploadURL = await s3.getSignedUrlPromise('putObject', s3Params)
      return JSON.stringify({
        uploadURL: uploadURL,
        Key
      })
    }

This function determines the name, or key, of the uploaded object, using a random number. The s3Params object defines the accepted content type and also specifies the expiration of the key. In this case, the key is valid for 300 seconds. The signed URL is returned as part of a JSON object including the key for the calling application.
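
Note that Math.random offers no uniqueness guarantee, so two uploads could collide on the same key. If that matters for your use case, one variation (an assumption, not the sample's code) is to derive the key from Node's crypto module:

    // Variation: a collision-resistant object key using Node's crypto module,
    // instead of the sample's Math.random-based ID (requires Node 14.17+).
    const crypto = require('crypto')
    const Key = `${crypto.randomUUID()}.jpg`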

The signed URL contains a security token with permissions to upload this single object to this bucket. To successfully generate this token, the code calling getSignedUrlPromise must have s3:putObject permissions for the bucket. This Lambda function is granted the S3WritePolicy policy to the bucket by the AWS SAM template.

The uploaded object must match the same file name and content type as defined in the parameters. An object matching the parameters may be uploaded multiple times, providing that the upload process starts before the token expires. The default expiration is 15 minutes, but you may want to specify shorter expirations depending upon your use case.

Once the frontend application receives the API endpoint response, it has the signed URL. The frontend application then uses the PUT method to upload binary data directly to the signed URL:

    let blobData = new Blob([new Uint8Array(array)], {type: 'image/jpeg'})
    const result = await fetch(signedURL, {
      method: 'PUT',
      body: blobData
    })
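
The array variable here holds the file's bytes. In the browser, one way to produce it from a file picker is shown below; the input element's id is an illustrative assumption:

    // Sketch: reading the selected file's bytes in the browser (run inside an
    // async handler). The 'filePicker' element id is an illustrative assumption.
    const file = document.getElementById('filePicker').files[0]
    const array = await file.arrayBuffer()  // the bytes wrapped by Uint8Array above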

At this point, the caller application is interacting directly with the S3 service and not with your API endpoint or Lambda function. S3 returns a 200 HTTP status code once the upload is complete.

For applications expecting a large number of user uploads, this provides a simple way to offload a large amount of network traffic to S3, away from your backend infrastructure.

Adding authentication to the upload process

The current API endpoint is open, available to any service on the internet. This means that anyone can upload a JPG file once they receive the signed URL. In most production systems, developers want to use authentication to control who has access to the API, and who can upload files to your S3 buckets.

You can restrict access to this API by using an authorizer. This sample uses HTTP APIs, which support JWT authorizers. This allows you to control access to the API via an identity provider, which could be a service such as Amazon Cognito or Auth0.

The Happy Path application only allows signed-in users to upload files, using Auth0 as the identity provider. The sample repo contains a second AWS SAM template, templateWithAuth.yaml, which shows how you can add an authorizer to the API:

    MyApi:
      Type: AWS::Serverless::HttpApi
      Properties:
        Auth:
          Authorizers:
            MyAuthorizer:
              JwtConfiguration:
                issuer: !Ref Auth0issuer
                audience:
                  - https://auth0-jwt-authorizer
              IdentitySource: "$request.header.Authorization"
          DefaultAuthorizer: MyAuthorizer

Both the issuer and audience attributes are provided by the Auth0 configuration. By specifying this authorizer as the default authorizer, it is used automatically for all routes using this API. Read part one of the Ask Around Me series to learn more about configuring Auth0 and authorizers with HTTP APIs.

After authentication is added, the calling web application provides a JWT token in the headers of the request:

    const response = await axios.get(API_ENDPOINT_URL, {
      headers: {
        Authorization: `Bearer ${token}`
      }
    })

API Gateway evaluates this token before invoking the getUploadURL Lambda function. This ensures that only authenticated users can upload objects to the S3 bucket.
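
With the JWT authorizer in place, the validated claims also arrive in the Lambda function's event, so you could, for example, scope object keys per user. This is a sketch of that idea, not part of the sample code:

    // Sketch: using validated JWT claims inside the Lambda function to build
    // a per-user key prefix (an assumption; the sample keys objects randomly).
    const getUploadURL = async function(event) {
      const userId = event.requestContext.authorizer.jwt.claims.sub
      const Key = `${userId}/${parseInt(Math.random() * 10000000)}.jpg`
      // ... build s3Params and call s3.getSignedUrlPromise as before
    }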

Modifying ACLs and creating publicly readable objects

In the current implementation, the uploaded object is not publicly accessible. To make an uploaded object publicly readable, you must set its access control list (ACL). There are preconfigured ACLs available in S3, including a public-read option, which makes an object readable by anyone on the internet. Set the appropriate ACL in the params object before calling s3.getSignedUrl:

    const s3Params = {
      Bucket: process.env.UploadBucket,
      Key,
      Expires: URL_EXPIRATION_SECONDS,
      ContentType: 'image/jpeg',
      ACL: 'public-read'
    }

Since the Lambda function must have the appropriate bucket permissions to sign the request, you must also ensure that the function has PutObjectAcl permission. In AWS SAM, you can add the permission to the Lambda function with this policy:

    - Statement:
        - Effect: Allow
          Resource: !Sub 'arn:aws:s3:::${S3UploadBucket}/'
          Action:
            - s3:putObjectAcl

Conclusion

Many web and mobile applications allow users to upload data, including large media files like images and videos. In a traditional server-based application, this can create heavy load on the application server, and also use a considerable amount of network bandwidth.

By enabling users to upload files directly to Amazon S3, this serverless pattern moves the network load away from your service. This can make your application much more scalable, and capable of handling spiky traffic.

This blog post walks through a sample application repo and explains the process for retrieving a signed URL from S3. It explains how to test the URLs in both Postman and in a web application. Finally, I explain how to add authentication and make uploaded objects publicly accessible.

To learn more, see this video walkthrough that shows how to upload directly to S3 from a frontend web application. For more serverless learning resources, visit https://serverlessland.com.


Source: https://aws.amazon.com/blogs/compute/uploading-to-amazon-s3-directly-from-a-web-or-mobile-application/
