Sharing my learnings....

Category: Backend

Deploying a Node.js Application with AWS App Runner

AWS App Runner is a fully managed service provided by Amazon Web Services (AWS). It makes it easy for developers to quickly deploy containerized web applications and APIs, at scale and with no prior infrastructure experience required. It automates management tasks such as provisioning, scaling, and security, enabling developers to focus on writing code rather than managing infrastructure.

How It’s Useful:

  1. Simplicity and Speed: It simplifies the deployment process by abstracting away the underlying infrastructure. Developers can deploy their applications quickly by providing the source code or a container image.
  2. Scalability: App Runner automatically scales the application up or down based on traffic, ensuring that the application can handle peaks in demand without manual intervention.
  3. Integrated with AWS ecosystem: It seamlessly integrates with other AWS services, providing a comprehensive and secure environment for deploying applications.
  4. Cost-Effective: Pay only for the resources used without having to manage underlying servers, potentially reducing the cost of running web applications and APIs.
  5. No infrastructure management: It eliminates the need to provision, configure, or manage servers, load balancers, and scaling mechanisms.

Comparison with Other Providers:

  • DigitalOcean App Platform: Like AWS App Runner, DigitalOcean’s App Platform is a platform-as-a-service (PaaS) offering that allows developers to build, deploy, and scale apps quickly. While App Runner focuses on the AWS ecosystem, DigitalOcean’s App Platform provides a simpler, more straightforward pricing model and is often appreciated for its user-friendly interface and lower entry cost.
  • Heroku: Heroku is another popular PaaS that simplifies the deployment process. It’s known for its ease of use and was one of the pioneers in platform as a service. Heroku supports multiple programming languages and has a strong add-on ecosystem. Compared to AWS App Runner, Heroku may offer more flexibility regarding language and framework support but might not provide the same level of integration with AWS services or the same scale of infrastructure.
  • Google Cloud Run: This is Google’s fully managed compute platform that automatically scales your stateless containers. Google Cloud Run is similar to AWS App Runner in that it abstracts away the infrastructure, allowing developers to focus on their applications. Cloud Run is deeply integrated with Google Cloud services and offers both a fully managed environment and an Anthos-based environment for more control.

Deploying Apps Using AWS App Runner:

  1. Preparation:
    • Have a Docker container image ready in a registry (Amazon ECR or any public registry) or source code hosted in a supported repository (e.g., GitHub).
    • Ensure your application listens for HTTP requests on the port defined by the PORT environment variable.
  2. Creating a Service:
    • Go to the AWS App Runner console and choose to create a new service.
    • Select the source of your application (repository or container registry) and configure the settings (e.g., deployment method, instance configuration, and environment variables).
    • Configure the service settings, including the automatic deployment of source changes if desired.
  3. Deployment:
    • After configuring the service, deploy your application. App Runner will build and deploy the application, handling all the infrastructure, including networking, security, and scaling.
  4. Monitoring and Management:
    • Once deployed, you can monitor the application’s health and performance directly from the AWS console.
    • App Runner provides automatic scaling, but you can adjust settings and configurations as needed.

Demo of Deploying a Node.js Application using App Runner

Go to the App Runner Service Page in AWS Console -> click “Create an App Runner Service”

We will be presented with two deployment methods:

  1. Container registry: using Amazon ECR (Elastic Container Registry); both private and public images can be used
  2. Source code repository (for this we must select a provider: GitHub or Bitbucket)

In this article, I will go with GitHub (select Source code repository -> Provider -> GitHub).

For the GitHub connection, if we are new, we first need to add a GitHub connection so App Runner can access the repositories in our account.

After authorizing the "AWS Connector" app, a window will ask us to name the connection. The name can be anything that lets us easily distinguish the GitHub account we authorized.

And then, we need to install the AWS Connector app to authorize the repositories.

After configuring the authorization part, we can see the list of the repositories we allowed the AWS Connector to access.

Then, we will select the repository we need for deployment and the branch that we need to get deployed.

We can configure the Source Directory: if we provide a folder path, the build and start commands that we supply in the next step will be executed in that directory. By default, the source directory is the root directory.

In the Deployment settings option, we can select either Manual or Automatic deployment. Automatic deployment triggers a new deployment whenever the configured branch is updated.

On the next screen we will be shown the Build settings.

Currently, we can deploy:

  • Corretto
  • .NET
  • Go
  • Node.js
  • Python
  • PHP
  • Ruby

So, let's go with Node.js.

If we have a specific configuration customized at the repository level, we can select the 'Use a configuration file' option, which reads the 'apprunner.yaml' file for the build settings.
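For reference, a minimal apprunner.yaml for a Node.js app might look like the following (a sketch; the runtime identifier and values are assumptions you should adapt to your own project):

```yaml
# Minimal App Runner configuration file, kept at the repository root.
version: 1.0
runtime: nodejs16          # managed Node.js runtime
build:
  commands:
    build:
      - npm install        # build command
run:
  command: node index.js   # start command
  network:
    port: 3000             # port your app listens on
```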

I will go with setting up everything right here,

so for the build command:

npm install

Start command:

node index.js

On the next page, we will be asked to configure the service settings:

According to our application needs, we can customize the CPU and Virtual memory allocation along with setting up the Environment Variables.

We can set up the IAM policy, and Autoscaling configurations on this page. For now, let’s keep everything with default values and set the service first.

The next page will show all the configurations for review; if everything looks good, click "Create & deploy".

On the next page, we will be redirected to the App Runner service detail page, where we can see a message that the app is currently being deployed. We can also see the app's logs.

The following image shows what the app runner properties will look like. We will get a default domain or we can customize it with our domain as well.

Now, whenever we make changes to the branch that we configured to deploy, the deployment will be triggered. That's the beauty of App Runner.

AWS App Runner is particularly useful for developers looking to deploy applications quickly without the hassle of managing infrastructure. It stands out for its integration with the AWS ecosystem and its ability to scale applications automatically. When compared to other services like DigitalOcean’s App Platform, Heroku, and Google Cloud Run, the choice often comes down to the specific needs of the project, cost considerations, and preferred cloud ecosystem. Each platform has its strengths, and the best choice depends on the requirements of the application you’re deploying.

Send SMS in Node.js using Nexmo

Hi! As a JS enthusiast, I love doing JavaScript development because of its ease of use and the large community there to help.

Today, I'm going to show you how to implement SMS sending in Node.js using the Nexmo service provider.

A lot of people are interested in start-ups and need to grow their businesses using SMS promotions.

Let’s move to the coding portion.

First of all, you have to create Nexmo account: Click Here

Now, create a new folder with a name you want

Create package.json using

npm init


We have to install express & body-parser in the application.

npm install express body-parser --save

Then, create an index.js file in the root folder

const express = require('express');        // importing express to use
const bodyParser = require('body-parser'); // importing body parser to get the body input

const app = express();
app.use(bodyParser.urlencoded({ extended: true }));

require('./controller')(app);              // passing app, which is an express instance, to the controller file

const server = app.listen(3000);           // configure the server to run on port 3000
console.log("Server working on 3000")

Now, create the controller.js file in the root folder.

module.exports = function (app) {
	const Nexmo = require('nexmo');
	const nexmo = new Nexmo({
	        apiKey: Your_API_KEY,
	        apiSecret: Your_API_SECRET_KEY
	});

	const config = {
	        number: Your_Nexmo_Number // the number registered to your Nexmo account
	};

	// Setting endpoint of /send
	app.post('/send', (req, res) => {
	     // Send SMS (the destination number and message are read from the request body)
	     nexmo.message.sendSms(
	         config.number, req.body.toNumber, req.body.message, { type: 'unicode' },
	         (err, responseData) => {
	             if (responseData) console.log(responseData);
	             else console.log(err);
	             res.send(responseData || err);
	         }
	     );
	});
};

Here we are using the app instance that we declared in index.js.

Then we import the Nexmo module and create an instance of it; for apiKey and apiSecret you have to provide your Nexmo account credentials.

In the config object, you have to put the mobile number registered to your Nexmo account as the value of the number attribute.

After that, we define a POST endpoint at "/send", wrapping the nexmo.message.sendSms() method, which Nexmo predefines for sending messages, inside the request handler.

Finally, we log the message details to the terminal/command line.

Now, Start the server using

node index.js

And test it with Postman.

If you get a "Non White-listed Destination — rejected" error, you have to register the destination number in your Nexmo account.

To do that Click Here

Yes, you have done it.

Happy Coding Folks..!!

Annotations & their uses in Java

Hello, I'm a newbie to the Java world and new to Spring Boot, so I don't have prior experience with Spring either. As a newcomer, I'm writing this for people who are new to Java. Experts, correct me in the comments if anything is wrong.

What is Annotation..?

a note by way of explanation or comment added to a text or diagram.

Above is the dictionary definition of annotation.

In Java

An annotation is a tag used on

  1. methods
  2. classes
  3. interfaces

that adds more information about them for the Java compiler / JVM (Java Virtual Machine).

You may have already come across some built-in annotations while doing basic Java work.


  1. @Override — used to indicate that a method overrides the parent class method in a subclass
  2. @SuppressWarnings — used to suppress warnings issued by the compiler.
  3. @Deprecated — the compiler prints a warning because the method is deprecated and could be removed in future versions, so it is better not to use such a method.

Let’s have a look at Java Custom Annotation

Java Custom Annotation or Java User defined annotations are easy to use and create (That means even you and me can create an annotation according to our need).

@interface element is used to declare an annotation.

Like this

@interface parathan{}

To create an annotation, it should have the following characteristics:

  • its methods should not have parameters
  • its methods should not have a throws clause
  • its methods may have default values
  • its methods should return a primitive type, String, Class, an enum, an annotation, or an array of these

Types of Annotation

  1. Marker Annotation
  2. Single-value Annotation
  3. Multi-value Annotation

1. Marker Annotation — an annotation that has no methods inside it.

Eg: @Deprecated, @Override

@interface parathan{}

2. Single-value Annotation — an annotation that has one method

@interface parathan{

int value();

}

A default value can be provided for it with the following code snippet:

@interface parathan{

int value() default 0;

}


Applying a single-value annotation in code:

@parathan(value=20)
class Demo{} // Demo is just an illustrative class name
3. Multi-value Annotation — an annotation that has more than one method

@interface parathan{

int age();

String name();

String country();

}

Applying a multi-value annotation can be done as follows:

@parathan(age=20, name="Parathan Thiyagalingam", country="Sri Lanka")

Built in Annotations used in Custom Annotations

  1. @Target
  2. @Retention
  3. @Inherited
  4. @Documented

1. @Target —

used to indicate the element types to which the annotation can be applied.

For that we have to import java.lang.annotation.ElementType

we use

@Target(ElementType.SOME_THING)

Here SOME_THING needs to be replaced by one of the following keywords, depending on where the annotation will be applied.

If you are going to use it for "class, interface, enumeration":

@Target(ElementType.TYPE)

If you are going to use it for methods:

@Target(ElementType.METHOD)

An example of using it for both class and method follows:

@Target({ElementType.TYPE, ElementType.METHOD})
@interface parathan{

int age();

String name();

}

2. @Retention

Used to indicate at what level the annotation will be available.

There are 3 levels:

  • SOURCE — refers to the source code; the annotation will not be available in the compiled class.
  • CLASS — refers to the .class file; the annotation is available to the Java compiler but not to the JVM, so it will only be in the class file.
  • RUNTIME — refers to runtime; the annotation is available to both the Java compiler and the JVM.

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface parathan{

int age();

String name();

}

The above snippet means the annotation will be available at run time, and the annotation is targeted at classes.

3. @Inherited :

Normally, annotations are not inherited by subclasses; to make an annotation available to subclasses, @Inherited is used.

@Inherited
@interface parathan{ } //Now it will be available to subclasses also

@parathan
class MainClass{}

class SubClass extends MainClass{} //sub class extends the super class

As the annotation "parathan" is marked @Inherited and MainClass is annotated with it, SubClass, which extends MainClass, also carries the annotation.


4. @Documented :

It is used to include the annotation in the generated Javadoc documentation.

I'm writing about annotations here because I'm publishing a series on building a REST API in Java using Spring Boot & MySQL in parts.

So, there we are using a lot of annotations to make our tasks easy.

If I made any mistakes, please comment below. And do share this with your friends.

Happy Coding Folks…!!

Split & Map a String in Java

Hi, everyone. I've been learning the basic concepts of Java since last month, and while learning I did some examples on my own. In future posts I'm planning to share that code with you, along with what I learnt and how I did it.

Here, I want to show you how I split & mapped a string that is in the following format:

key1=value1;key2=value2;key3=value3
This is my code

import java.util.*;

public class App {

    public static void main (String args[]) {

        Map<String, String> map = new HashMap<String, String>();
        String gotData = "key1=value1;key2=value2;key3=value3";

        if (gotData.equals("")) {   // == compares references, so use equals() for strings
            map = null;
        } else {
            for (String keyValue : gotData.split(";")) {
                String[] key = keyValue.split("=");
                map.put(key[0], key[1]);
            }
        }

        Map<String, String> finalOutput = map;
        System.out.println(finalOutput);
    }
}

Could you work it out?

Let me explain it according to my knowledge.

I imported the java.util package to use Map, HashMap, and put, which belong to the util package.

Then, in the App class, I declared a map of type Map<String, String>.

The for loop is used to iterate until the string is fully read by the program.

The if/else is there to guard against the empty-string case and avoid the "ArrayIndexOutOfBoundsException", a very common exception when using maps and arrays in Java.

I split the string by ";" and "=", assigning index 0 of each split pair as the key and index 1 as the value of the map.

This is what I got as the output

{key1=value1, key2=value2, key3=value3}

Screenshot of what I got as output using Debugger mode:

value1 assigned to key1

value2 assigned to key2

value3 assigned to key3

If you have another simple code Share it in the comment.


Simple CRUD App with DynamoDB – API Gateway – Node.js Serverless with Lambda

Imagine you have a big box of Legos. With these Legos, you can build lots of different things, like houses, cars, and even spaceships. But to build these things, you need a place to keep your Legos and enough pieces to make whatever you want.

Now, think of Amazon Web Services as a huge, virtual box of Legos that lives on the internet. Instead of building houses or cars, people use AWS to build websites, store information, and do lots of other important computer work.

Just like you can share your Legos with friends, people can use AWS to share their creations with others over the internet. They can build big websites, store lots of pictures and videos, or even make new apps for phones and tablets.

The cool part is that you don’t need to have all the computer pieces at your home. AWS has a super big computer system that anyone can use over the internet. So, it’s like having a never-ending supply of Legos that you can use anytime you want to build something new and exciting!

AWS provides different services. Following are some examples of the services provided by AWS:

  1. EC2 (Elastic Compute Cloud): Think of EC2 like renting a powerful computer on the internet. You can use this computer to run websites, applications, or any software you need. It’s like having a really good computer that you can use for a while, without actually buying it.
  2. S3 (Simple Storage Service): This is like a huge online storage locker. You can put all sorts of things in it like photos, videos, and documents. It’s always there when you need it, and you can store as much as you want and get it back whenever you need.
  3. RDS (Relational Database Service): Imagine a big, super-organized filing cabinet for all your data. RDS lets you store and manage lots of information (like customer details or product information) in a very organized way. It’s like having a librarian who keeps all your data neat and tidy.
  4. Lambda: This is a bit like having a magic genie. You tell Lambda a small task you want done, like resizing a photo when you upload it, and Lambda does it for you automatically. You don’t need to worry about how it gets done; Lambda takes care of it.
  5. CloudFront: Think of CloudFront as a super-fast delivery truck for your website. It makes sure that when people visit your website, they get everything they need (like pictures and videos) really quickly, no matter where in the world they are.
  6. IAM (Identity and Access Management): IAM is like the security guard for your AWS services. It helps you manage who is allowed to do what. Like, who can open your storage locker, or who can use your rented computer. It makes sure that only the people you trust can use your AWS stuff.

These are just a few of the many services AWS offers, but they’re some of the most popular and widely used. Each service is designed to make specific tasks easier and more efficient for businesses and developers.

In this article, I am going to build a Simple CRUD application using the AWS Services such as Dynamo DB, API Gateway & Lambda along with IAM Role.

Imagine you have a small library of children’s storybooks. You want to create a simple system where kids can check online to see if their favorite book is available. Here’s how the AWS services fit in:

  1. DynamoDB: This is like your digital catalog of books. Each book in your library is listed here with details like the title, author, and whether it’s currently available or borrowed.
  2. Lambda: Think of Lambda as your helpful librarian. When a kid wants to know if a book is available, Lambda checks the DynamoDB catalog for them. It looks up the book and tells them if it’s there or not.
  3. API Gateway: This is the library’s front desk. Kids come here (through a website or app) and ask, “Is ‘Where the Wild Things Are’ available right now?” The API Gateway takes this question and passes it to Lambda, the librarian, to get the answer.
  4. IAM Role: The IAM Role is like a set of rules for who can do what in the library. It makes sure that only the librarian (Lambda) can access the book catalog (DynamoDB) and that only the front desk (API Gateway) can talk to the librarian. This keeps everything secure and running smoothly.

So, when a kid asks if a book is available, their question goes to the API Gateway, which then asks Lambda. Lambda checks the DynamoDB catalog and then tells the kid if the book is available or not, all in just a few seconds. This makes it really easy for kids to find out if they can borrow their favorite storybook!

Now, let's take this as an example and build a small CRUD application.

Let’s first start with creating a Dynamo DB table.

Go to AWS Console and Search for Dynamo DB service,

Click on Tables in the Sidebar -> Click “Create table

Under Table Name: type whatever table name you want to have

-> children_library

For Partition Key, it should be the Primary key of the table and it should be unique as well.

-> book_id

Keep other things as default values and click the Create table button.

This will take some time to create the DB.

Now, let's move on to creating a Lambda function, which will act as the backend: based on the API request, it will retrieve or write data to the DynamoDB table we created.

Search and click on “Lambda” service:

Click on “Functions” in the sidebar -> click “Create function

Select -> Author from Scratch

Function name -> serverless-api

Runtime -> Node.js 16.x

In the Change default execution role

Check -> “Create a new role from AWS policy templates

Role name -> type any meaningful name -> serverless-api-role

Keep others as default and click “Create function

This will take up a few seconds to create the function

Since we created a role for this, let's go to the IAM service to give the role permission to access DynamoDB & CloudWatch Logs (I will do a separate post on CloudWatch later). Click "Roles" in the sidebar.

Click on the role that you assigned for Lambda,

In the Permissions tab -> Permissions policies -> click “Add permissions” -> Attach policies ->

Search and apply the following Policies

  1. AmazonDynamoDBFullAccess
  2. CloudWatchLogsFullAccess

Why to add these policies to this role:

  1. AmazonDynamoDBFullAccess: This policy provides full access to Amazon DynamoDB. When you assign this policy to a Lambda function, it grants the function permissions to perform all actions on DynamoDB. This includes creating, reading, updating, and deleting data in DynamoDB tables. This is crucial if your Lambda function needs to interact with a DynamoDB database, such as retrieving data for processing, storing results, or maintaining a database.
  2. CloudWatchLogsFullAccess: This policy grants full access to Amazon CloudWatch Logs. CloudWatch is a monitoring and observability service. By assigning this policy to a Lambda function, you enable the function to create and manage log groups and log streams in Amazon CloudWatch Logs. This is essential for logging the operation of your Lambda function, which helps in monitoring its performance, debugging issues, and keeping records of its activity. Effective logging is a key aspect of understanding and maintaining the health of applications running on AWS.
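The FullAccess policies above are the quick path for a demo. For context, a more narrowly scoped inline policy covering just this table could look like the following (a sketch; REGION and ACCOUNT_ID are placeholders, and the action list matches only the operations this article performs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:Scan"
      ],
      "Resource": "arn:aws:dynamodb:REGION:ACCOUNT_ID:table/children_library"
    }
  ]
}
```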

Now let's create API endpoints in API Gateway.

Search for “API Gateway“,

Click on APIs in sidebar:

Click “Create API

A new page will be shown up with “Choose an API type“:

The following types are available at the time of writing this article:

  1. HTTP API
  2. WebSocket API
  3. REST API
  4. REST API Private

Let's walk through a brief overview of these API types:

  1. HTTP API:
    • Purpose: HTTP APIs in AWS are primarily used for building scalable and cost-effective RESTful APIs.
    • Key Features:
      • Designed for low-latency, cost-effective integrations.
      • Support for OAuth 2.0 and JWT (JSON Web Token) authorization.
      • Ideal for serverless workloads using AWS Lambda.
      • Simplified routing and payload formatting.
    • Use Case: Best suited for simple, HTTP-based backend services where routing, authorization, and scalability are important.
  2. WebSocket API:
    • Purpose: WebSocket APIs are used to create real-time, two-way communication applications.
    • Key Features:
      • Enables server to push real-time updates to clients.
      • Maintains a persistent connection between client and server.
      • Supports messaging or chat applications, real-time notifications, and more.
    • Use Case: Ideal for applications that require real-time data updates, such as chat applications, real-time gaming, or live notifications.
  3. REST API:
    • Purpose: REST APIs (Representational State Transfer) in AWS are for creating fully-featured RESTful APIs. They offer more features compared to HTTP APIs
    • Key Features:
      • Supports RESTful interface and can work with JSON, XML, or other formats.
      • Advanced features like request validation, request and response transformations, and more.
      • Detailed monitoring and logging capabilities.
    • Use Case: Suitable for APIs that require complex API management, extensive integration capabilities, and legacy system support.
  4. REST API Private:
    • Purpose: Private REST APIs are designed to provide RESTful API features accessible only within your Amazon Virtual Private Cloud (VPC).
    • Key Features:
      • Restricted access: Only available to applications within your VPC or those using a VPN to your VPC.
      • Integration with AWS services within a VPC without exposing them to the public internet.
      • All features of REST APIs but with private network access.
    • Use Case: Best for internal microservices communication, or when the API needs to be exposed to a limited set of internal clients within a private network.

Let’s choose “REST API” for this article,

Click -> “Build

choose “New API

API name: Children Library API

choose API endpoint type as -> Regional

Let’s see what are the other types available,

  1. Regional
  2. Edge-optimized
  3. Private
  1. Regional Endpoint:
    • Purpose: A regional API endpoint is intended for clients that are geographically close to the region where the API is deployed.
    • Key Features:
      • The API is deployed in a specific AWS region.
      • Reduced latency for in-region clients, as requests don’t have to travel across regions.
      • Ideal for region-specific applications or when most of the API traffic originates from the same region.
    • Use Case: Best suited for localized applications where the users are predominantly in the same region as the AWS services.
  2. Edge-Optimized Endpoint:
    • Purpose: Edge-optimized API endpoints are designed for global clients to minimize latency by routing traffic through AWS’s global network of edge locations (part of the Amazon CloudFront content delivery network).
    • Key Features:
      • Automatically routes client requests to the nearest CloudFront Point of Presence.
      • Helps in reducing latency for global clients as requests are handled by edge locations closer to the user.
      • Useful for widely distributed client bases.
    • Use Case: Ideal for applications with a global user base, needing low latency across different geographical locations.
  3. Private Endpoint:
    • Purpose: A private API endpoint is used within an Amazon Virtual Private Cloud (VPC) to provide secure access to clients within the VPC or connected networks.
    • Key Features:
      • Accessible only within the VPC or through a secure connection (like VPN or Direct Connect) from on-premises networks.
      • Ensures that the API is not exposed to the public internet.
      • Integrates with AWS services within a VPC while maintaining privacy and security.
    • Use Case: Best for internal APIs, where the services are meant to be accessed only within a corporate network or a controlled environment.

Now you know which one to choose.

Let's select Regional for demo purposes.

click “Create api

On the Resources page, you will see an empty list of routes. So now we are going to create them.

Click "Create resource"

Let's start with a health API, to check whether the API is healthy or not.

In the Resource name -> health

Below the form, check CORS (Cross-Origin Resource Sharing). This will create an additional OPTIONS method that allows all origins and common headers. (Enable this to avoid CORS-related issues when accessing the API from a front-end.)

Click “Create resource

Now click on “/health” -> Let’s create a ‘GET’ method to check

You can see “OPTIONS” method is created.

Click -> Create method

Method Type:-> GET

Integration Type -> Lambda function

Enable the toggle for "Lambda proxy integration"

Lambda function -> choose the region where we created the Lambda function, then select the correct Lambda function.

Keep Default timeout as it is.

Similarly create resources for following APIs

  1. /book
    • POST – create / add a new book to the DB
    • GET (by ID) – retrieve a book by ID
    • PUT/PATCH – update the details of a book
    • DELETE – delete a book from the DB
  2. /books
    • GET – get all the books' info

Some things that are easy to miss:

  1. Enabling CORS for the resources
  2. Forgetting to turn on "Lambda proxy integration"

If you have already created resources, you can still enable CORS afterwards: just click on any resource -> click 'Enable CORS' under Resource Details.

Similarly, we need to do for other resources as well.

Now let’s deploy the API to the public world.

Click on “Deploy API”

In the Stage field, click "New Stage"; a new field named "Stage name" will appear.

Type whatever stage name you want: I am giving "dev".

Then click "Deploy".

Now, go to “Stages” in the sidebar,

Click on the “dev” stage. You will see the Invoke URL. Copy and check in the Postman for all the APIs we have created till now.

But before testing, we need the route-handling logic in the Lambda function, which will process the data based on the route, whether it's creating, updating, or retrieving data.

So, paste the following JavaScript code in the Code source section of the Lambda function we created.

Try to insert a record into the DB:

Get by bookId:

Hope you enjoyed this article. We have just developed a simple app with basic CRUD operations.

I am planning to implement a small working application with authentication using AWS Cognito, uploading files to AWS S3, and a few other AWS services.

Let's see!

Why to enable CORS?

Enabling CORS (Cross-Origin Resource Sharing) in API Gateway in AWS is crucial for controlling how your API is accessed from different origins, particularly web browsers. CORS is a security feature that allows you to specify which domains are permitted to access your API. It’s especially important for APIs that are called from web applications hosted on a different domain than the API itself.

Why Enable CORS:

  1. Browser Security: Modern web browsers enforce the same-origin policy, which prevents a web page from making requests to a different domain than the one that served the web page. CORS is a way for the server to tell the browser that it’s okay to allow a request from a different origin.
  2. Control Access: CORS allows you to specify which domains can access your API, giving you control over the consumption of your API resources.
  3. Avoid CORS Errors: Without proper CORS settings, browsers will block frontend applications from receiving responses from your API, leading to CORS errors.
  4. API Testing and Development: During development, your frontend and backend might be hosted on different servers (e.g., localhost for frontend and a separate domain for API), necessitating CORS for seamless integration testing.
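One practical note: with Lambda proxy integration (as used in this article), API Gateway passes the function's response through as-is, so the function itself must also attach CORS headers. A small illustrative helper (the '*' origin is a permissive placeholder; restrict it to your frontend's domain in practice):

```javascript
// Wrap a response body with the CORS headers a browser expects.
const withCors = (statusCode, body) => ({
  statusCode,
  headers: {
    'Access-Control-Allow-Origin': '*', // placeholder: use your frontend's origin
    'Access-Control-Allow-Methods': 'GET,POST,PUT,PATCH,DELETE,OPTIONS',
    'Access-Control-Allow-Headers': 'Content-Type',
  },
  body: JSON.stringify(body),
});

// Example: a 200 response that any origin may read.
console.log(withCors(200, { ok: true }));
```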