Parathan Thiyagalingam

Sharing my learnings....

Deploying a Node JS Application with AWS App Runner

AWS App Runner is a fully managed service provided by Amazon Web Services (AWS). It makes it easy for developers to quickly deploy containerized web applications and APIs, at scale and with no prior infrastructure experience required. It automates management tasks such as provisioning, scaling, and security, enabling developers to focus on writing code rather than managing infrastructure.

How It’s Useful:

  1. Simplicity and Speed: It simplifies the deployment process by abstracting away the underlying infrastructure. Developers can deploy their applications quickly by providing the source code or a container image.
  2. Scalability: App Runner automatically scales the application up or down based on traffic, ensuring that the application can handle peaks in demand without manual intervention.
  3. Integrated with AWS ecosystem: It seamlessly integrates with other AWS services, providing a comprehensive and secure environment for deploying applications.
  4. Cost-Effective: Pay only for the resources used without having to manage underlying servers, potentially reducing the cost of running web applications and APIs.
  5. No infrastructure management: It eliminates the need to provision, configure, or manage servers, load balancers, and scaling mechanisms.

Comparison with Other Providers:

  • DigitalOcean App Platform: Like AWS App Runner, DigitalOcean’s App Platform is a platform-as-a-service (PaaS) offering that allows developers to build, deploy, and scale apps quickly. While App Runner focuses on the AWS ecosystem, DigitalOcean’s App Platform provides a simpler, more straightforward pricing model and is often appreciated for its user-friendly interface and lower entry cost.
  • Heroku: Heroku is another popular PaaS that simplifies the deployment process. It’s known for its ease of use and was one of the pioneers in platform as a service. Heroku supports multiple programming languages and has a strong add-on ecosystem. Compared to AWS App Runner, Heroku may offer more flexibility regarding language and framework support but might not provide the same level of integration with AWS services or the same scale of infrastructure.
  • Google Cloud Run: This is Google’s fully managed compute platform that automatically scales your stateless containers. Google Cloud Run is similar to AWS App Runner in that it abstracts away the infrastructure, allowing developers to focus on their applications. Cloud Run is deeply integrated with Google Cloud services and offers both a fully managed environment and an Anthos-based environment for more control.

Deploying Apps Using AWS App Runner:

  1. Preparation:
    • Have a Docker container image ready in a registry (Amazon ECR or any public registry) or source code hosted in a supported repository (e.g., GitHub).
    • Ensure your application listens for HTTP requests on the port defined by the PORT environment variable.
  2. Creating a Service:
    • Go to the AWS App Runner console and choose to create a new service.
    • Select the source of your application (repository or container registry) and configure the settings (e.g., deployment method, instance configuration, and environment variables).
    • Configure the service settings, including the automatic deployment of source changes if desired.
  3. Deployment:
    • After configuring the service, deploy your application. App Runner will build and deploy the application, handling all the infrastructure, including networking, security, and scaling.
  4. Monitoring and Management:
    • Once deployed, you can monitor the application’s health and performance directly from the AWS console.
    • App Runner provides automatic scaling, but you can adjust settings and configurations as needed.

Demo of Deploying a Node JS Application using App Runner

Go to the App Runner service page in the AWS Console -> click “Create an App Runner service”

We are presented with two deployment source types:

  1. Docker images: using Amazon ECR (Elastic Container Registry); both private and public images can be used
  2. Source code repository (for this we must select a provider: GitHub or Bitbucket)

In this article, I will go with GitHub (select Source code repository -> Provider -> GitHub).

For the GitHub connection: if we are new, we need to add a GitHub connection to App Runner so it can access the repositories in our account.

After authorizing the “AWS Connector” app, a window will ask us to name the connection. The name can be anything that makes it easy to identify the GitHub account we authorized.

Then we need to install the AWS Connector app to authorize the repositories.

After configuring authorization, we can see the list of repositories we allowed the AWS Connector to access.

Then we select the repository we want to deploy and the branch to deploy from.

We can also configure the Source Directory: if we provide a folder path, the build and start commands from the next step will be executed in that directory. By default, the source directory is the repository root.

In the Deployment settings, we can select either Manual or Automatic deployment. Automatic deployment triggers a new deployment whenever the configured branch is updated.

On the next screen we configure the Build settings:

Currently, we can deploy

  • Corretto
  • .NET
  • Go
  • NodeJS
  • Python
  • PHP
  • Ruby

applications.

So, let’s go with Node JS.

If we have any specific configuration customized at the repository level, we can select the ‘Use a configuration file’ option, which reads the apprunner.yaml file from the repository for the build settings.
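For reference, a minimal apprunner.yaml for a Node app might look roughly like this (the values are illustrative and match the build and start commands used below; check the App Runner docs for the full schema):

version: 1.0
runtime: nodejs16
build:
  commands:
    build:
      - npm install
run:
  command: node index.js
  network:
    port: 3000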

I will go with setting everything up right here.

So for the build command:

npm install

Start command:

node index.js
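For reference, here is a minimal sketch of the index.js being deployed (my own illustrative example; any Node app works). It listens on the PORT environment variable mentioned in the preparation step, falling back to 3000 locally:

const http = require('http');

// App Runner provides the port via the PORT environment variable;
// fall back to 3000 for local runs
const port = process.env.PORT || 3000;

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ message: 'Hello from App Runner!' }));
});

server.listen(port, () => console.log(`Listening on ${port}`));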

On the next page, we configure the service settings:

According to our application’s needs, we can customize the CPU and memory allocation and set up environment variables.

We can also set up the IAM policy and auto scaling configuration on this page. For now, let’s keep the default values and create the service first.

The next page shows all the configuration for review; if everything looks good, click “Create & deploy”.

We are then redirected to the App Runner service detail page, where we can see a message that the app is currently being deployed. We can also view the app’s logs.

The following image shows what the App Runner properties look like. We get a default domain, or we can configure a custom domain as well.

Now, whenever we make changes to the branch we configured for deployment, a deployment will be triggered. That’s the beauty of App Runner.

AWS App Runner is particularly useful for developers looking to deploy applications quickly without the hassle of managing infrastructure. It stands out for its integration with the AWS ecosystem and its ability to scale applications automatically. When compared to other services like DigitalOcean’s App Platform, Heroku, and Google Cloud Run, the choice often comes down to the specific needs of the project, cost considerations, and preferred cloud ecosystem. Each platform has its strengths, and the best choice depends on the requirements of the application you’re deploying.

Send SMS in NodeJS using Nexmo

Hi! As a JS enthusiast, I love JavaScript development for its ease and the large community ready to help.

Today I’ll show you how to implement an SMS-sending facility in Node.js using the Nexmo service provider.

A lot of people are interested in start-ups and want to grow their businesses using SMS promotions.

Let’s move to the coding portion.

First of all, you have to create a Nexmo account on the Nexmo website.

Now, create a new folder with any name you want.

Create a package.json using

npm init

Now, we have to install express, body-parser & nexmo for the application to use:

npm install express body-parser nexmo --save

Then, create an index.js file in the root folder:

const express = require('express');
// importing express

const bodyParser = require('body-parser');
// importing body-parser to read the request body

const app = express();
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));

require('./controller.js')(app);
// passing app, which is an express instance, to the controller file

const server = app.listen(3000);
// configure the server to run on port 3000

console.log("Server working on 3000");

Now, create the controller.js file in the root folder.

module.exports = function (app) {
	const Nexmo = require('nexmo');
	const nexmo = new Nexmo({
		apiKey: 'YOUR_API_KEY',
		apiSecret: 'YOUR_API_SECRET_KEY'
	});

	const config = {
		number: 'YOUR_REGISTERED_MOBILE_NUMBER'
	};

	// Setting up the /send endpoint to send an SMS
	app.post('/send', (req, res) => {
		nexmo.message.sendSms(
			config.number,       // from: your registered Nexmo number
			req.body.toNumber,   // to: taken from the request body
			req.body.message,    // text: taken from the request body
			{ type: 'unicode' },
			(err, responseData) => {
				if (err) {
					return res.status(500).send(err);
				}
				console.log(responseData);
				res.send(responseData); // reply to the client with Nexmo's response
			}
		);
	});
};

Here we use the app instance that we declared in index.js.

Then we import the Nexmo module and create an instance of it; for apiKey & apiSecret you have to provide your Nexmo account credentials.

In the config object, you have to put the mobile number registered to your Nexmo account as the value of the number attribute.

After that, we define a POST endpoint at /send, which wraps nexmo.message.sendSms(), Nexmo’s predefined method for sending a message.

Finally, we log the message details to the terminal/command line and return them to the client.

Now, start the server using

node index.js

And test it with Postman.
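If you prefer the command line, an equivalent request looks like this (the destination number and message are placeholders):

curl -X POST http://localhost:3000/send -H "Content-Type: application/json" -d '{"toNumber": "94771234567", "message": "Hello from Nexmo!"}'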

If you get a “Non White-listed Destination — rejected” error, you have to whitelist the destination number in your Nexmo account (trial accounts can only send to verified numbers). You can do this from your Nexmo dashboard.

Yes, you have done it.

Happy Coding Folks..!!

Annotations & their uses in Java

Hello! I’m a newbie to the Java world and new to Spring Boot, so I don’t have prior experience with Spring either. As a newcomer, I’m writing this for people who are new to Java. Experts, correct me in the comments if anything is wrong.

What is an Annotation?

a note by way of explanation or comment added to a text or diagram.

Above is the dictionary definition of annotation.

In Java

An annotation is a tag used on

  1. methods
  2. classes
  3. interfaces

to add more information about them for the Java compiler / JVM (Java Virtual Machine).

You may have already come across some built-in annotations while doing basic Java work.

Like,

  1. @Override — used to indicate that a subclass method overrides a parent-class method
  2. @SuppressWarnings — used to suppress warnings issued by the compiler
  3. @Deprecated — the compiler prints a warning because the method is deprecated and could be removed in future versions, so it’s better not to use such methods

Let’s have a look at Java Custom Annotation

Java Custom Annotation or Java User defined annotations are easy to use and create (That means even you and me can create an annotation according to our need).

The @interface element is used to declare an annotation.

Like this

@interface parathan{}

To create an annotation, the declaration should have the following characteristics:

  • its methods should not have parameters
  • its methods should not have a throws clause
  • its methods may have default values
  • its methods should return a primitive data type, String, Class, enum, or annotation (or an array of these)

Types of Annotation

  1. Marker Annotation
  2. Single-value Annotation
  3. Multi-value Annotation

1. Marker Annotation — an annotation that has no methods inside it.

Eg: @Deprecated, @Override

@interface parathan{}

2. Single-value Annotation — an annotation that has exactly one method

@interface parathan{

int value();

}

A default value can be provided with the following code snippet:

@interface parathan{

int value() default 0;

}

Applying a single-value annotation in code:

@parathan(value=10)

3. Multi-value Annotation — an annotation that has more than one method

@interface parathan{

int age();

String name();

String country();

}

Applying a multi-value annotation:

@parathan(age=20, name="Parathan Thiyagalingam", country="Sri Lanka")

Built-in Annotations used in Custom Annotations

  1. @Target
  2. @Retention
  3. @Inherited
  4. @Documented

1. @Target

Used to indicate the kinds of elements to which the annotation can be applied.

For that we have to import java.lang.annotation.ElementType

we use

@Target(ElementType.SOME_THING)

Here SOME_THING is replaced by one of the following constants, depending on where the annotation will be applied.

If you are going to use it for classes, interfaces, or enumerations, then:

@Target(ElementType.TYPE)

If you are going to use it for methods, then:

@Target(ElementType.METHOD)

Credit: Javatpoint.com

An example that targets both classes and methods:

@Target({ElementType.TYPE, ElementType.METHOD})
@interface parathan{
    int age();
    String name();
}

2. @Retention

Used to indicate at what level the annotation will be retained.

There are 3 levels:

  • SOURCE — retained only in the source code; the annotation is discarded by the compiler and is not present in the compiled class.
  • CLASS — recorded in the .class file by the compiler but not available to the JVM at runtime (this is the default).
  • RUNTIME — recorded in the .class file and available to both the Java compiler & the JVM at runtime.

Eg:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface parathan{
    int age();
    String name();
}

The snippet above declares an annotation that is retained at runtime and targeted at types (classes).

3. @Inherited

Normally, annotations are not inherited by subclasses; to make an annotation available to subclasses, @Inherited is used.

@Inherited

@interface parathan{ } // now it will be available to subclasses also

@parathan

class MainClass{}

class SubClass extends MainClass{} // the subclass extends the superclass

Because the annotation parathan is marked @Inherited and is applied to MainClass, SubClass (which extends MainClass) also carries the annotation.

4. @Documented

It is used to include the annotation in the generated Javadoc.

I’m writing about annotations here because I’m publishing a series on building a REST API in Java using Spring Boot & MySQL in parts, and there we use a lot of annotations to make our tasks easy.

If I made any mistakes, please comment below. Or share this with your friends!

Happy Coding Folks…!!

Split & Map a String in Java

Hi, everyone. I’ve been learning the basic concepts of Java since last month, and while learning I’ve done some examples on my own. In future posts I’m planning to share that code with you, along with what I learnt and how I did it.

Here, I want to show you how I split & mapped a string which is in the following format:

key1=value1;key2=value2;key3=value3

This is my code

import java.util.*;

public class App {

    public static void main(String[] args) {

        Map<String, String> map = new HashMap<String, String>();
        String gotData = "key1=value1;key2=value2;key3=value3";

        if (gotData.isEmpty()) {
            map = null;
        } else {
            for (String keyValue : gotData.split(";")) {
                String[] key = keyValue.split("=");
                map.put(key[0], key[1]); // key at index 0, value at index 1
            }
        }
        Map<String, String> finalOutput = map;

        System.out.println(finalOutput);
    }
}

Could you work it out?

Let me explain it according to my knowledge.

I imported the java.util package to use Map and HashMap (and its put method), which belong to that package.

Then, in the App class, I declared a map with the type Map<String, String>.

The for loop iterates until the string is fully read by the program.

Inside it, the if/else guards against an empty input string, avoiding the ArrayIndexOutOfBoundsException, which is a very common exception when working with maps and arrays in Java.

We split the string by “;” and then by “=”, taking the key from the 0th index and the value from the 1st index of each pair.

This is what I got as the output

{key1=value1, key2=value2, key3=value3}

Screenshot of what I got as output in debugger mode: value1 assigned to key1, value2 to key2, and value3 to key3 (keys at the 0th index, values at the 1st).

If you have another simple way to code this, share it in the comments.

Thanks,

Uki — A life-changing determinant

The youth need to be enabled to become job generators from job seekers — A.P.J.Abdul Kalam

Uki is a Tamil word meaning “catalyst”: a catalyst for the youth of Jaffna & Sri Lanka. It provides a 6-month scholarship-based coding accelerator program for students who don’t get access to a state university or other vocational education.

Information Technology is one of the fastest-booming & most significant fields in Sri Lanka, and Uki is a gateway for students who are fond of IT & technology.

The root concept behind Uki is preparing local youth, elevating their talents & skills so that they can gain industrial internship experience at tech companies around Sri Lanka.

The youth is the hope of our future — Jose Rizal

As a student of Uki Cohort 2, I have a responsibility to share my experience with society, to encourage more youth like me to get empowered & become IT professionals or entrepreneurs in the future.

Web development plays a major role in IT jobs. Accordingly, Uki’s curriculum is a fully scholarship-based, 6-month coding accelerator program in full-stack web development.

Full-stack web development means both Front-end & back-end development.

What is front-end development?

The front end is what we observe in the browser: the page that is visible to a website user. To build it, 3 languages are used:

  • HTML
  • CSS
  • JavaScript

In addition to these, some frameworks that help design the front end were taught in Uki:

  • React JS — a front-end framework of JavaScript
  • Sass — a preprocessor scripting language that is compiled into CSS

What is the back end?

The back end supports the front end behind the scenes to make the website dynamic; that means we can make requests to a server, get responses, and store or retrieve data in a database.

In that way, the following technologies were taught in Uki:

  • Node JS, with Express — a framework for Node JS
  • Mongo DB as a database

The back-end technologies above are among the latest & fastest-growing in web development.

As new technologies are taught here, students can get more opportunities in the IT industry.

Is coding the only activity we did in Uki…???

No.

We have 2 clubs in Uki to bolster our future professional life.

They are

  1. Uki Fitness Club
  2. Uki Gavel Club

In the Fitness Club, we practiced exercise, yoga & meditation.

Full-time coding can cause anxiety for programmers. The Fitness Club helped us a lot to refresh our minds, breathe fresh air in the morning & stay alert for the whole day.

The Gavel Club helped us a lot to strengthen our English communication & conversation skills and diminished our stage fright. Having a Gavel Club, which is a youth version of Toastmasters, greatly developed our prepared & impromptu speech-delivering capabilities.

We did projects such as:

English Inside: every Monday & Thursday, every student should speak English; those who don’t comply have to give a punishment speech in Uki. At first it looked so weird, but later on it became easy, just like speaking Tamil in our day-to-day life.

Another one provided by the Gavel Club is the Tech Talk. A Tech Talk is a session delivered by one student each week on a technology s/he knows well, explained through a presentation. Through tech talks, we learned about new technologies & practiced searching and preparing for our own talk every week.

There is a huge opportunity for students who graduate from Uki, because Uki teaches students not to memorize or study in the conventional way.

It teaches a simple way of

Search & learn.!!

In addition to self-learning, we practiced professional communication & programming tools such as Slack, Trello & GitHub. These are tools used by popular tech companies around the world.

To get hands-on experience with the technologies we studied in Uki, we did many in-class exercises, 3 assignments with VIVA & 4 fun works, such as:

  1. Getting the API key from the SoundCloud website & creating an online music website with our own creativity
  2. Using CRUD operations — a note-taking app to create, edit & delete notes
  3. A Tic Tac Toe game using the React JS framework
  4. An authentication site with login & sign-up facilities using React JS

We maintained a blog about what we had studied so far in Uki. The habit of blogging what we studied helped a lot to revise the subjects & improved our writing skills.

here is my blog: http://parathanuki2.blogspot.com/

The instructors helped & cared for us to reach where we are now.

We were introduced to online code-learning sites, and from those sites we started to taste the programming fruit.

Having workshops on tech-related topics from people well-recognized in their fields was a good opportunity for us to get deep exposure to those fields.

The main workshops we participated in during Uki Cohort 2 were:

  • Agile — Scrum — Project Management
  • Ionic / Angular JS — Android Development
  • Product Management
  • Project Management
  • IoT Workshop

We gained a lot of knowledge of Scrum methodologies to plan & implement our final project.

The IoT workshop was a fabulous experience of how connected the new world is: everything could be connected to the internet. For example, if the fridge door isn’t closed properly, a security alert could be sent to a house member’s mobile.

As a growing IT enthusiast in the technology world, this kind of workshop helps a lot to improve ourselves to emphasize our interest in programming & to learn new technologies.

Meeting MBA students from Stanford University gave us a new dimension. We shared our final-project ideas, had excellent conversations with a joyful presentation style & got many suggestions to enhance our ideas more vividly.

To enrich the students’ English knowledge, we were enrolled in an English course at British Council Jaffna for about 3 months.

Through the British Council, we became familiar with presentation & email-writing skills for professional life, and the course included a lot of exercises to improve our grammar & speaking skills.

Normally, people in Jaffna step back from studying at the British Council because the cost is high. But Uki offered it as a scholarship, funded by people who care about developing Jaffna & Sri Lanka.

An important key to becoming an IT professional is a well-developed personality.

To build such qualities, Uki gave us personal coaching sessions on

  • Basic Finance
  • Business model and Business model innovation
  • Legal and Professional ethics
  • Sales, Marketing, and promotion

to mold ourselves for the future professional life in the Software industry.

In personal coaching, we learned to behave professionally in the work environment by doing activities like debates and role plays, and by learning to construct our CVs & maintain our LinkedIn profiles.

I can say Uki turned my life onto a different path.

Every Uki student is grateful to the well-wishers & the people who initiated & supported the program: especially Yarl IT Hub; the instructors, Vithushan and Dharshi; our personal coach, Mathangie; & the ITEE Foundation.

Simple CRUD APP with Dynamo DB – API Gateway – Node JS Serverless with Lambda

Imagine you have a big box of Legos. With these Legos, you can build lots of different things, like houses, cars, and even spaceships. But to build these things, you need a place to keep your Legos and enough pieces to make whatever you want.

Now, think of Amazon Web Services as a huge, virtual box of Legos that lives on the internet. Instead of building houses or cars, people use AWS to build websites, store information, and do lots of other important computer work.

Just like you can share your Legos with friends, people can use AWS to share their creations with others over the internet. They can build big websites, store lots of pictures and videos, or even make new apps for phones and tablets.

The cool part is that you don’t need to have all the computer pieces at your home. AWS has a super big computer system that anyone can use over the internet. So, it’s like having a never-ending supply of Legos that you can use anytime you want to build something new and exciting!

AWS provides different services. The following are some examples of the services provided by AWS:

  1. EC2 (Elastic Compute Cloud): Think of EC2 like renting a powerful computer on the internet. You can use this computer to run websites, applications, or any software you need. It’s like having a really good computer that you can use for a while, without actually buying it.
  2. S3 (Simple Storage Service): This is like a huge online storage locker. You can put all sorts of things in it like photos, videos, and documents. It’s always there when you need it, and you can store as much as you want and get it back whenever you need.
  3. RDS (Relational Database Service): Imagine a big, super-organized filing cabinet for all your data. RDS lets you store and manage lots of information (like customer details or product information) in a very organized way. It’s like having a librarian who keeps all your data neat and tidy.
  4. Lambda: This is a bit like having a magic genie. You tell Lambda a small task you want done, like resizing a photo when you upload it, and Lambda does it for you automatically. You don’t need to worry about how it gets done; Lambda takes care of it.
  5. CloudFront: Think of CloudFront as a super-fast delivery truck for your website. It makes sure that when people visit your website, they get everything they need (like pictures and videos) really quickly, no matter where in the world they are.
  6. IAM (Identity and Access Management): IAM is like the security guard for your AWS services. It helps you manage who is allowed to do what. Like, who can open your storage locker, or who can use your rented computer. It makes sure that only the people you trust can use your AWS stuff.

These are just a few of the many services AWS offers, but they’re some of the most popular and widely used. Each service is designed to make specific tasks easier and more efficient for businesses and developers.

In this article, I am going to build a Simple CRUD application using the AWS Services such as Dynamo DB, API Gateway & Lambda along with IAM Role.

Imagine you have a small library of children’s storybooks. You want to create a simple system where kids can check online to see if their favorite book is available. Here’s how the AWS services fit in:

  1. DynamoDB: This is like your digital catalog of books. Each book in your library is listed here with details like the title, author, and whether it’s currently available or borrowed.
  2. Lambda: Think of Lambda as your helpful librarian. When a kid wants to know if a book is available, Lambda checks the DynamoDB catalog for them. It looks up the book and tells them if it’s there or not.
  3. API Gateway: This is the library’s front desk. Kids come here (through a website or app) and ask, “Is ‘Where the Wild Things Are’ available right now?” The API Gateway takes this question and passes it to Lambda, the librarian, to get the answer.
  4. IAM Role: The IAM Role is like a set of rules for who can do what in the library. It makes sure that only the librarian (Lambda) can access the book catalog (DynamoDB) and that only the front desk (API Gateway) can talk to the librarian. This keeps everything secure and running smoothly.

So, when a kid asks if a book is available, their question goes to the API Gateway, which then asks Lambda. Lambda checks the DynamoDB catalog and then tells the kid if the book is available or not, all in just a few seconds. This makes it really easy for kids to find out if they can borrow their favorite storybook!

Now, let’s take this as an example and build a small CRUD application.

Let’s first start by creating a DynamoDB table.

Go to the AWS Console and search for the DynamoDB service.

Click on Tables in the sidebar -> click “Create table”

Under Table name, type whatever table name you want:

-> children_library

The Partition key acts as the primary key of the table, so it must be unique:

-> book_id

Keep the other defaults and click the Create table button.

This will take some time to create the table.
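As a side note, if you prefer the AWS CLI over the console, a roughly equivalent table (assuming a string book_id key and on-demand billing) can be created like this:

aws dynamodb create-table --table-name children_library --attribute-definitions AttributeName=book_id,AttributeType=S --key-schema AttributeName=book_id,KeyType=HASH --billing-mode PAY_PER_REQUEST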

Now, let’s move on to creating a Lambda function, which will act as the backend: based on the API request, it will retrieve or write data in the DynamoDB table we created.

Search for and click on the “Lambda” service.

Click on “Functions” in the sidebar -> click “Create function”

Select -> Author from Scratch

Function name -> serverless-api

Runtime -> Node JS 16.x

In the Change default execution role section:

Check -> “Create a new role from AWS policy templates”

Role name -> type any meaningful name -> serverless-api-role

Keep the others as default and click “Create function”

This will take a few seconds to create the function.

Since we created a new role for this, let’s go to the IAM service to give the role permission to access DynamoDB & CloudWatch Logs (I will do a separate post on CloudWatch later). Click “Roles” in the sidebar.

Click on the role that you assigned for Lambda,

In the Permissions tab -> Permissions policies -> click “Add permissions” -> Attach policies ->

Search for and attach the following policies:

  1. AmazonDynamoDBFullAccess
  2. CloudWatchLogsFullAccess

Why add these policies to this role:

  1. AmazonDynamoDBFullAccess: This policy provides full access to Amazon DynamoDB. When you assign this policy to a Lambda function, it grants the function permissions to perform all actions on DynamoDB. This includes creating, reading, updating, and deleting data in DynamoDB tables. This is crucial if your Lambda function needs to interact with a DynamoDB database, such as retrieving data for processing, storing results, or maintaining a database.
  2. CloudWatchLogsFullAccess: This policy grants full access to Amazon CloudWatch Logs. CloudWatch is a monitoring and observability service. By assigning this policy to a Lambda function, you enable the function to create and manage log groups and log streams in Amazon CloudWatch Logs. This is essential for logging the operation of your Lambda function, which helps in monitoring its performance, debugging issues, and keeping records of its activity. Effective logging is a key aspect of understanding and maintaining the health of applications running on AWS.

Now let’s create the API endpoints in API Gateway.

Search for “API Gateway“,

Click on APIs in sidebar:

Click “Create API”

A new page will show up with “Choose an API type”:

The following types are available at the time of writing this article:

  1. HTTP API
  2. WebSocket API
  3. REST API
  4. REST API Private

Let’s walk through a brief overview of these API types:

  1. HTTP API:
    • Purpose: HTTP APIs in AWS are primarily used for building scalable and cost-effective RESTful APIs.
    • Key Features:
      • Designed for low-latency, cost-effective integrations.
      • Support for OAuth 2.0 and JWT (JSON Web Token) authorization.
      • Ideal for serverless workloads using AWS Lambda.
      • Simplified routing and payload formatting.
    • Use Case: Best suited for simple, HTTP-based backend services where routing, authorization, and scalability are important.
  2. WebSocket API:
    • Purpose: WebSocket APIs are used to create real-time, two-way communication applications.
    • Key Features:
      • Enables server to push real-time updates to clients.
      • Maintains a persistent connection between client and server.
      • Supports messaging or chat applications, real-time notifications, and more.
    • Use Case: Ideal for applications that require real-time data updates, such as chat applications, real-time gaming, or live notifications.
  3. REST API:
    • Purpose: REST APIs (Representational State Transfer) in AWS are for creating fully-featured RESTful APIs. They offer more features compared to HTTP APIs.
    • Key Features:
      • Supports RESTful interface and can work with JSON, XML, or other formats.
      • Advanced features like request validation, request and response transformations, and more.
      • Detailed monitoring and logging capabilities.
    • Use Case: Suitable for APIs that require complex API management, extensive integration capabilities, and legacy system support.
  4. REST API Private:
    • Purpose: Private REST APIs are designed to provide RESTful API features accessible only within your Amazon Virtual Private Cloud (VPC).
    • Key Features:
      • Restricted access: Only available to applications within your VPC or those using a VPN to your VPC.
      • Integration with AWS services within a VPC without exposing them to the public internet.
      • All features of REST APIs but with private network access.
    • Use Case: Best for internal microservices communication, or when the API needs to be exposed to a limited set of internal clients within a private network.

Let’s choose “REST API” for this article.

Click -> “Build”

Choose “New API”

API name: Children Library API

Choose the API endpoint type -> Regional

Let’s see what other endpoint types are available:

  1. Regional
  2. Edge-optimized
  3. Private
  1. Regional Endpoint:
    • Purpose: A regional API endpoint is intended for clients that are geographically close to the region where the API is deployed.
    • Key Features:
      • The API is deployed in a specific AWS region.
      • Reduced latency for in-region clients, as requests don’t have to travel across regions.
      • Ideal for region-specific applications or when most of the API traffic originates from the same region.
    • Use Case: Best suited for localized applications where the users are predominantly in the same region as the AWS services.
  2. Edge-Optimized Endpoint:
    • Purpose: Edge-optimized API endpoints are designed for global clients to minimize latency by routing traffic through AWS’s global network of edge locations (part of the Amazon CloudFront content delivery network).
    • Key Features:
      • Automatically routes client requests to the nearest CloudFront Point of Presence.
      • Helps in reducing latency for global clients as requests are handled by edge locations closer to the user.
      • Useful for widely distributed client bases.
    • Use Case: Ideal for applications with a global user base, needing low latency across different geographical locations.
  3. Private Endpoint:
    • Purpose: A private API endpoint is used within an Amazon Virtual Private Cloud (VPC) to provide secure access to clients within the VPC or connected networks.
    • Key Features:
      • Accessible only within the VPC or through a secure connection (like VPN or Direct Connect) from on-premises networks.
      • Ensures that the API is not exposed to the public internet.
      • Integrates with AWS services within a VPC while maintaining privacy and security.
    • Use Case: Best for internal APIs, where the services are meant to be accessed only within a corporate network or a controlled environment.

Now you know which one to choose.

Let’s select Regional for demo purposes.

Click “Create API”

On the Resources page, you will see an empty list of routes, so now we are going to create them.

Click “Create resource”

Let’s start with a health API to check whether the API is healthy or not.

In the Resource name -> health

Below the form, check CORS (Cross-Origin Resource Sharing). This will create an additional OPTIONS method that allows all origins and common headers. (Enable this to avoid CORS-related issues when accessing the API from a front end.)

Click “Create resource”

Now click on “/health” -> let’s create a GET method.

You can see that the “OPTIONS” method has been created.

Click -> Create method

Method type -> GET

Integration type -> Lambda function

Turn on the toggle for “Lambda proxy integration”

Lambda function -> select the region where we created the Lambda function, then choose the correct function.

Keep the default timeout as it is.

Similarly, create resources for the following APIs:

  1. /book
    • POST – create/add a new book to the DB
    • GET (by ID) – retrieve a book by its ID
    • PUT/PATCH – update the details of a book
    • DELETE – delete a book from the DB
  2. /books
    • GET – get all the books’ info

Some things I can tell you might miss:

  1. Enabling CORS for the resources
  2. Forgetting to turn on “Lambda proxy integration”

If you have already created the resources, you can still enable CORS afterwards: just click on any resource -> click “Enable CORS” under Resource details.

We need to do the same for the other resources as well.

Now let’s deploy the API to the public world.

Click on “Deploy API”

In the Stage field, click “New stage”; a new field will appear called “Stage name”.

Type whatever stage name you want; I am using “dev”.

then click “Deploy”

Now, go to “Stages” in the sidebar,

Click on the “dev” stage. You will see the Invoke URL. Copy it and test all the APIs we have created so far in Postman.

But before testing, we need route handling in our serverless function: based on the route, the application decides whether it is creating, updating, or retrieving data.

So, paste the following JavaScript code into the Code source section of the Lambda function we created.
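This is a minimal sketch rather than a definitive implementation: it assumes the routes created above with Lambda proxy integration, the children_library table with its book_id partition key, the AWS SDK v2 bundled with the Node 16.x runtime, and that the book id for single-book GETs arrives as a bookId query string parameter. Adjust it to match your own resource setup:

const AWS = require('aws-sdk');
const dynamo = new AWS.DynamoDB.DocumentClient();

const TABLE_NAME = 'children_library';

exports.handler = async (event) => {
  const { httpMethod, path } = event;
  try {
    // GET /health: simple liveness check
    if (httpMethod === 'GET' && path === '/health') {
      return respond(200, { status: 'healthy' });
    }
    // GET /books: list every book in the table
    if (httpMethod === 'GET' && path === '/books') {
      const { Items } = await dynamo.scan({ TableName: TABLE_NAME }).promise();
      return respond(200, Items);
    }
    // GET /book?bookId=...: fetch one book by its id
    if (httpMethod === 'GET' && path === '/book') {
      const { Item } = await dynamo.get({
        TableName: TABLE_NAME,
        Key: { book_id: event.queryStringParameters.bookId },
      }).promise();
      return respond(200, Item);
    }
    // POST /book: create/add a new book (request body is the full item)
    if (httpMethod === 'POST' && path === '/book') {
      const book = JSON.parse(event.body);
      await dynamo.put({ TableName: TABLE_NAME, Item: book }).promise();
      return respond(201, book);
    }
    // PUT/PATCH /book: update one attribute of an existing book
    if ((httpMethod === 'PUT' || httpMethod === 'PATCH') && path === '/book') {
      const { bookId, updateKey, updateValue } = JSON.parse(event.body);
      await dynamo.update({
        TableName: TABLE_NAME,
        Key: { book_id: bookId },
        UpdateExpression: 'set #k = :v',
        ExpressionAttributeNames: { '#k': updateKey },
        ExpressionAttributeValues: { ':v': updateValue },
      }).promise();
      return respond(200, { updated: bookId });
    }
    // DELETE /book: remove a book by id
    if (httpMethod === 'DELETE' && path === '/book') {
      const { bookId } = JSON.parse(event.body);
      await dynamo.delete({ TableName: TABLE_NAME, Key: { book_id: bookId } }).promise();
      return respond(200, { deleted: bookId });
    }
    return respond(404, { message: `No route for ${httpMethod} ${path}` });
  } catch (err) {
    return respond(500, { message: err.message });
  }
};

// Lambda proxy integration expects this exact response shape; the CORS header
// mirrors the "allow all origins" choice made when enabling CORS on the resources.
function respond(statusCode, body) {
  return {
    statusCode,
    headers: { 'Access-Control-Allow-Origin': '*' },
    body: JSON.stringify(body),
  };
}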

Try inserting a record into the DB:

Get by bookId:

Hope you enjoyed this article. We have just developed a simple app with basic CRUD operations.

I am planning to implement a small working application with authentication using AWS Cognito, uploading files to AWS S3, and a few other AWS services.

Let’s see

Seeding in the context of Database

In the context of databases, “seeding” typically refers to the initial population or insertion of data into a database. This process is often done when setting up a new database or when adding data to a database table for the first time. Seeding is a crucial step in ensuring that the database has the necessary data for testing, development, or production use. The term “seed data” refers to the predefined set of data that is inserted into the database during the seeding process. This data can include default values, sample records, or any information required for the database to function correctly.

Seeding is commonly used in various scenarios:

Database Initialization: When creating a new database, seeding is used to populate tables with initial data. This is especially important for systems that require certain default values or configurations to operate correctly.

Testing and Development: In testing and development environments, seeding is employed to provide a consistent and reproducible dataset for testing and debugging purposes. It helps developers work with a known set of data while building and testing applications.

Demo Environments: Seeding is also useful when setting up demo or showcase environments. It ensures that the database has enough data to showcase the features and functionalities of an application.

Data Migration: During data migration processes, seeding may be used to transfer initial data from one database to another.

The process of seeding often involves writing scripts or using database migration tools to insert the necessary data. It helps establish a baseline dataset that aligns with the requirements of the application or system utilizing the database.
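Tying this back to the DynamoDB table from the CRUD post above, a seed script is often just a short program that writes a known dataset. A minimal sketch in Node, assuming the children_library table already exists, AWS credentials are configured locally, and the aws-sdk v2 package is installed (the sample books are made up):

const AWS = require('aws-sdk');

// DocumentClient lets us write plain JS objects as DynamoDB items
const dynamo = new AWS.DynamoDB.DocumentClient();

// Seed data: a small, known, reproducible set of records (sample values are illustrative)
const seedBooks = [
  { book_id: '1', title: 'Where the Wild Things Are', author: 'Maurice Sendak', available: true },
  { book_id: '2', title: 'The Very Hungry Caterpillar', author: 'Eric Carle', available: false },
];

async function seed() {
  // batchWrite accepts up to 25 put requests per call
  await dynamo.batchWrite({
    RequestItems: {
      children_library: seedBooks.map((book) => ({ PutRequest: { Item: book } })),
    },
  }).promise();
  console.log(`Seeded ${seedBooks.length} books`);
}

seed().catch(console.error);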

Why enable CORS?

Enabling CORS (Cross-Origin Resource Sharing) in API Gateway in AWS is crucial for controlling how your API is accessed from different origins, particularly web browsers. CORS is a security feature that allows you to specify which domains are permitted to access your API. It’s especially important for APIs that are called from web applications hosted on a different domain than the API itself.

Why Enable CORS:

  1. Browser Security: Modern web browsers enforce the same-origin policy, which prevents a web page from making requests to a different domain than the one that served the web page. CORS is a way for the server to tell the browser that it’s okay to allow a request from a different origin.
  2. Control Access: CORS allows you to specify which domains can access your API, giving you control over the consumption of your API resources.
  3. Avoid CORS Errors: Without proper CORS settings, browsers will block frontend applications from receiving responses from your API, leading to CORS errors.
  4. API Testing and Development: During development, your frontend and backend might be hosted on different servers (e.g., localhost for frontend and a separate domain for API), necessitating CORS for seamless integration testing.
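A practical note tied to the CRUD post above: with Lambda proxy integration, API Gateway passes your function’s response through as-is, so the console’s “Enable CORS” mainly covers the OPTIONS preflight; the actual responses must carry the CORS headers themselves. A minimal sketch of a handler doing that (allowing any origin here; restrict it in production):

// A Lambda proxy response that carries its own CORS headers
exports.handler = async () => {
  return {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*', // or a specific allowed domain
      'Access-Control-Allow-Methods': 'GET,POST,PUT,PATCH,DELETE,OPTIONS',
      'Access-Control-Allow-Headers': 'Content-Type,Authorization',
    },
    body: JSON.stringify({ ok: true }),
  };
};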

PYTHONPATH in a nutshell

I was running a Django project, and I had to use a GitHub repo as a third-party dependency by cloning it and incorporating it inside the project. But for some reason I couldn’t run the project because of a Module Not Found error caused by imports inside the third-party folder. So I had to add that folder to the PYTHONPATH variable to run the project.

This is a brief article introducing what PYTHONPATH is:

Imagine your computer is like a big library with lots of books, which are like the programs and files on your computer. Now, when you want to run a Python program, it needs to use some special books (libraries or modules) to work correctly.

PYTHONPATH is like a list of favorite aisles in the library where these special books are. When you tell Python, “Hey, I want to run this program,” Python looks at the PYTHONPATH to know where to find the special books (modules) it needs to run your program.

Why Set PYTHONPATH:

  1. To Find Special Books Easily: Sometimes, you have your own special books (modules) not in the usual aisles. By adding their locations to PYTHONPATH, you tell Python exactly where to find them.
  2. To Share and Use Different Books: If you have many different projects or programs, they might need different special books. You can change PYTHONPATH for each one, so Python knows where to look for the right books for each project.

Usage of PYTHONPATH:

When you run a Python program, and it needs a special book (module or library), it will start looking in the aisles listed in PYTHONPATH. If it finds the right book there, it will use it! If not, it might tell you it can’t find it or look in the default places it knows.

In summary, PYTHONPATH is used to guide Python on where to find the extra special books (modules and libraries) it needs to run your programs, especially when those books aren’t in the usual places!
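For the Django situation I described at the top, the fix looked roughly like this (the folder path is hypothetical; point it at wherever you cloned the third-party repo):

export PYTHONPATH="/path/to/project/third_party_repo:$PYTHONPATH"
python manage.py runserver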

Django Views are Synchronous, You can’t have Asynchronous things inside views


The core issue I was facing was related to the inherent nature of how web servers and Django handle requests and responses, especially in relation to performing time-consuming tasks like sending emails and Slack messages within a Django view.

Understanding the Problem:

  1. Synchronous Nature of Django Views:
    • Django views, by default, operate synchronously. This means that when a request hits a Django view, the server processes the request in a linear, blocking fashion. It executes each line of code in the view one by one, and the response to the client (frontend) is not sent back until the entire view function completes its execution.
  2. Frontend Loading Time:
    • When you include operations like sending emails or Slack messages directly in your Django view, these operations are executed as part of the request-processing pipeline. Since these tasks can be time-consuming (network I/O, waiting for external API responses), they block the completion of the view function. As a result, the response is delayed until these tasks are finished, leading to increased loading times on the frontend.
  3. Asynchronous Functions in Django:
    • Even if I make certain functions asynchronous within the Django view (using async def and await), it doesn’t change the fundamental synchronous nature of the view’s response cycle. The view still waits for all operations, including the asynchronous ones, to complete before sending back a response. This means that making functions asynchronous inside a view won’t reduce the frontend loading time if these tasks are part of the request-response cycle.
  4. Event-Driven Architecture Approach:
    • An alternative approach uses an event-driven architecture, where an event is published to a queue, and a separate consumer service handles the notifications. This method is indeed a way to offload time-consuming tasks from the request-response cycle. The view would quickly publish an event to the queue and then immediately respond to the frontend, significantly reducing loading times. However, this approach introduces complexity, such as setting up and managing a message queue and a consumer service, and it might incur additional costs.
  5. Other Solutions – Background Task Processing:
    • Another common solution in Django is to use a background task processing system like Celery. With Celery, you can quickly dispatch time-consuming tasks to be handled asynchronously outside of the request-response cycle. This allows your view to respond immediately, while the tasks like sending emails or Slack messages are processed in the background.

Conclusion:

  • In summary, simply converting functions within a Django view to asynchronous won’t solve the issue of frontend loading times when performing time-consuming tasks within the view. The response to the client is still delayed until these tasks complete.
  • To effectively reduce frontend loading times, you need to offload these time-consuming tasks from the request-response cycle. This can be achieved using an event-driven architecture with a message queue and a consumer service or by implementing a background task processing system like Celery.