Let’s learn Node.js
Today we will discuss Node.js in more detail in this article.
Node.js is an open-source, cross-platform runtime environment for executing JavaScript code
outside of the browser. We often use Node to build back-end services. It is ideal for building highly
scalable, data-intensive, and real-time apps. It is best suited to I/O-heavy applications, but
not to CPU-heavy applications. Node.js is built on Chrome’s JavaScript engine, the V8
engine, and its non-blocking I/O model makes it lightweight and efficient.
Advantages of using Node.js
- Great for prototyping and agile development
- Superfast and highly scalable
- Uses JavaScript everywhere
- Cleaner and more consistent codebase
- Large ecosystem of open-source libraries
- Ability to use a single programming language from one end of the application to the other
- Well-built npm package manager and its large number of reusable modules
Features of Node.js
The features that make Node.js the primary choice of software architects are listed below.
· Asynchronous and Event-Driven - the Node.js library’s APIs are all asynchronous, meaning they
don’t block. It basically implies that a Node.js server never waits for data from an API. After
calling an API, the server moves on to the next one, and a notification mechanism in Node.js called
Events helps the server receive the response from the previous API request.
· The Node.js library is very fast in code execution since it is built on Google Chrome’s V8
JavaScript Engine.
· Node.js employs a single-threaded paradigm with event looping, making it very scalable. In
contrast to typical servers that create a limited number of threads to process requests, the event
mechanism allows the server to respond in a non-blocking manner, making it more scalable.
Node.js uses a single-threaded program that can handle a considerably higher number
of requests than traditional servers like Apache HTTP Server.
· There is no data buffering in Node.js apps. The data is simply produced in chunks by these apps.
· Node.js is distributed under the MIT license.
Where to Use Node.js?
Following are the areas where Node.js is proving itself as a perfect technology partner.
• I/O bound Applications
• Data Streaming Applications
• Data Intensive Real-time Applications (DIRT)
• JSON APIs based Applications
• Single Page Applications
Windows installation
To install Node.js, run the MSI file (node-v6.3.1-x64.msi) and follow the steps. The installer
places the Node.js package under C:\Program Files\nodejs by default. The installer should add
the C:\Program Files\nodejs directory to the PATH environment variable on Windows. To
make the modification take effect, restart any open command prompts.
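To confirm the installation and the PATH change worked, you can check the installed versions from a new command prompt (a quick sanity check, not part of the original steps):

```shell
node -v   # prints the installed Node.js version, e.g. v6.3.1
npm -v    # npm is installed alongside Node.js
```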
Check the installation: File Execution
Create a main.js file on your computer (Windows) using the following code.
/* Hello, World! program in node.js */
console.log("Hello, World!")
Now execute main.js file using Node.js interpreter to see the result
$ node main.js
If everything is fine with your installation, this should produce the following result
Hello, World!
What is the event loop in Node?
An event loop is an event-listener which functions inside the NodeJS environment and
is always ready to listen, process, and output for an event. An event can be anything from
a mouse click to a keypress or a timeout.
Let’s look at the parts of a Node.js application. The three key components of a Node.js application
are as follows:
· Import required modules - we use the require directive to load Node.js modules.
· Create a server - a server, similar to Apache HTTP Server, that listens for client requests.
· Read request and return response - the server created in the previous step reads the HTTP
request made by the client, which may be a browser or a console, and returns the response.
Creating Node.js Application
Step 1 — Import Required Module
What is a Module in Node.js?
Modules are comparable to JavaScript libraries: a set of functions you want to include in your
application. There are built-in modules, and we can also create our own.
Let’s see how we can include a module in our project.
To include a module, use the require() function with the name of the module:
var http = require('http');
Step 2 — Create Server
We use the http instance created above to invoke the http.createServer() function to create a server
instance, which we then bind to port 8081 using the server instance’s listen method. Pass it a
function with request and response arguments, and create an example implementation that always
returns “Hello World”.
http.createServer(function (request, response) {
   // Send the HTTP header
   // HTTP Status: 200 : OK
   // Content Type: text/plain
   response.writeHead(200, {'Content-Type': 'text/plain'});

   // Send the response body as "Hello World"
   response.end('Hello World\n');
}).listen(8081);

// Console will print the message
console.log('Server running at http://127.0.0.1:8081/');
The above code is enough to create an HTTP server which listens, i.e., waits for a request over
8081 port on the local machine.
Step 3 — Testing Request & Response
Let’s put step 1 and 2 together in a file called main.js and start our HTTP server as shown below
var http = require("http");

http.createServer(function (request, response) {
   // Send the HTTP header
   // HTTP Status: 200 : OK
   // Content Type: text/plain
   response.writeHead(200, {'Content-Type': 'text/plain'});

   // Send the response body as "Hello World"
   response.end('Hello World\n');
}).listen(8081);

// Console will print the message
console.log('Server running at http://127.0.0.1:8081/');
Now execute the main.js to start the server as follows −
$ node main.js
Congratulations, you have your first HTTP server up and running which is responding to all the
HTTP requests at port 8081.
Do you know what the Node package manager is?
npm is the world’s largest software registry. Open-source developers
use npm to share software. npm is installed with Node.js. All npm packages are defined in files
called package.json.
The content of package.json must be written in JSON.
Example
{
  "name" : "foo",
  "version" : "1.2.3",
  "description" : "A package for fooing things",
  "main" : "foo.js",
  "keywords" : ["foo", "fool", "foolish"],
  "author" : "John Doe",
  "license" : "ISC"
}
npm can manage dependencies.
npm can (in one command line) install all the dependencies of a project.
Dependencies are also defined in package.json.
Now I hope you have a better idea about Node. In a nutshell, Node.js is a popular programming
environment that can be used for building high-scale applications that need to support multiple
concurrent requests. Its single-threaded, non-blocking I/O makes it an excellent choice for both
real-time and data streaming applications, too.
How to Implement Sessions using NodeJS and MongoDB
When I started learning about sessions and their meaning, it wasn’t easy to understand the
correct flow and how things should be configured to make sessions work.
As a Full Stack Developer at Joonko, we had to start working on a new version of the company’s
product and implement the ability for users to sign in to the system and perform actions;
for that, we had to implement sessions in our backend server.
How do sessions work?
When a user logs in to a system and makes a login request to the server, the server will create a
session and store it on the server’s local cache or in external storage (database).
Then, after finishing this process, the server responds to the client with a cookie.
The cookie usually contains a unique ID which is the session id and will be stored on the client
browser for a limited time.
From this point on, every request sent to the server will include the browser’s cookies,
including the session cookie.
About express-session:
express-session is server-side middleware for Express that creates and manages sessions;
this package will be the core of this tutorial.
Set up a project:
Use the following command to initialize a NodeJS project:
npm init -y
Next, use the following command to install express, express-session, and connect-mongo:
npm install express express-session connect-mongo
Next, create a new file named app.js and add the following code to it:
const express = require('express');
const session = require('express-session');
const MongoStore = require('connect-mongo');

const app = express();

// Needed so the routes below can read JSON request bodies (req.body)
app.use(express.json());

app.use(session({
  name: 'example.sid',
  secret: 'Replace with your secret key',
  cookie: {
    httpOnly: true,
    secure: true,               // sent only over HTTPS
    maxAge: 1000 * 60 * 60 * 7  // 7 hours
  },
  resave: false,
  saveUninitialized: true,
  store: MongoStore.create({
    mongoUrl: 'MongoDB URL should be here'
  })
}));

app.listen(4000, () => {
  console.log("App listening on port 4000")
})
Explanation:
1. We’ve created a new express server instance and initialized a variable called app with it.
2. Next, we’ve added the session middleware to the application server with the following
configuration:
- name: The name of the session cookie stored on the client side.
- secret: The secret used to sign the session ID cookie.
- httpOnly: Sets the HttpOnly Set-Cookie attribute.
- secure: Sets the Secure Set-Cookie attribute (note that this works only if you are
using HTTPS in your environment).
- maxAge: The cookie’s max-age in milliseconds; here it is set to 7 hours.
- resave: Forces the session to be saved back to the session store.
- saveUninitialized: Forces a session that is “uninitialized” to be saved to the session store.
- store: Uses the MongoDB session store implemented by the connect-mongo npm package,
configured with your database connection URL.
3. Finally, the server listens on port 4000.
Next, all we have to do is to implement a few routes that will use the middleware we created
above.
In the following examples, I’ll create very simple routes so you can better understand the
usage of this middleware.
<POST> Create User Route
app.post('/', (req, res, next) => {
  const { name } = req.body;
  req.session.user = {
    name,
    isLoggedIn: true
  };
  // req.session.save takes a Node-style callback rather than returning a promise
  req.session.save(err => {
    if (err) {
      console.error('Error saving to session storage: ', err);
      return next(new Error('Error creating user'));
    }
    res.status(200).send();
  });
})
This route handles a POST request with a name in the request’s body; it enriches the
request session’s user data and saves the updated session into the MongoDB sessions collection
using the req.session.save method.
<POST> Logout User Route
app.post('/logout', (req, res, next) => {
  // req.session.destroy also takes a Node-style callback
  req.session.destroy(err => {
    if (err) {
      console.error('Error logging out:', err);
      return next(new Error('Error logging out'));
    }
    res.status(200).send();
  });
})
This route handles a POST request, and all it does is destroy the current user’s
session, which removes the session from the MongoDB sessions collection so the session cookie
in the user’s browser no longer refers to a valid session.
<GET> User Full Name Route
app.get('/name', (req, res) => {
  if (!req.session || !req.session.user) {
    return res.status(404).send();
  }
  const { name } = req.session.user;
  return res.status(200).send({ name });
})
This route handles a GET request and returns the name of the user from their session data,
if it exists.
If you followed all the steps correctly, you will be able to access the
req.session object, which will include everything you put in it the last time you used the save
method.
Uploading a Node.js app on cPanel using Namecheap
In this article, I will explain how to deploy a Node.js application to cPanel using Namecheap. This
article will cover how to organise your app, create a Node.js app within cPanel, and the potential
changes you will need to make to your code.
File Structure
In order to create a Node.js application, you need to place all of the code onto cPanel’s file system.
Creating a good file structure is a good way to organise all of your applications.
Personally, I create a folder called “nodejs” and store all my Node.js apps in subfolders within that
folder. For this example, we will name our project “nodeApp”; inside that folder we will put all the
source code (not the node_modules folder; we will handle this later). Below is how this should look!
File Structure for node.js apps
As you can see above, we also have another folder inside the public_html directory. For
this example, we are using the main domain folder. However, if you are using a different domain
or a subdomain, place the “nodeAppApi” folder within that domain’s directory; it will then use
that domain! We have created a “nodeAppApi” folder to store our .htaccess file. We do this to
separate our Node API from the React application code.
Inside the .htaccess write the following code:
RewriteEngine off
This will stop the Apache server from rewriting or redirecting any of the requests that go to the
node app.
Great! We have set up the file structure for our node app. Next we will create the node app through
Cpanel.
Creating a Node.js App in Cpanel
Navigate your way to the “Node.js” section of cPanel. You can do it from the “Main Dashboard” >
“Setup Node.js App” button. Press that button and it will send you straight there. Next, press
“Create Application”!
The next page is going to look something like this
As you can see there are a few things we need to fill in or change in order to create the application.
Below explains what each of these sections do!
Node.js Version — Set the version of Node.js that your application requires.
Application Mode — Select either “Development” or “Production” (Recommended to select
“Production”)
Application Root — This is where your app is located in the file system. In this example it’s
“nodejs/nodeApp”
Application URL — This is the domain that the app will use. For this example it is
“example.com/nodeAppApi”; this will then use the .htaccess that we set up.
Application Start file — This is the file name of your app (e.g. server.js | app.js | index.js)
Once you have filled out all of the information press “Create”. Once you have done that you will see
something that looks like this
At the bottom you can see a button saying “Run NPM Install”. Press this and it will create that
node_modules folder for us. (You need to have a package.json inside the application root to do
this.)
The final thing before we are done is to change our ports. This took me a while to figure out, as I
kept getting EACCES errors saying that I didn’t have permission to use those ports. To
fix this, we don’t set a port ourselves; instead we use this…
process.env.PORT
Why do we do this? Port handling is done further upstream by the Apache
server, meaning we don’t need to define a port; it will automatically be sorted out for us.
Testing Your App
Congratulations, you have successfully uploaded your app to a live server. However, to make sure
that your app is working correctly, you can use Postman to test it. Once
you are sure it’s all working correctly, you are all done!
I hope this was helpful for you!
Reduce Method in JavaScript
Concise explanation of reduce method in Js
1. Accepts a callback function and an optional second parameter.
2. Iterates through an array.
3. Runs a callback on each value in the array.
4. The first parameter to the callback is either the first value in the array or the optional second
parameter.
5. The first parameter to the callback is often called “accumulator”.
6. The returned value from the callback becomes the new value of the accumulator.
array.reduce(function(accumulator, nextValue, index, array){
// statements
}, optional second parameter)
Parameters:
• Callback function: The function that runs on each value of the given array.
• Accumulator: First value in the array or the optional second parameter given.
• nextValue: Second value in the array or the first value if the optional parameter is passed.
• Index: Each index in the array.
• Array: The entire array on which reduce method is applied.
Examples:
1. Print the sum of all the elements of the array using the reduce method.
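The code for this example (shown as an image in the original) might look like the following sketch:

```javascript
const arr = [1, 2, 3, 4, 5];

// With no second argument, the accumulator starts as the first element (1)
// and nextValue starts at the second element (2).
const sum = arr.reduce(function (accumulator, nextValue) {
  return accumulator + nextValue;
});

console.log(sum); // 15
```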
2. Output: 15
3. Explanation:
Accumulator   Next Value   Returned Value
1             2            3
3             3            6
6             4            10
10            5            15
• Since no second parameter was given, the accumulator’s initial value will be the first value of the
array, i.e. 1.
• The second value is 2, and we add both values and return the sum.
• This returned sum becomes the next value of the accumulator.
• These steps are repeated till the last element of the array.
• The total sum of the array is returned.
2. Adding a second parameter.
let arr = [1, 2, 3, 4, 5];

// using an anonymous function as a callback
arr.reduce(function(acc, next) {
  return acc + next;
}, 10);

// Using an ES6 arrow function as a callback
arr.reduce((acc, next) => acc + next, 10);
Accumulator   Next Value   Returned Value
10            1            11
11            2            13
13            3            16
16            4            20
20            5            25
• In this example, we passed an additional parameter (10) to the reduce function.
• So, the initial value of the accumulator is 10, and the next value is 1. We return the sum of both of
these values.
• The sum becomes the new value of the accumulator.
• These steps are repeated till the last element of the array.
• The total sum will be returned.
3. Using the reduce method in strings.
let names = ['Sheldon', 'Raj', 'Penny', 'Amy', 'Howard', 'Leonard', 'Bernadette'];

// using an anonymous function as a callback
names.reduce(function(acc, next) {
  return acc += ` ${next}`;
}, 'TBBT characters are')

// Using an ES6 arrow function as a callback
names.reduce((acc, next) => {
  return acc += ` ${next}`;
}, 'TBBT characters are')
Accumulator                                                  Next Value     Returned Value
'TBBT characters are'                                        'Sheldon'      'TBBT characters are Sheldon'
'TBBT characters are Sheldon'                                'Raj'          'TBBT characters are Sheldon Raj'
'TBBT characters are Sheldon Raj'                            'Penny'        'TBBT characters are Sheldon Raj Penny'
'TBBT characters are Sheldon Raj Penny'                      'Amy'          'TBBT characters are Sheldon Raj Penny Amy'
'TBBT characters are Sheldon Raj Penny Amy'                  'Howard'       'TBBT characters are Sheldon Raj Penny Amy Howard'
'TBBT characters are Sheldon Raj Penny Amy Howard'           'Leonard'      'TBBT characters are Sheldon Raj Penny Amy Howard Leonard'
'TBBT characters are Sheldon Raj Penny Amy Howard Leonard'   'Bernadette'   'TBBT characters are Sheldon Raj Penny Amy Howard Leonard Bernadette'
• In this example, we passed an additional parameter (‘TBBT characters are’) to the reduce
function.
• So, the initial value of the accumulator is ‘TBBT characters are’, and the next value is ‘Sheldon’.
We return the concatenation of these two strings.
• This new string becomes the new accumulator.
• These steps are repeated till the last string of the array.
• A single string with all array elements concatenated is returned.
4. Create a function to add only odd numbers in the array.
let nums = [1, 2, 3, 4, 5];

// function declaration with an anonymous callback function
function sumOddNumbers(arr) {
  return arr.reduce(function(acc, next) {
    if (next % 2 !== 0) {
      acc += next;
    }
    return acc;
  }, 0);
}

// Alternative using ES6 arrow functions (use one version or the other;
// declaring both in the same scope would redeclare sumOddNumbers)
const sumOddNumbersArrow = arr => {
  return arr.reduce((acc, next) => {
    if (next % 2 !== 0) {
      acc += next;
    }
    return acc;
  }, 0);
};

sumOddNumbers(nums); // returns 9
Create a Node.js App with Express
Use Express and EJS to render a website.
Express makes it much simpler to create a server and render different routes. Alongside EJS it
makes web development a breeze. Express handles files, routing, parsing, and much more.
First, set up the Node.js project with npm init, or copy the package.json file provided at the end of
the article.
Then, run npm install to install the dependencies in the package.json file. In the app.js file, import
express to start setting up the server and the routes.
const express = require('express')
const app = express()
You also need to set up EJS as a template engine, and link to the views folder to let express know
where the view templates are located.
app.set('view engine', 'ejs')
app.set('views', 'views')
A simple way of setting up a route is using app.get() for GET requests.
app.get("/", (req, res) => {
  res.render("index")
})
Finally, set up the server on port 3000.
app.listen(3000)
Now you can go to localhost:3000 and see the result. The second route should be located at
localhost:3000/ejs.
Source code

app.js:
const express = require('express')
const app = express()
const testRoutes = require("./routes/test")
app.set('view engine', 'ejs')
app.set('views', 'views')
app.use("/test", testRoutes)
app.get("/", (req, res) => {
res.render("index")
})
app.get("/ejs", (req, res) => {
res.render("index")
})
app.listen(3000)
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
</head>
<body>
<h1>Home page</h1>
</body>
</html>

package.json:
{
"name": "node-mongoose",
"version": "1.0.0",
"description": "",
"main": "app.js",
"dependencies": {
"ejs": "^3.1.8",
"express": "^4.18.1",
"nodemon": "^2.0.18"
},
"devDependencies": {},
"scripts": {
"start": "nodemon app.js",
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC"
}

routes/test.js:
const express = require("express")
const router = express.Router()
// mounted at "/test" in app.js, so this responds at /test/test
router.get("/test", (req, res) => {
  res.send("Test")
})
module.exports = router
And there you have it. Thank you for reading.
Why Promise for Javascript?
Let’s start to learn Javascript promise in a simple way❤
Promise for Javascript
“Using JavaScript Error objects to reject promises can capture the call stack for
troubleshooting”
― Daniel Parker
Before learning about Promises in Javascript
In real life, when we make a promise to someone, first of all we should know the person and the
person’s behavior. Likewise, when we deal with promises in JavaScript, we should be aware
of callbacks and why we need Promises in JavaScript.
Callback: A callback is a function passed as an argument to another function. This technique
allows a function to call another function. A callback function can run after another function has
finished.
const getCallBack = callback => {
  setTimeout(() => {
    callback({ getCall: 'Complete Code Example' })
  }, 1000)
}

getCallBack(todo => {
  console.log(todo.getCall)
})
We have learned about callbacks, but the biggest problem with callbacks is that they do not scale
well for even moderately complex asynchronous code. The resulting code often becomes hard to
read, easy to break, and hard to debug.
So that is the reason we need promises!
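A hypothetical sketch of why callbacks scale badly: each asynchronous step can only run inside the previous callback, so the nesting grows with every step ("callback hell"). The step function and names here are invented for illustration.

```javascript
// Simulate an async operation that reports back through a callback.
function step(name, callback) {
  setTimeout(() => callback(name + ' done'), 10);
}

const results = [];

// Three dependent steps already produce three levels of nesting,
// and error handling would have to be repeated at every level.
step('first', r1 => {
  results.push(r1);
  step('second', r2 => {
    results.push(r2);
    step('third', r3 => {
      results.push(r3);
      console.log(results.join(', ')); // first done, second done, third done
    });
  });
});
```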
PREREQUISITES :
• Follow How to Install Node.js and Create a Local Development Environment.
• A basic understanding of coding in JavaScript
• A basic understanding of callbacks, and of synchronous and asynchronous JavaScript
What are Promises?
Talking about promises through a Bollywood movie (it can happen only in a parallel universe):
Simran: Hi Rahul! Can you run to the village market and get me itemA for my dad?
Rahul: Sure ❤
Simran: While you are doing that, I will make itemB (asynchronous operation). But make sure
you let me know whether you could find itemA (promise return value).
Rahul: What if you are not at home when I am back, Mrs Simran?
Simran: In that case, send me a WhatsApp message saying you are back and have the gift
itemA for me (success callback). If you don’t find it, call me immediately, Rahul (failure callback).
Rahul: Sounds great! See you in a while.
A Promise is an object that represents the eventual completion (or failure) of an asynchronous
operation and its resulting value. Put simply, in the example above Simran and Rahul both work
asynchronously, but the outcome can be failure or success, and the operation eventually returns
its resulting value (success or failure).
How to Write a Promise in JavaScript
We can write a promise in our JavaScript by calling the Promise class and constructing an object
like this:
index.js, code example -1
Constructing an object is not the only way we can define a promise, though. We can also use the
built-in Promise API to achieve the same thing; please refer to the code below.
index.js, code example -1 & Promise API
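A hedged reconstruction of the missing snippets: first constructing a promise with the Promise class, then the same outcomes using the built-in Promise API. The `ok` flag and the string values are assumptions for illustration.

```javascript
// Constructing a promise with the Promise class:
// resolve() fulfills it, reject() rejects it.
const constructed = new Promise((resolve, reject) => {
  const ok = true; // assumption for illustration
  if (ok) {
    resolve('Promise resolved');
  } else {
    reject(new Error('Promise rejected'));
  }
});

constructed.then(value => console.log(value)); // Promise resolved

// Equivalent results using the built-in Promise API helpers:
const resolved = Promise.resolve('Promise resolved');
const rejected = Promise.reject(new Error('Promise rejected'));

resolved.then(value => console.log(value));      // Promise resolved
rejected.catch(err => console.log(err.message)); // Promise rejected
```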
Lifecycle of a JavaScript Promise
• Pending: initial state, neither fulfilled nor rejected.
• Fulfilled: meaning that the operation was completed successfully.
• Rejected: meaning that the operation failed.
A promise is considered to be settled when it is either in the fulfilled or rejected state.
Let’s talk about the above flowchart diagram: the promise object returned by the new Promise
constructor has these internal properties:
• state — initially "pending", then changes to either "fulfilled" when resolve is called
or "rejected" when reject is called.
• result — initially undefined, then changes to value when resolve(value) is called,
or to error when reject(error) is called.
Rejected Promises in JavaScript
A Promise can also be rejected. Most of the time, rejections happen because JavaScript triggered
some kind of error while running the asynchronous code. In such a case, it calls
the reject() function instead.
Here is a simple example of how a promise can get rejected:
reject in promises
Can you think of why this promise gets rejected? If you said “because amit is not true", then
congratulations!
The promise in the above code sample will settle as rejected after a timeout of three seconds,
because the (amit)? statement evaluates to false, which triggers reject.
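A hedged reconstruction of the missing code sample, based on the description above: `amit` is false, so after three seconds the conditional falls through to reject() and the promise settles as rejected.

```javascript
const shubhPromise = new Promise((resolve, reject) => {
  const amit = false; // assumed from the text: "amit is not true"
  setTimeout(() => {
    // the (amit)? test evaluates to false, so reject() runs
    (amit) ? resolve('amit is true') : reject(new Error('amit is not true'));
  }, 3000);
});

shubhPromise.catch(err => console.log(err.message)); // amit is not true
```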
How to Use then() and catch() in JavaScript
then(): We can define two callback functions that we want to call when a promise is either
fulfilled or rejected. These functions are defined inside a chained then() method:
catch(): The catch() method will always be called when we encounter an error at any
point along the promise chain:
then() and catch()
Since shubhPromise will eventually settle as rejected, the function defined in the
chained then() will be ignored. Instead, the error handler in catch() will run.
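A minimal sketch of this behavior (the promise here is a stand-in for the shubhPromise described above): because the promise rejects, the then() success handler is skipped and control passes to catch().

```javascript
const shubhPromise = Promise.reject(new Error('amit is not true'));

shubhPromise
  .then(value => console.log('fulfilled:', value))    // skipped
  .catch(err => console.log('caught:', err.message)); // caught: amit is not true
```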
How Chaining promises Work?
The .then() method takes up to two arguments; the first argument is a callback function for the
resolved case of the promise, and the second argument is a callback function for the rejected case.
Each .then() returns a newly generated promise object, which can optionally be used for chaining;
Chain promises
Processing continues to the next link of the chain even when a .then() lacks a callback function
that returns a Promise object. Therefore, a chain can safely omit every rejection callback function
until the final .catch().
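The chaining described above can be sketched in a few lines: each .then() returns a new promise, so the handlers run in sequence, and a single .catch() at the end covers the whole chain.

```javascript
// Each .then() receives the value returned by the previous handler.
const chain = Promise.resolve(1)
  .then(n => n + 1)   // 2
  .then(n => n * 3);  // 6

chain
  .then(n => console.log('result:', n)) // result: 6
  .catch(err => console.log('error:', err.message)); // one catch for the whole chain
```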
Ending Up
JavaScript promises are a very powerful feature that helps us run asynchronous code in
JavaScript. In most, if not all, interviews for roles that use JavaScript, your interviewer will
probably ask a question about promises.
In this article, I have explained what a promise is in simple terms, and I’ve shown its basic
practical usage with some code examples.
An Introduction to Node.js
What is Node.js?
Node.js is a platform built on Chrome’s JavaScript runtime for easily building fast, scalable
network applications. It is a runtime environment and library for running JavaScript
applications outside of a web browser. Node.js uses an event-driven, non-blocking I/O model that
makes it lightweight and efficient, perfect for data-intensive real-time applications that run across
distributed devices. It has become popular for real-time applications such as chat,
gaming, and data streaming because of its scalability to handle large numbers of simultaneous
connections.
Node.js is an open-source runtime environment that was built on Chrome’s V8 JavaScript engine.
It uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. Node.js
applications are written in JavaScript and can be run on Windows, Mac OS X, and Linux operating
systems.
Node.js has become extremely popular with developers because it allows them to write code once
and run it on multiple platforms. Additionally, the Node Package Manager (npm) provides access
to over 300,000 packages of reusable code that can be used in Node applications.
What is a runtime environment?
When JavaScript is executed inside the browser, the browser itself is not the main player
compiling your JavaScript. Instead, browsers rely on JavaScript engines. The Google
Chrome browser uses the V8 engine, the Safari browser uses the Nitro engine, and
Mozilla Firefox uses the SpiderMonkey engine. Needless to say, if we want to
build standalone JavaScript applications that exist outside of web browsers, we need a
JavaScript engine that also exists outside of web browsers — something that can
actually execute our JavaScript. Node.js is such a JavaScript engine, or you may call it a runtime
environment. This tool lets us build JavaScript applications that live outside of
web browsers.
Assuming all that makes sense, you now know of the most basic tool you need to build standalone
applications.
Now we can talk about npm.
npm
npm is an initialism that stands for Node Package Manager — emphasis on package manager.
As the name suggests, it manages packages; you may also hear people call them modules,
libraries, or perhaps frameworks. Regardless, these packages can be added to your projects
to make the coding process a billion times easier. You can borrow functions from different
packages to get results faster and cleaner. For example, in my most recent project, I
used a package that can read data from Google Sheets and store that data
in a JavaScript object. Without that package, I would not have known how to do
it myself. Or you could use the Gulp package to minify your CSS and JavaScript files
(minifying means reducing file size). There are also many packages to convert SCSS
files to CSS. There are so many packages — built by the community — that do many different
things, and it’s good to experiment with new ones. So, as previously mentioned, packages make
the coding process a billion times easier. If you want to search for packages, you can check out
the official npm website. There is also a GitHub page that showcases the top 1000 packages
currently being used, which may be even more useful for understanding why people love
packages. And to use these packages, there is documentation for everything — it’s very easy.
Check out the official npm website for the documentation; many of these packages also
have an official GitHub repository with documentation. Try it out for yourself, and perhaps
Node.js will become your new best friend.
How to Send Emails From Node.js Server Using Gmail?
There are many real-life situations where we receive emails as verifications or notifications
from websites that we use. We rarely wonder how this happens, because such mechanisms are
commonplace nowadays. But when we actually have to implement one ourselves, we may start
scratching our heads. You don't need to worry about it anymore: here I am going to show how to
send email from a Node.js server using Gmail and Nodemailer. Nodemailer is an npm module that
makes it easy to send emails from a Node.js application.
So let's start…
First of all, install the 'nodemailer' module in your application using the command
'npm i nodemailer'. Then load it into a variable using the require keyword.
let nodemailer = require('nodemailer');
Thereafter, we have to initialize a transport service object that holds the details of the email service
we are going to use and the account credentials of the mail address we are going to send the mail
from. Here I am going to do with Gmail.
let transporter = nodemailer.createTransport({
  service: 'gmail',
  auth: {
    user: 'sender@gmail.com',
    pass: 'password'
  }
});
Now we have to initialize another object for the message configuration, which holds the
sender's address, the receiver's address, and the message we are going to send. If you want,
you can add further fields (refer to the link here) inside this object. I am keeping it
simple for easy understanding.
let mailOptions = {
  from: 'sender@gmail.com',
  to: 'receiver@gmail.com',
  subject: 'subject',
  text: 'The message'
};
OK, now we are all set for the final touch: sending the mail.
transporter.sendMail(mailOptions, (error, info) => {
  if (error) {
    console.log(error);
  } else {
    console.log('Email sent: ' + info.response);
  }
});
For that, we call a built-in function of Nodemailer's transport service (which I have named
transporter), passing the message configuration object (mailOptions) and a callback that
reports the error or success of the transaction as arguments.
If you have followed everything as I have mentioned until this step then you can run your file now.
Unfortunately, you will get an error printed in the console, thanks to Google's security policies.
Google enforces a security policy: if a third-party app that does not meet its security
standards tries to sign in to an account, the sign-in is blocked automatically. If you look at
the transporter object, it contains the account credentials that will be used to sign in when
we run our file.
So, to stop this blocking, we have to let our app sign in to the Gmail account. (Note that
Google has retired the old "allow less secure apps" setting; nowadays you enable 2-Step
Verification on the account, generate an app password, and use that app password in place of
the normal password in the transporter object.) Once that is configured, your js file should
run without any error and your mail should reach the receiver's address successfully.
This security step might sound scary or make you feel unsafe. If you are concerned about doing
this with your own account, you can create another Gmail account without your personal details
and use its credentials instead of your personal account. Anyway, it's up to you. Try it out,
and all the best.
Introduction to Node Js and Express Js
Node JS
Node.js is an open-source and cross-platform JavaScript runtime environment for server-side and
network applications. It was developed by Ryan Dahl in 2009.
Why Node Js?
• Node.js uses asynchronous programming.
• Node.js is extremely fast and great for building real-time web applications because of its
asynchronous nature.
• Node.js is event-driven and uses a non-blocking I/O model, which makes it lightweight and
efficient.
• The Node.js ecosystem, NPM (Node Package Manager), is the world's largest collection of
open-source libraries.
Node.js is mainly made up of JavaScript and an event loop. The event loop is basically a
program that waits for events and dispatches them when they happen. The diagram below shows how
Node.js works.
Installing Node.js to your computer
Install Node.js to your computer by downloading the LTS setup from below link.
https://nodejs.org/en/
After installing, you can check whether it is properly installed by typing below command in the
command prompt.
node -v
If you correctly installed Node.js to your local machine, above command should return Node.js
version that is installed.
Web Frameworks
Common web-development tasks are not directly supported by Node.js itself. If you want to add
specific handling for different HTTP verbs (e.g. GET, POST, DELETE, etc.), separately handle requests
at different URL paths ("routes"), serve static files, or use templates to dynamically create the
response, Node.js won’t be of much use on its own. For this you will either need to write the code
yourself, or you can avoid reinventing the wheel and use a web framework.
Express Js
Express is a minimal and flexible Node.js web framework that provides a robust set of features for
web and mobile applications.
Why Express JS?
• Easy to configure and customize.
• Robust API makes routing easy.
• Includes various middleware modules which you can use to perform additional tasks on
requests and responses.
• Easy to integrate with different template engines like Jade, Vash, EJS, etc.
• Allows you to define error-handling middleware.
• Has an MVC-like structure.
Hello world using Express JS
1. Create a new folder and cd into it with your terminal.
2. Run npm init to create a package.json file in the folder.
3. Install Express by entering npm install express in the terminal.
4. Create an app.js file in the folder and copy the following code:
const express = require('express'); // import express module
const app = express(); // create an Express application
const port = 5000;

/*
  Route definition: the callback function will be invoked whenever there is
  an HTTP GET request with a path relative to the site root. The callback
  takes a request and a response object as arguments and calls send() on the
  response to return the string "Hello World by Express!".
*/
app.get('/', (req, res) => {
  res.send('Hello World by Express!');
});

// start up the server on the specified port (5000)
app.listen(port, () => {
  // log a comment to the console
  console.log('Example app listening on port 5000');
});
5. Enter node app.js in the terminal to start the application.
6. Open a browser and go to the URL http://localhost:5000. It will display 'Hello World by
Express!' in the browser.
That’s all for this basic introduction on Node & Express. Hope this blog was helpful to you.
How to Resolve a Promise from Outside in JavaScript
To resolve a promise from outside in JavaScript, assign the resolve callback to a variable defined
outside the Promise constructor scope, then call the variable to resolve the Promise. For example:
let promiseResolve;
let promiseReject;

const promise = new Promise((resolve, reject) => {
  promiseResolve = resolve;
  promiseReject = reject;
});

promiseResolve();
Now why would we need to do something like this? Well, maybe we have an operation A
currently in progress, and the user wants another operation B to happen, but B must wait for A to
complete. Let’s say we have a simple social app where users can create, save and publish posts.
index.html
<!DOCTYPE html>
<html>
<head>
<title>Resolving a Promise from Outside</title>
</head>
<body>
<p>
Save status:
<b><span id="save-status">Not saved</span></b>
</p>
<p>
Publish status:
<b><span id="publish-status">Not published</span></b>
</p>
<button id="save">Save</button>
<button id="publish">Publish</button>
<script src="index.js"></script>
</body>
</html>
Users can save and publish posts.
What if a post is currently being saved (operation A) and the user wants to publish the post
(operation B) while saving is ongoing? If we don't want to disable the "Publish" button while
the save is happening, we'll need to ensure the post is saved before the publish happens.
index.js
// Enable UI interactivity
const saveStatus = document.getElementById('save-status');
const saveButton = document.getElementById('save');
const publishStatus = document.getElementById('publish-status');
const publishButton = document.getElementById('publish');

saveButton.onclick = () => {
  save();
};

publishButton.onclick = async () => {
  await publish();
};

let saveResolve;
let hasSaved = false;

function save() {
  hasSaved = false;
  saveStatus.textContent = 'Saving...';
  setTimeout(() => {
    // publish() may not be waiting, so saveResolve can be undefined
    if (saveResolve) saveResolve();
    hasSaved = true;
    saveStatus.textContent = 'Saved';
  }, 3000);
}

async function waitForSave() {
  if (!hasSaved) {
    await new Promise((resolve) => {
      saveResolve = resolve;
    });
  }
}

async function publish() {
  publishStatus.textContent = 'Waiting for save...';
  await waitForSave();
  publishStatus.textContent = 'Published';
  return;
}
The key parts of this code are the save() and waitForSave() functions. When the user clicks
"Publish", waitForSave() is called. If the post has already been saved, the Promise returned
from waitForSave() resolves immediately, otherwise it assigns its resolve callback to an external
variable that will be called after the save. This makes publish() wait for the timeout in save() to
expire before continuing.
Publish doesn’t happen until after save.
We can create a Deferred class to abstract and reuse this logic:
class Deferred {
  constructor() {
    this.promise = new Promise((resolve, reject) => {
      this.reject = reject;
      this.resolve = resolve;
    });
  }
}

const deferred = new Deferred();

// Resolve from outside
deferred.resolve();
Now the variables to resolve/reject a Promise and the Promise itself will be contained in the
same Deferred object.
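As a quick sanity check, the Deferred class can be exercised on its own. This minimal sketch resolves a deferred from outside and reads the value:

```javascript
class Deferred {
  constructor() {
    this.promise = new Promise((resolve, reject) => {
      this.reject = reject;
      this.resolve = resolve;
    });
  }
}

const d = new Deferred();

// The promise and its resolve/reject callbacks live on the same object,
// so any code holding `d` can settle the promise from outside.
d.promise.then((value) => console.log('resolved with', value));
d.resolve(42);
```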
We can refactor our code to use this class:
index.js
// Enable UI interactivity
// ...

const deferredSave = new Deferred();
let hasSaved = false;

function save() {
  hasSaved = false;
  saveStatus.textContent = 'Saving...';
  setTimeout(() => {
    deferredSave.resolve();
    hasSaved = true;
    saveStatus.textContent = 'Saved';
  }, 3000);
}

async function waitForSave() {
  if (!hasSaved) await deferredSave.promise;
}

async function publish() {
  // ...
}
And the functionality will work as before:
The functionality works as before after using the Deferred class.
What is the new Object.hasOwn() method and why should we use it instead of the
Object.prototype.hasOwnProperty() method
Object.hasOwn() is intended as a replacement for Object.prototype.hasOwnProperty(). In this
article, I'll explore their differences and why one should use Object.hasOwn() from now on.
Object.hasOwn() is a new static method which returns true if the specified object has the
specified property as its own property. If the property is inherited, or does not exist, the
method returns false. The hasOwnProperty() method also returns a boolean indicating whether
the object has the specified property as its own property.
So, for example:
const person = { name: 'John' };

console.log(Object.hasOwn(person, 'name')); // true
console.log(Object.hasOwn(person, 'age')); // false

console.log(person.hasOwnProperty('name')); // true
console.log(person.hasOwnProperty('age')); // false

const person2 = Object.create({ gender: 'male' });

console.log(Object.hasOwn(person2, 'gender')); // false
console.log(person2.hasOwnProperty('gender')); // false
// gender is not an own property of person2; it exists on person2's prototype
So, after seeing Object.hasOwn() and Object.prototype.hasOwnProperty() in action, they seem
much the same. So why should we use Object.hasOwn() over hasOwnProperty()? Well, because it
also works for objects created using Object.create(null) and for objects that have overridden
the inherited hasOwnProperty() method. Although it is possible to solve these kinds of problems
by calling Object.prototype.hasOwnProperty.call(<object reference>, <property name>) on an
external object, Object.hasOwn() overcomes them directly and hence is preferred.
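A minimal sketch of that workaround next to Object.hasOwn() (note: Object.hasOwn() requires Node.js 16.9+ or a modern browser):

```javascript
// A null-prototype object inherits nothing, so it has no
// hasOwnProperty method of its own to call.
const config = Object.create(null);
config.debug = true;

// config.hasOwnProperty('debug') would throw a TypeError here.
// The classic workaround borrows the method from Object.prototype:
const viaCall = Object.prototype.hasOwnProperty.call(config, 'debug');

// Object.hasOwn() gives the same answer with less ceremony:
const viaHasOwn = Object.hasOwn(config, 'debug');

console.log(viaCall, viaHasOwn); // true true
```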
Let’s check out some examples:
1. Overriding the inherited hasOwnProperty()
let person = {
  hasOwnProperty: function () {
    return false;
  },
  age: 35
};

console.log(Object.hasOwn(person, 'age')); // true
console.log(person.hasOwnProperty('age')); // false
2. Objects created by using Object.create(null)
let person = Object.create(null);
person.age = 35;

if (Object.hasOwn(person, 'age')) {
  console.log(person.age); // 35
  // works regardless of how the object was created
}

if (person.hasOwnProperty('age')) { // throws TypeError —
  // person.hasOwnProperty is not a function
  console.log('hasOwnProperty' + person.age);
}
I hope this article helped you understand the benefit of using the Object.hasOwn() over
the hasOwnProperty() method in JavaScript. If you found this article helpful, I would appreciate
getting some applause below. (:
What are Promises in JavaScript, and their types
Promises are objects in JavaScript that eventually produce either a resolved value or a reason
why they could not be resolved.
States of promise:
There are 3 states of promise:
1. Fulfilled
2. Rejected
3. Pending
Working of promises:
The initial state of a promise is pending. The Promise constructor takes an executor function
with two parameters, resolve and reject. If the operation succeeds, the executor calls the
resolve callback; if it fails, it calls the reject callback.
If a resolved value is returned then .then() is executed, and if the promise is rejected then
.catch() is executed.
Creating a new Promise
A Promise object is created using the new keyword and its constructor Promise. This constructor
takes a function, called the "executor function", as its parameter. That function in itself should take
two functions as parameters. The first parameter resolve is called when the asynchronous task
completes successfully and returns the results of the task as a value. The second
parameter reject is called when the task fails and returns the reason for failure, which is typically
an error object.
const fakeFetch = (msg, shouldRejectOrNot) => {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      if (shouldRejectOrNot) {
        reject("error from server");
        return; // don't fall through to resolve
      }
      resolve(`Promise successful: ${msg}`);
    }, 3000);
  });
};
fakeFetch("srishti").then(res => console.log(res));
// if nothing is passed then shouldRejectOrNot is falsy
// Output: Promise successful: srishti

fakeFetch("srishti", true).catch(err => console.log(err));
// Output: error from server
Static methods of promise
1. Promise.all()
Waits for all the promises to be resolved or any to be rejected.
If the returned promise resolves, it is resolved with an array of all the values from the resolved
promises, in the same order as defined.
const promise1 = Promise.resolve(3);
const promise2 = 42;

Promise.all([promise1, promise2]).then(value => console.log(value));
// Output: [3, 42]
If it rejects, it is rejected with the reason from the first promise that rejected, as in the
example below.
const promise1 = Promise.reject("promise.all() error");
const promise2 = 42;
const promise3 = Promise.reject();

Promise.all([promise1, promise2, promise3]).catch(reason => console.log(reason));
// Output: promise.all() error
2. Promise.allSettled()
The Promise.allSettled() method returns a promise that resolves after all of the given
promises have either been fulfilled or rejected, with an array of objects, each describing the
outcome of one promise.
const promise1 = Promise.resolve(3);
const promise2 = new Promise((resolve, reject) =>
  setTimeout(reject, 100, "foo"));

const promises = [promise1, promise2];

Promise.allSettled(promises).then((results) =>
  results.forEach((result) => console.log(result.status)));
// Output: fulfilled
//         rejected
3. Promise.any()
Promise.any() takes an iterable of Promise objects. It returns a single promise that resolves
as soon as any of the promises in the iterable fulfills, with the value of the fulfilled
promise. If no promise in the iterable fulfills (all of the given promises are rejected), then
the returned promise is rejected.
const promise1 = Promise.reject(0);
const promise2 = new Promise((resolve) => setTimeout(resolve, 100, 'quick'));
const promise3 = new Promise((resolve) => setTimeout(resolve, 500, 'slow'));

const promises = [promise1, promise2, promise3];

Promise.any(promises).then((value) => console.log(value));
// Output: "quick"
4. Promise.race()
The Promise.race() method returns a promise that fulfills or rejects as soon as one of the
promises in an iterable fulfills or rejects, with the value or reason from that promise.
const promise1 = new Promise((resolve, reject) => {
  setTimeout(resolve, 100, 'one');
});

const promise2 = new Promise((resolve, reject) => {
  setTimeout(resolve, 500, 'two');
});

Promise.race([promise1, promise2]).then((value) => {
  console.log(value);
  // Both resolve, but promise1 is faster
});
// expected output: "one"
5. Promise.reject()
The Promise.reject() method returns a Promise object that is rejected with a given reason.
function resolved(result) {
  console.log('Resolved');
}

function rejected(result) {
  console.error(result);
}

Promise.reject(new Error('fail')).then(resolved, rejected);
// expected output: Error: fail
6. Promise.resolve()
Returns a new Promise object that is resolved with the given value. If the value has a then method,
the returned promise will “follow” that then, adopting its eventual state; otherwise, the returned
promise will be fulfilled with the value.
Promise.resolve('Success').then(function (value) {
  console.log(value); // "Success"
});
So this is all about promises and their methods. I hope you got a better understanding of
promises. Thank you for reading. If you liked the article, please hit the clap button; if you
want to see more articles, follow me on Medium.
If you have any doubts or suggestions when going through this practical, please leave a
comment below.
See you in upcoming articles. Keep in touch with Medium.
Node.js Event Loop
“Loop goes on and on and on and on and on”
Suppose you are driving on a one-lane road with tons of cars all lined up. As long as all the cars are
moving with an appreciable speed there will be no problem at all. But what if a car needs to halt on
the road for a few seconds? Every car behind that car will have to wait and this will cause
inconvenience for everyone.
How is this related to the node.js event loop? Let’s find out.
Node.js
Node.js is an open-source and cross-platform JavaScript runtime environment.
A Node.js application is single threaded which means that there is just one thing happening at a
time just like the one-lane road in the example above. So if you have tons of clients requesting your
server then each of them will have to wait for their turn because the server is serving one client at a
time using a single thread.
Non-blocking I/O
Now suppose a client requested a file read task for a huge file which is going to take too long and
due to single threading the event loop will have to wait for this process to finish before executing
any other JavaScript code. This means that any further execution is blocked until the file read
operation is completed just like the whole traffic is blocked if a car needs to halt for a few seconds.
But, this is not what we observe while working with Node. It is asynchronous and
can perform non-blocking I/O operations, like reading from the network, accessing a database
or the filesystem. It is like, if one car wants to halt then it is raised above in the air and asked to
wait there while all the other cars can continue to move.
Event loop
The event loop is an endless loop, which waits for tasks, executes them and then sleeps until
it receives more tasks.
It is initialized by node when the application starts. It is going to manage all the requests that the
server receives.
The JavaScript code in Node.js is executed by the V8 engine, which is written in C++. Node.js
also relies on a special library called libuv, which performs asynchronous operations and
manages a special thread pool called the libuv thread pool.
Let’s dive deeper into the working of the event loop with a series of steps.
A client makes a request to the server. The event loop will check if the request is synchronous or
asynchronous.
1. Synchronous request: The C++ code, under the hood, will execute any synchronous request
then and there and return the response to the client.
2. Asynchronous request: If any asynchronous request is made then the event loop will check
if there is any C++ primitive available to complete the task with the help of C++ libuv library.
3. If any C++ primitive is available then the event loop will assign the task to that primitive and
the C++ code will complete the task in the main thread itself.
Else the task is assigned to background threads in the thread pool. This thread pool is
composed of four threads which are used to delegate operations that are too heavy for the event
loop.
4. The event loop offloads operations to the system kernel whenever possible. When one of these
operations completes, the kernel responds back to Node.js so that the appropriate callback may
be added to the poll queue to eventually be executed.
Phases of event loop
The event loop has six phases each of which has a queue of callbacks to execute. These six phases
create one cycle, or loop, which is known as a tick. The event loop executes the callbacks of a
specific phase until the queue has been exhausted or the maximum number of callbacks has been
executed. The event loop will then move to the next phase in order.
Between each run of the event loop, Node.js checks if it is waiting for any asynchronous I/O or
timers and shuts down cleanly if there are not any.
Following are the phases of the event loop which it enters in the respective order.
1. Timers: Suppose you want to execute a code after 100ms with the help of setTimeout(). The
event loop will say, “OK, I am registering your request and I will not execute your code before
100ms for sure. I will execute it after 100ms as soon as possible.”
2. Pending callbacks: This phase executes callbacks for some system operations such as types
of TCP errors. For example if a TCP socket receives ECONNREFUSED when attempting to
connect, some *nix systems want to wait to report the error. This will be queued to execute in
the pending callbacks phase.
3. Idle, prepare: This phase is used internally by Node.
4. Poll: The event loop will execute all the callbacks in poll queue synchronously. If the poll
queue was already empty then the event loop will look for requests in the next phase. If there
are no requests in the fourth phase then the event loop will wait for callbacks to be added in the
poll queue and execute them.
Once the poll queue is empty the event loop will check for timers whose threshold is over and
if it finds any then it will go back to the timers phase and execute the callback for those timers
which are over.
5. Check: Callbacks scheduled with the setImmediate() function run in this phase, just after
the poll phase. If the event loop finds that the poll queue is empty, it will not wait for
poll events and will jump directly to the check phase and execute the callbacks there.
6. Close callbacks: If a socket or handle is closed abruptly (e.g. socket.destroy()), the ‘close’
event will be emitted in this phase. Otherwise it will be emitted via process.nextTick().
Summary
The node.js event loop is single threaded and hence it can perform one task at a time. If a time
consuming task is requested from the event loop then it will offload this task to the kernel and the
worker threads and continue to work on other requests. There are certain phases of the event loop
which it traverses in a fixed order and hence it is able to perform asynchronous and non-blocking
tasks.
Why Would We Use Node.js?
One of the most widely used programming languages is JavaScript. Professional developers have
selected the powerful Node.js runtime environment as the most often utilized technology. Node.js
is a JavaScript runtime that is event-driven. Node is an excellent environment for building
efficient network applications, and it offers a lot of potential uses for JavaScript development.
Node.js is a bundled version of Google's V8 JavaScript engine. The basic idea behind Node.js is
to employ non-blocking, event-driven I/O to keep data-intensive real-time applications running
across distributed devices light and efficient. Node.js is a free, open-source server
environment that allows you to construct server-side (backend) web applications using the
JavaScript programming language.
Node.js may be used on a variety of systems (Windows, Linux, Unix, macOS, etc.). We can create
fast and scalable web applications with Node.js. Asynchronous programming (handling several
tasks at the same time) is a distinctive feature of Node.js, as opposed to the synchronous
programming (one task at a time) found in many server-side programming languages like PHP and
Ruby.
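A tiny sketch of this asynchronous behavior: the "slow" task is handed off, and the rest of the program keeps running instead of waiting for it:

```javascript
const log = [];

// The callback is scheduled for later; execution does not pause here.
setTimeout(() => log.push('slow task finished'), 10);

log.push('other work runs immediately');
console.log(log); // [ 'other work runs immediately' ]
```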
Node.js includes a large collection of pre-built packages that will save you time. These
libraries are managed by NPM (the Node Package Manager).
NPM: The Node Package Manager
When talking about Node.js, one feature that should not be overlooked is built-in support for
package management via NPM, a program that gets installed with every Node.js installation by
default. NPM modules are comparable to Ruby Gems in that they are a collection of publicly
available, reusable components that can be installed easily via an online repository and provide
version and dependency management.
The npm website has a complete list of packaged modules, which may be viewed using the npm
CLI tool that comes with Node.js. Anyone can publish a module that will be listed in the npm
repository, and the module ecosystem is available to anyone.
Some of the most useful npm modules today are:
express : Express.js — or just Express — is a Sinatra-inspired Node.js web development
framework that is the de-facto standard for the vast majority of Node.js apps available today.
mongodb and mongojs : MongoDB wrappers in Node.js provide an API for MongoDB object
databases.
lodash : The JavaScript utility belt, a set of tools for working with JavaScript values and
collections. Underscore started the game, but was eventually overtaken by lodash, owing to
superior performance and a modular implementation.
Installing Node JS
Node.js can be downloaded from the official website: [https://nodejs.org/en/]
The Node.js website also has extensive documentation: [https://nodejs.org/en/docs/]
Once the installation is complete, use this command to verify the version installed.
$ node -v
# v16.9.1
Hello World
The REPL is the quickest and most convenient way to run code in Node. Simply type the following
command to open the REPL:
$ node
Welcome to Node.js v16.9.1
Type ".help" for more information
> console.log('Hello World')
Hello World
undefined
Although the REPL allows you to run JavaScript, be aware that it is quite limited; it is best
suited to quick commands or testing purposes.
You'll need to create a file and run it if you want to build a complete application in NodeJS.
Create an app.js file and add the following line:
console.log('Hello World')
NodeJS considers each file to be a module, allowing it to be executed.
To do so, type node followed by the file name in the terminal.
$ node app.js
Hello World
That’s it, you’ve just finished your very first NodeJS application!
You’ll need to use this command whenever you need to launch NodeJS code.
Javascript Design Patterns: Builder
When building a process for an application, for example when dealing with state management or
establishing routines for a particular task, you may find yourself needing to use an object at
different points in its lifecycle.
For example, when doing a database transaction, sometimes you may chain operations in an
instance of your DB connection class for different purposes: Update a record, add an event
listener, create a new record, perform a specific query, and so on. You may want to apply the
Builder design pattern in situations like this.
In an assembly line, each step has a responsibility!
From the design patterns book, we can take the following definition:
The Builder design pattern is a creational pattern where the creation of an object can have
different representations.
Just like in an assembly line, using the builder pattern our object can have different
representations through its creation. For example, in a car assembly line, we have first the chassis,
then a chassis + wheels, chassis + wheels + engine in an accumulating pattern. Using the builder
pattern we have methods within our object that can increment the object, thus having different
representations based on what methods have been applied to the object.
A car is assembled one part at a time!
In this article, we'll see how to implement the builder design pattern in JavaScript to create
a simple jQuery copy.
Building a jQuery "copy"
In this example, we'll create "jQuestion", a class that mimics jQuery to a certain extent.
First, we'll create the famous $ selector and append it to the window, so it behaves just like
jQuery. It will be a function that receives a query and returns an object of our jQuestion
class. Also, remember: use the global scope with caution.
Setup
After that, we may create our builder methods, keeping the following pattern for each one:
• Perform an operation;
• Return this;
The builder method does its operation, changing the current value, and then returns the
object's instance, this. Doing things this way, it's possible to implement the famous chaining
of operations that is characteristic of jQuery, D3, and many other JS libraries.
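Since the snippets in this article appear as images, here is a hedged, DOM-free sketch of what such a builder might look like. The jQuestion name comes from the article, but the method names are illustrative, and plain objects stand in for DOM elements (a real version would wrap document.querySelectorAll, which is unavailable outside a browser):

```javascript
// Minimal sketch of the builder pattern behind a jQuery-like API.
class JQuestion {
  constructor(elements) {
    this.elements = elements;
  }

  // Each builder method performs its operation, then returns `this`
  // so that calls can be chained.
  text(value) {
    this.elements.forEach((el) => (el.innerText = value));
    return this;
  }

  attr(name, value) {
    this.elements.forEach((el) => (el[name] = value));
    return this;
  }
}

const $ = (elements) => new JQuestion(elements);

const fakeNodes = [{}, {}];
$(fakeNodes).text('hello').attr('id', 'greeting');
console.log(fakeNodes[0]); // { innerText: 'hello', id: 'greeting' }
```

Each call mutates the wrapped elements and hands back the same instance, which is exactly what makes the chained style possible.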
innerText and attribute for Jquestion
And there we go, we have a builder implementation of Jquestion.
Code snippet
Nowadays you may ask yourself: why do things with jQuery if the document object already
provides an excellent interface for interacting with the DOM? Because back when jQuery was
created, there was no single DOM API. If you wanted to do a selection, it would vary with the
browser and the version of that browser. Imagine having to account for Chrome, IE, Firefox,
Safari, and version changes over time. True chaos.
So to solve that, jQuery uses a Builder pattern for its operations and a Facade over them,
abstracting the DOM operations for each browser behind a simple API. Marvelous design patterns
applied in real-world applications!
Wrap Up
As with all design patterns, and ways of structuring code, the builder pattern is situational. It
shines when dealing with objects where different representations matter. It also can make
debugging easier in many situations, because each step can be debugged individually.
However, when applied in a case where a fixed set of steps is always used to construct an
instance of the class, it just becomes extra bureaucracy while coding. Design patterns are not
silver bullets; always think before applying one, and evaluate how your application currently
works.
Node.js Basic Concept
What is Node.js
Node.js is a server environment. It’s not a programming language. Node.js is a single-threaded,
non-blocking open-source, and completely free asynchronous event-driven JavaScript runtime.
It’s a very powerful JavaScript-based platform built on Google Chrome’s V8 engine, the same engine that compiles and executes JavaScript in the Chrome browser.
How Node.js Works.
A common task for a web server can be to open a file on the server and return the content to the
client. Here is how Node.js handles a file request.
• Send the task to the computer's file system
• Stay ready to handle the next request
• When the file system has opened and read the file, return the content to the client
Node.js uses non-blocking, single-threaded, asynchronous programming, which is very fast and memory efficient.
Why Node.js
Node.js uses JavaScript on the server. Node.js runs on various platforms, like Windows, macOS, Linux, etc. It is a very powerful server-side platform for developing modern, reliable & scalable applications, trusted by global companies such as Uber, Netflix, and LinkedIn. Node.js is lightweight and super fast, especially for web applications.
This single-threaded, event-driven architecture allows it to handle multiple concurrent connections efficiently. The main advantages of using Node.js are:
1. Single Thread
Node.js is single-threaded with an event loop model, inspired by JavaScript's event-based model and its callback mechanism. Node.js can deal with many events at the same time on a single thread, and just one thread handles input and output. These features make it efficient, scalable, and lightweight, consuming little memory.
2. Non-blocking
Non-blocking means your code execution never stops in Node.js. A non-blocking method receives input and returns output asynchronously, and non-blocking operations allow a single process to serve multiple requests at the same time. That’s why non-blocking is an efficient and effective way to build scalable applications.
3. Asynchronous
Asynchronous (or async) execution refers to execution that doesn’t run in the sequence it appears
in the code. In async programming, the program doesn’t wait for the task to complete and can
move on to the next task.
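A tiny sketch of async execution: the program kicks off a slow task and moves on without waiting for it.

```javascript
const order = [];

async function slowTask() {
  order.push('task started');
  // Simulate waiting for I/O without blocking the thread.
  await new Promise((resolve) => setTimeout(resolve, 10));
  order.push('task finished');
}

slowTask();             // started, but not awaited
order.push('moved on'); // runs before the task finishes
```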
4. Scalable
Scalability is a core characteristic of Node.js. Scalable applications depend not just on the system but on the application architecture used. Node.js is scalable thanks to load balancing: you can give Node multiple tasks to process, and it handles them with no significant burden.
5. Event-Driven
Event-driven programming is a programming paradigm in which the flow of the program is determined by events. This means the control flow of these server-side platforms is driven by the occurrence of events, and Node.js is an event-driven technology. When a Node.js application starts, an event listener called the event loop begins to wait for events and doesn’t stop until the application is shut down.
6. No-Buffering
Node.js applications never buffer data. Users can, for example, watch videos without any interruption.
7. Node Package Manager (npm)
npm, short for Node Package Manager, is the world’s largest free and open-source library of functionality, which can be easily imported and used in any Node application. Put simply, if you need a tool in your application, it can probably be found in npm. npm is the default package manager for Node.js, and it gets installed on the system when Node.js is installed.
Node.js can do…
Node.js can generate dynamic page content, and it can create, read, write, open, delete, and close files on the server. Node.js can collect form data. It also can add, delete, and update data in your database. Node.js can enable high-volume web streaming. The Node.js file extension is “.js”. The top uses of Node.js are:
• MEAN Stack Development
• Collecting Data
• Streaming
• Real-Time Applications
• Chat Rooms
• Fast and Scalable Applications
• Browser Games
• Processing Queued Input
Node.js Architectures.
1. Event Queue
As soon as requests reach the application, they go to the event queue: a queue where all the events that occur in the application land first, and where they wait to be sent for processing to the main thread, the event loop.
2. Event Loop
The event loop is the fundamental concept of Node.js. It is the mechanism that makes Node a successful, efficient, and powerful framework. The event loop allows Node.js to perform non-blocking I/O operations despite the fact that JavaScript is single-threaded. It does this by handing operations off to the operating system whenever possible. Node.js uses an observer pattern.
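A small sketch of this ordering: callbacks queued for the event loop only run after the current synchronous code has finished.

```javascript
const order = [];

// Queued for later phases of the event loop:
setTimeout(() => order.push('timer callback'), 0);
process.nextTick(() => order.push('nextTick callback'));

// Synchronous code always finishes first; the queued callbacks
// run only once the current call stack is empty.
order.push('synchronous code');
```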
3. Thread pool
Node.js runs on top of a multi-threaded library called libuv, written in C. A blocking operation is processed asynchronously in the background until it’s completed and ready to be returned. Worker thread pools in Node.js are a group of running worker threads available to take incoming tasks. When a new task comes in, it can be passed to an available worker. Once the worker completes the task, it passes the result back to the parent, and that worker is again available to accept new tasks.
Latest Happenings in JavaScript
New Developments in UI with Java Script
In the world of UI development, most things now look good and settled with the arrival of libraries like React, Vue, Angular, Next.js, etc. Developers have moved from an imperative way of programming to a declarative way. Imperative programming is like giving a chef step-by-step instructions on how to make a pizza. Declarative programming is like ordering a pizza without being concerned about the steps it takes to make it. Most of the latest libraries or frameworks that devs work with now are declarative.
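The same contrast in a few lines of JavaScript (a toy example, not from any framework):

```javascript
const toppings = ['cheese', 'ham', 'mushroom'];

// Imperative: spell out every step of building the result.
const imperative = [];
for (let i = 0; i < toppings.length; i++) {
  imperative.push(toppings[i].toUpperCase());
}

// Declarative: describe the result and let map() handle the steps.
const declarative = toppings.map((t) => t.toUpperCase());
```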
Well, I think that’s true!
But, as we know the universe is ever expanding. The same goes for the world of UI development.
So, here we are to understand all the latest happenings and developments in the world of
JavaScript which in turn changes notions and the subsequent things in the UI development.
“Change is inevitable, Change is a force that always pushes things forward.
Whether you’re comfortable with it or not, things will change, and preparing yourself for
that change helps you to be better equipped for the future”.
1. Fresh and Deno
It’s dripping Fresh.
Fresh is a full-stack web framework for Deno that lets developers build fast server-rendered web apps. It has some awesome features: it ships zero JavaScript to the browser by default, it uses the “island architecture” (explained further below) to limit JavaScript to specific components, and it can be deployed to the edge. It requires no build step during development. It is a full-stack framework for server-side rendering, like Ruby on Rails or Laravel, but you write your apps in TypeScript because it is built on top of Deno. Deno is a runtime alternative to Node.js, created by Ryan Dahl (also the creator of Node.js). The benefit of using Deno for a web framework is that you get first-class TypeScript support out of the box, and you can build and deploy your app without a build step, which cuts build times significantly.
Thank you, Ryan Dahl. Hope you’ve rectified your Node.js mistakes!
Fresh can be deployed to the edge instantly. The simplicity of Deno is really awesome. One drawback, though, is that Deno has a much smaller ecosystem than Node.js, and not all Node packages are compatible.
Island Architecture..?
This is also known as partial hydration. The idea is that you build a website using a JavaScript framework; in this case, Fresh uses Preact for the UI. But instead of sending JavaScript code to the browser, you render everything to static HTML on the server. That means by default the only thing the end user gets is a static HTML page, which is much faster for the browser to load and render. In many cases, though, the website will need more interactivity than just static HTML, and that’s where islands come in.
When a website needs more interactivity than static HTML, it can opt into JavaScript for individual components. The way it works in Fresh is that any component put in the islands directory will ship JavaScript to the browser, while other components will be rendered as static HTML. Fresh is also based on web standards like the Fetch API. It uses Remix-style form submissions: when submitting a regular HTML form, you can write TypeScript code that handles that form submission directly in the component file for that page. Please visit the documentation section of Fresh to understand the concepts and learn Fresh. If you are well versed in Next.js or React, you are almost there and will understand it easily.
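Stripped of any framework, the island idea can be sketched like this (all names here are hypothetical, not Fresh's actual API):

```javascript
// Every component renders to static HTML; only components flagged
// as islands contribute client-side JavaScript.
function renderPage(components) {
  const html = [];
  const clientScripts = [];
  for (const c of components) {
    html.push(c.render());
    if (c.island) clientScripts.push(c.script);
  }
  return { html: html.join('\n'), clientScripts };
}

const page = renderPage([
  { render: () => '<header>Static header</header>' },
  { island: true, render: () => '<button>0</button>', script: 'counter.js' },
]);
// page.html is pure HTML; only counter.js would be shipped to the browser.
```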
2. Bun
If you are a dev who always loves to do something new and different, then here is another JavaScript runtime that needs neither Node nor Deno. Bun is a new JavaScript runtime with a native bundler, transpiler, task runner, and npm client built in. It claims to be significantly faster than Node or Deno.
That’s very Fast..!!
JavaScript Runtime…?
In short, a JavaScript runtime refers to where your JavaScript code is executed when you run it. The most popular JavaScript engine is Google's V8, which powers Chrome, Node, and Deno and makes JavaScript execution extremely fast with JIT (Just-In-Time) compilation. How Bun performs way better than the rest mentioned above is that it does not use the V8 engine but instead uses JavaScriptCore from WebKit, which is generally considered to be faster but more difficult to work with. In addition, it is written in a low-level programming language called Zig. Zig is a relatively new language, similar to C or Rust. The creators of Bun say its lack of hidden control flow makes it much simpler to write fast software. It's fun to go fast, but more importantly Bun is an all-in-one runtime. It has a native bundler to replace tools like webpack, and also a native transpiler, so you can write TypeScript code out of the box, with cool features like top-level await (an idea already pioneered by Deno). Bun will also transpile your JSX files. Also like Deno, it prioritizes web APIs like fetch while at the same time supporting many Node core modules as well as the Node API, which will allow many npm packages to also work in Bun. In fact, it implements Node's module resolution algorithm, which means you can install packages from npm into Bun, and those packages install 20 times faster. It feels like magic. Another nice ergonomic feature is that environment variables load automatically; it's not like Node, where you have to install a package like dotenv in every project. Bun also comes with its own test runner, similar to Jest, and, as you might imagine, it's fast.
Conclusion
So things are shaping up to be pretty awesome for the future, and all these enhanced features and dev-friendly things might entice us to start a new project on them. But keep in mind these libraries are still in their infancy; there can be bugs and issues while working with them. Still, what better way to learn than trying these libraries and, as you implement them, learning from the mistakes, bugs, and issues. This will build your confidence in these libraries, and in any new ones coming our way as we move ahead.
Dockerize and deploy Node.js applications using GitHub
Actions and Packages
Prerequisites
• Node.js and npm installed
• Docker installed and configured
• An existing repository on GitHub
• A configured droplet on DigitalOcean
What are GitHub Actions?
GitHub Actions is an API for cause and effect on GitHub: orchestrate any workflow, based on any event, while GitHub manages the execution, provides rich feedback, and secures every step along the way.
What is Docker?
Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications.
What is a DigitalOcean Droplet?
DigitalOcean Droplets are Linux-based virtual machines (VMs) that run on top of virtualized hardware. Each Droplet you create is a new server you can use, either standalone or as part of a larger, cloud-based infrastructure.
In this tutorial, we’ll use GitHub Packages as a container registry for our Docker image.
Note: GitHub Container Registry is currently in public beta and subject to change. During
the beta, storage and bandwidth are free. To use GitHub Container Registry, you must
enable the feature preview. For more information, see “About GitHub Container Registry”
and “Enabling improved container support.”
Prepare our Node.js application
Install dependencies
First of all, we need to create a package.json file.
The package.json file defines the dependencies that should be installed with your application. To
create a package.json file for your app, run the command npm init in the root directory of your
app. It will walk you through creating a package.json file. You can skip any of the prompts by
leaving them blank.
$ cd pathtoyourrepo
$ npm init
name: (nodejs-deploy)
version: (1.0.0)
description: Node.js on DO using Docker and Github Actions
entry point: (server.js)
test command:
git repository:
keywords:
author: First Last <first.last@example.com>
license: (ISC) MIT
The generated package.json file looks like this:
{
"name": "nodejs-deploy",
"version": "1.0.0",
"description": "Node.js on DO using Docker and Github Actions",
"author": "First Last <first.last@example.com>",
"main": "server.js",
"scripts": {
"start": "node server.js"
}
}
To install dependencies, use npm install <pkg>. It installs the package and also adds it as a
dependency in the package.json file. For example, to install express, you would type npm install
express.
$ npm install express
Now your package.json file should include express in its dependencies (the exact version may differ):
{
"name": "nodejs-deploy",
"version": "1.0.0",
"description": "Node.js on DO using Docker and Github Actions",
"author": "First Last <first.last@example.com>",
"main": "server.js",
"scripts": {
"start": "node server.js"
},
"dependencies": {
"express": "^4.17.1"
}
}
Create server.js file
As you may see, we declared that our entry point is a server.js file. Let's create one.
This file would contain an express application with one simple GET endpoint, which would allow us
to test deployment.
First of all, let’s import express and declare our endpoint:
'use strict';
const express = require('express');
const PORT = 8080;
const HOST = '0.0.0.0';
const app = express();
app.get('/', (_, res) => {
res.send({
message: "It's on DigitalOcean!",
});
});
const server = app.listen(PORT, HOST, () => {
console.log(`Running on http://${HOST}:${PORT}`);
});
$ npm start
Running on http://0.0.0.0:8080
Now let’s check if our application is listening for requests by accessing http://localhost:8080. I’ll
use Postman for this.
And it works!
Now we can proceed to dockerizing our application.
Dockerize Node.js app
To achieve our goal, first of all, we need to create the Dockerfile. According to the documentation,
a Dockerfile is a text document that contains all the commands a user could call on the command
line to assemble an image.
Simple Dockerfile
FROM node
# Create app directory
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm install
CMD "npm" "start"
You can already build and run your container and it will work, but maybe we can do it better? Of
course!
Let’s specify the version of the base image:
FROM node:14-alpine
Then let’s take a look at the dependency installation. We’re preparing a production build of the
application, so we don’t need dev dependencies to be installed. We can fix it by changing RUN npm
install to:
# Install only production dependencies from lock file
RUN npm ci --only=production
Another step is to ensure that all frameworks and libraries are using optimal configuration for
production. We can do it by adding this line to our Dockerfile:
# Optimise for production
ENV NODE_ENV production
Don’t run containers as root
It’s really important to keep your process without security risks!
Friends don’t let friends run containers as root!
So, let’s change few more lines in our Dockerfile:
# Copy app files with permissions for node user
COPY --chown=node:node . /usr/src/app
# friends don’t let friends run containers as root!
USER node
Our application is listening on port 8080, so we need to expose this port from the container:
EXPOSE 8080
At this point our Dockerfile looks like this:
FROM node:14-alpine
# Optimise for production
ENV NODE_ENV production
# Create app directory
WORKDIR /usr/src/app
# Copy app files
COPY --chown=node:node . /usr/src/app
# Install only production dependencies
RUN npm ci --only=production
# friends don’t let friends run containers as root!
USER node
# Make port 8080 accessible outside of the container
EXPOSE 8080
CMD "npm" "start"
Let’s build and run our image:
$ docker build . -t nodejs-deploy
$ docker run -d -p 8080:8080 --name=nodejs-deploy nodejs-deploy:latest
You can check if it’s running by typing the command:
$ docker ps
And you can see the container’s logs with the following command:
$ docker logs nodejs-deploy
Graceful Shutdown
Node.js has integrated web server capabilities. Plus, with Express, these can be extended even
more.
Unfortunately, Node.js does not handle shutting itself down very nicely out of the box. This causes
many issues with containerized systems.
When a Node.js application receives an interrupt signal, also known as SIGINT (CTRL+C), the process is killed abruptly, unless event handlers were set to handle it differently. This means that clients connected to the web application will be immediately disconnected.
Let’s simulate this problem by creating another endpoint with delayed response:
app.get('/delayed', async (_, res) => {
const SECONDS_DELAY = 60000; // 60 seconds, in milliseconds
await new Promise((resolve) => {
setTimeout(() => resolve(), SECONDS_DELAY);
});
res.send({ message: 'delayed response' });
});
Run this application and once it’s running send a simple HTTP request to this endpoint.
Hit CTRL+C in the running Node.js console window and you'll see that the curl request exited
abruptly. This simulates the same experience your users would receive when containers tear down.
Part 1
To fix this we need to allow requests to be finished. Let’s explain it to our Node.js server:
// Graceful shutdown
function closeGracefully(signal) {
console.log(`Received signal to terminate: ${signal}`);
server.close(() => {
// await db.close() if we have a db connection in this app
// await other things we should cleanup nicely
console.log('Http server closed.');
process.exit(0);
});
}
process.on('SIGINT', closeGracefully);
process.on('SIGTERM', closeGracefully);
This calls server.close(), which will instruct the Node.js HTTP server to:
• Not accept any more requests
• Finish all running requests
It will do this on SIGINT (when you press CTRL+C) or on SIGTERM (the standard signal for a process to terminate).
You may ask: “What if a request is taking too much time?” If the container has not stopped, Docker and Kubernetes will send a SIGKILL after a couple of seconds (usually 30), which cannot be handled by the process itself, so this is not a concern for us.
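If you don't run under an orchestrator that sends SIGKILL, you can add your own deadline. A hedged sketch (the helper name and timeout are illustrative, not part of the tutorial's code):

```javascript
// Close the server gracefully, but give up after a deadline.
function closeWithDeadline(server, onClosed, deadlineMs = 10000) {
  const timer = setTimeout(() => onClosed('forced'), deadlineMs);
  server.close(() => {
    clearTimeout(timer);
    onClosed('graceful');
  });
}

// Works with any object exposing close(callback), e.g. an http.Server.
// Here a stand-in "server" closes immediately:
let result;
const fakeServer = { close(cb) { cb(); } };
closeWithDeadline(fakeServer, (how) => { result = how; });
// result is 'graceful'
```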
Part 2
Now in our Dockerfile, we're starting our application with the command npm start. Unfortunately, there is a big problem with this:
If yarn or npm get a SIGINT or SIGTERM signal, they correctly forward the signal to the spawned child process (in this case node server.js). However, they do not wait for the child process to stop. Instead, yarn/npm immediately stop themselves.
The solution is not to run the application using npm and instead use node directly:
CMD ["node", "server.js"]
But there still is a problem. Docker is running our process as PID 1. According to Node.js Docker
Workgroup Recommendations:
Node.js was not designed to run as PID 1 which leads to unexpected behavior when running
inside of Docker. For example, a Node.js process running as PID 1 will not respond
to SIGINT ( CTRL-C) and similar signals.
We can use a tool called dumb-init to fix it. It'll be invoked as PID 1 and will then spawn our Node.js process as a child process. Let's add it to our Dockerfile:
# Add tool which will fix init process
RUN apk add dumb-init
...
CMD ["dumb-init", "node", "server.js" ]
So the final version of our Dockerfile looks like this:
FROM node:14-alpine
# Add tool which will fix init process
RUN apk add dumb-init
# Optimise for production
ENV NODE_ENV production
# Create app directory
WORKDIR /usr/src/app
# Copy app files
COPY --chown=node:node . /usr/src/app
# Install only production dependencies
RUN npm ci --only=production
# friends don’t let friends run containers as root!
USER node
# Make port 8080 accessible outside of container
EXPOSE 8080
CMD ["dumb-init", "node", "server.js" ]
And now we can proceed to our Github Actions!
Configure the Github Actions
Introduction
Go to your repository and select the Actions tab. You will see that GitHub proposes different workflows, but that's not our approach. Click on set up a workflow yourself.
We'll be redirected to a page with the initial config; it'll be committed to the main (master) branch when we finish our configuration.
Let’s talk a little about the initial config, it should look like this:
# This is a basic workflow to help you get started with Actions
name: CI
# Controls when the action will run.
on:
# Triggers the workflow on push or pull request events but only for the master branch
push:
branches: [master]
pull_request:
branches: [master]
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
# This workflow contains a single job called "build"
build:
# The type of runner that the job will run on
runs-on: ubuntu-latest
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v2
# Runs a single command using the runner's shell
- name: Run a one-line script
run: echo Hello, world!
# Runs a set of commands using the runner's shell
- name: Run a multi-line script
run: |
echo Add other actions to build,
echo test, and deploy your project.
• name - the name of our workflow.
• on - the block where we describe what will trigger our workflow. By default, it's triggered when a push is performed to the master branch (in this case the master branch is accessed) or when a Pull Request is made into the master branch (in this case the source branch is accessed, e.g. feature/TASK-1). And we can trigger it manually; that's allowed by the workflow_dispatch property.
• jobs - the block in which our jobs are configured. They can run one by one, or simultaneously (e.g. deploying backend and frontend at once in a monorepo).
• runs-on - the type of machine to run the job on. The machine can be either a GitHub-hosted runner or a self-hosted runner.
• steps - the place where our logic lives. Each step runs in its own process in the runner environment and has access to the workspace and filesystem.
• uses - selects an action to run as part of a step in your job. An action is a reusable unit of code. In this case, the predefined GitHub action actions/checkout@v2 is called, which allows us to check out the source branch (master or another one that triggered the workflow).
• name - the name of the step. It'll be shown in the progress of workflow execution.
• run - runs command-line programs using the operating system's shell. If you do not provide a name, the step name will default to the text specified in the run command. It can execute a one-line command or multiline commands as well.
You can find more detailed documentation in the Workflow Syntax documentation.
Build and push
Now we have enough knowledge to start working on our configuration. Let’s define the name of
our workflow and when it’ll be triggered. In our case workflow should be executed only on changes
in the master branch or manually, so our declarations will look like this:
name: Build, Push and Deploy Node.js app
# Controls when the action will run.
on:
# Triggers the workflow on push events but only for the master branch
push:
branches: [master]
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
Now we need to declare some env variables to be able to reuse them in our configuration to avoid
repeating the same things:
env:
REGISTRY: docker.pkg.github.com # we will push our docker image to GitHub Packages
REPO: tfarras/nodejs-deploy/nodejs-image # the name of our image, used to push or pull it
CONTAINER: nodejs-image # the name of the container, used to stop or start it
It’s time to define our jobs. In our case there will be two jobs, one will build and push the image to
the registry and another to pull and run the container on our droplet.
To build and push the container to the registry we’ll use the docker/build-push-action@v1 action,
you can find detailed documentation here.
jobs:
push_to_registry: # name of our first job
name: Push Docker image to GitHub Packages # User-friendly name which is displayed in the process of execution
runs-on: ubuntu-latest # this job should be run on the ubuntu-latest runner
steps:
- name: Check out the repo # name of the first step, it'll `checkout` the latest commit in the master branch
uses: actions/checkout@v2
- name: Push to GitHub Packages # name of the second step
uses: docker/build-push-action@v1 # declare that we're going to use this action
with: # block which receives configuration for the used action
username: ${{ github.actor }} # github username
password: ${{ secrets.GITHUB_TOKEN }} # github password or github access token
registry: ${{ env.REGISTRY }} # our REGISTRY env variable declared in the section above
repository: ${{ env.REPO }} # our REPO env variable
tag_with_ref: true # Automatically tags the built image with the git reference. (from the doc)
At this point our workflow config should look like this:
name: Build, Push and Deploy Node.js app
# Controls when the action will run.
on:
# Triggers the workflow on push events but only for the master branch
push:
branches: [master]
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
env:
REGISTRY: docker.pkg.github.com
REPO: tfarras/nodejs-deploy/nodejs-image
CONTAINER: nodejs-image
jobs:
push_to_registry:
name: Push Docker image to GitHub Packages
runs-on: ubuntu-latest
steps:
- name: Check out the repo
uses: actions/checkout@v2
- name: Push to GitHub Packages
uses: docker/build-push-action@v1
with:
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
registry: ${{ env.REGISTRY }}
repository: ${{ env.REPO }}
tag_with_ref: true
As you can see we’re using github.actor and secrets.GITHUB_TOKEN, and you most probably have
questions, where we declared these variables. Answer: we don't.
These variables are predefined by GitHub.
• github.actor - the login of the user that initiated the workflow run; it is part of the github context. You can read more about it here.
• secrets.GITHUB_TOKEN - a token provided by GitHub, created on each workflow run. You can use the GITHUB_TOKEN to authenticate in a workflow run. Learn more here.
This action can already be used if you just want to build and push your container. It's suitable if you're just working on a Docker image that should only be stored in the registry, to be pulled when you need it.
But in our case we need also to deploy it, so let’s configure our second job.
Deploy: Pull and run
Our second job has the responsibility to connect to our droplet via ssh, pull the container and run
the docker container. It’ll also run on ubuntu-latest runner and it should start only after our
previous job called push_to_registry. So, our job declaration will look like this:
deploy: # name of the second job
needs: [push_to_registry] # specify that it's dependent on the push_to_registry job
name: Deploy to DigitalOcean # user-friendly name of the job
runs-on: ubuntu-latest # specify runner
Before steps configuration, we need to add some more variables, namely SSH_KEY, SSH_USER,
and SSH_HOST. These variables will be used to authenticate our ssh connection to the droplet. But
like other secrets of our application, it's a very bad idea to store them in the repository files, so we
need another, more secure, way to declare them. And GitHub provides one - it's called Secrets and
you can find them in the Settings tab of your repository in GitHub.
Secrets can be of two types: Repository and Environment secrets. You can learn more about them
in the documentation. In our case, we’ll use Repository secrets, so go to the configuration page and
click on the New repository secret button.
As specified before, we need three variables:
• SSH_KEY - your private key used to access the droplet.
• SSH_USER - the username used to access the droplet via ssh.
• SSH_HOST - the host of your droplet.
Once they’re set, you’ll see the following result. These secrets cannot be seen again even by the
repository owner, they can be only updated or removed.
Now we can continue with our steps configuration. To perform SSH connection we'll
use webfactory/ssh-agent action. More details and description you can find here.
Let’s configure the SSH connection:
steps:
- name: Setup SSH connection # name of this step
uses: webfactory/ssh-agent@v0.5.1 # action which is used
with:
ssh-private-key: ${{ secrets.SSH_KEY }} # provide private key which we added before
According to the documentation, this action will not update the known_hosts file for us, so let's declare another step which will update this file using ssh-keyscan:
- name: Adding Known Hosts
run: ssh-keyscan -H ${{ secrets.SSH_HOST }} >> ~/.ssh/known_hosts # scan and add hosts
Now it’s time to add a command which will pull our image to the droplet:
- name: Pull latest container
run: |
ssh ${{secrets.SSH_USER}}@${{secrets.SSH_HOST}} "docker pull ${{env.REGISTRY}}/${{env.REPO}}:latest"
In this command, we specified that we need to connect via ssh using our user and host and run the
command to pull the latest version of our docker image.
Now we need to run our container:
- name: Start docker container
run: |
ssh ${{secrets.SSH_USER}}@${{secrets.SSH_HOST}} "docker run -p 8080:8080 -d --restart unless-stopped --name=${{env.CONTAINER}} ${{env.REGISTRY}}/${{env.REPO}}:latest"
In this step, we also connect via ssh, but let's take a closer look at the docker command:
• docker run - runs the container itself.
• -p 8080:8080 - binds the port exposed from the container (8080) to the local port of the machine (droplet).
• -d - runs the container in detached mode.
• --restart unless-stopped - the container should be restarted unless it's stopped manually; it will also start on machine startup.
• --name=${{env.CONTAINER}} - the name under which the container will be started.
• ${{env.REGISTRY}}/${{env.REPO}}:latest - the image we need to run as a container.
At this point our configuration will look like this:
name: Build, Push and Deploy Node.js app
# Controls when the action will run.
on:
# Triggers the workflow on push events but only for the master branch
push:
branches: [master]
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
env:
REGISTRY: docker.pkg.github.com
REPO: tfarras/nodejs-deploy/nodejs-image
CONTAINER: nodejs-image
jobs:
push_to_registry:
name: Push Docker image to GitHub Packages
runs-on: ubuntu-latest
steps:
- name: Check out the repo
uses: actions/checkout@v2
- name: Push to GitHub Packages
uses: docker/build-push-action@v1
with:
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
registry: ${{ env.REGISTRY }}
repository: ${{ env.REPO }}
tag_with_ref: true
deploy:
needs: [push_to_registry]
name: Deploy to DigitalOcean
runs-on: ubuntu-latest
steps:
- name: Setup SSH connection
uses: webfactory/ssh-agent@v0.5.1
with:
ssh-private-key: ${{ secrets.SSH_KEY }}
- name: Adding Known Hosts
run: ssh-keyscan -H ${{ secrets.SSH_HOST }} >> ~/.ssh/known_hosts
- name: Pull latest container
run: |
ssh ${{secrets.SSH_USER}}@${{secrets.SSH_HOST}} "docker pull ${{env.REGISTRY}}/${{env.REPO}}:latest"
- name: Start docker container
run: |
ssh ${{secrets.SSH_USER}}@${{secrets.SSH_HOST}} "docker run -p 8080:8080 -d --name=${{env.CONTAINER}}
${{env.REGISTRY}}/${{env.REPO}}:latest"
Looks pretty good now, doesn't it? But it has some issues that will fail our workflow if we run it now.
To pull images from the GitHub container registry, we need to authenticate to it. We'll do this using the github.actor and secrets.GITHUB_TOKEN variables as well. So let's add one more step before pulling the image:
- name: Login to the GitHub Packages Docker Registry
  run: ssh ${{secrets.SSH_USER}}@${{secrets.SSH_HOST}} "docker login ${{env.REGISTRY}} -u ${{github.actor}} -p ${{secrets.GITHUB_TOKEN}}"
But for security reasons, it's not a good idea to leave Docker authenticated to a registry on the remote machine, so we need to add a step at the end of our workflow to log out from the registry:
- name: Logout from the GitHub Packages Docker Registry
  run: ssh ${{secrets.SSH_USER}}@${{secrets.SSH_HOST}} "docker logout ${{env.REGISTRY}}"
With these steps, we solved the authentication issue, but there is one more: on the second run, our workflow will fail.
Why? The reason is simple: the port and the name of our container are already in use from the previous run.
How to fix it? The fix is pretty simple: we just need to stop and remove the previous container.
Let’s add two more steps just before starting our container:
- name: Stop deployed container
continue-on-error: true
run: |
ssh ${{secrets.SSH_USER}}@${{secrets.SSH_HOST}} "docker stop ${{env.CONTAINER}}"
- name: Remove deployed container
continue-on-error: true
run: |
ssh ${{secrets.SSH_USER}}@${{secrets.SSH_HOST}} "docker rm ${{env.CONTAINER}}"
You probably have a question: "Why do we need the continue-on-error property here?" The reason is that these commands will throw an error if there isn't any running or existing container with our container's name. That's not a problem for our workflow, so we'll just skip these errors.
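The same tolerance can also be expressed at the shell level instead of in the workflow configuration. A minimal sketch (using a stand-in function rather than a real docker call, which is an assumption for illustration):

```shell
# Sketch: shell-level alternative to continue-on-error. Appending
# `|| true` swallows the failure of a command, so a missing container
# would not abort the step. `fake_stop` stands in for `docker stop`
# failing with "No such container".
set -e                            # normally, any failing command aborts the step
fake_stop() { return 1; }         # simulates the failing docker command
fake_stop nodejs-image || true    # the failure is tolerated here
STATUS="continued"
echo "workflow ${STATUS}"
```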
The final version of our workflow configuration will look like this:
name: Build, Push and Deploy Node.js app

# Controls when the action will run.
on:
  # Triggers the workflow on push events but only for the master branch
  push:
    branches: [master]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

env:
  REGISTRY: docker.pkg.github.com
  REPO: tfarras/nodejs-deploy/nodejs-image
  CONTAINER: nodejs-image

jobs:
  push_to_registry:
    name: Push Docker image to GitHub Packages
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v2
      - name: Push to GitHub Packages
        uses: docker/build-push-action@v1
        with:
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
          registry: ${{ env.REGISTRY }}
          repository: ${{ env.REPO }}
          tag_with_ref: true

  deploy:
    needs: [push_to_registry]
    name: Deploy to DigitalOcean
    runs-on: ubuntu-latest
    steps:
      - name: Setup SSH connection
        uses: webfactory/ssh-agent@v0.5.1
        with:
          ssh-private-key: ${{ secrets.SSH_KEY }}
      - name: Adding Known Hosts
        run: ssh-keyscan -H ${{ secrets.SSH_HOST }} >> ~/.ssh/known_hosts
      - name: Login to the GitHub Packages Docker Registry
        run: ssh ${{secrets.SSH_USER}}@${{secrets.SSH_HOST}} "docker login ${{env.REGISTRY}} -u ${{github.actor}} -p ${{secrets.GITHUB_TOKEN}}"
      - name: Pull latest container
        run: |
          ssh ${{secrets.SSH_USER}}@${{secrets.SSH_HOST}} "docker pull ${{env.REGISTRY}}/${{env.REPO}}:latest"
      - name: Stop deployed container
        continue-on-error: true
        run: |
          ssh ${{secrets.SSH_USER}}@${{secrets.SSH_HOST}} "docker stop ${{env.CONTAINER}}"
      - name: Remove deployed container
        continue-on-error: true
        run: |
          ssh ${{secrets.SSH_USER}}@${{secrets.SSH_HOST}} "docker rm ${{env.CONTAINER}}"
      - name: Start docker container
        run: |
          ssh ${{secrets.SSH_USER}}@${{secrets.SSH_HOST}} "docker run -p 8080:8080 -d --restart unless-stopped --name=${{env.CONTAINER}} ${{env.REGISTRY}}/${{env.REPO}}:latest"
      - name: Logout from the GitHub Packages Docker Registry
        run: ssh ${{secrets.SSH_USER}}@${{secrets.SSH_HOST}} "docker logout ${{env.REGISTRY}}"
Now we can commit and push our workflow to the master branch!
The workflow should be triggered automatically, since we performed a push to the master branch.
If you did everything right, you will not get any errors in the execution:
And now it’s time to check that our deployed application works on the remote server. Let’s run a query to your host:8080, or to a domain if one is configured on your machine:
As you can see everything works great!
Conclusion
In this tutorial, we created a Node.js application, dockerized it according to best practices, and then
deployed it using GitHub Actions, GitHub Packages, and a DigitalOcean droplet.
Note: GitHub Packages can be substituted with another container registry according to the
action documentation, and another VPS can be used instead of DigitalOcean. You're free to
customize this configuration according to your needs.
Uploading a Node.js app on cPanel using Namecheap
In this article, I will explain how to deploy a Node.js application to cPanel using Namecheap. This
article will cover how to organise your app, how to create a Node.js app within cPanel, and the
potential changes you will need to make to your code.
File Structure
In order to create a Node.js Application, you need to place all of the code onto Cpanel’s file system.
Creating a good file structure is a good way to organise all of your applications.
Personally, I create a folder called “nodejs” and store all my Node.js apps in subfolders within that
folder. For this example, we will name our project “nodeApp”; inside that folder we will put all the
source code (not the node_modules folder, we will do this later). Below is how this should look!
File Structure for node.js apps
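In case the screenshot doesn't render, the layout described above looks roughly like this (file names such as server.js are placeholders for your own entry point):

```
nodejs/
  nodeApp/          <- all of the app's source code (no node_modules yet)
    package.json
    server.js       <- your start file
public_html/
  nodeAppApi/
    .htaccess
```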
As you can see above, we also have another folder inside the public_html directory. For
this example, we are using the main domain folder. However, if you are using a different domain
or a subdomain, place the “nodeAppApi” folder within that domain's directory; it will then use
that domain! We have created the “nodeAppApi” folder to store our .htaccess file. We do this to
separate our Node API from the React application code.
Inside the .htaccess write the following code:
RewriteEngine off
This will stop the Apache server from rewriting or redirecting any of the requests that go to the
node app.
Great! We have set up the file structure for our Node app. Next, we will create the Node app through
cPanel.
Creating a Node.js App in cPanel
Navigate your way to the “Node.js” section of cPanel. You can do it from the Main Dashboard via the
“Setup Node.js App” button. Press that button and it will send you straight there. Next, press
“Create Application”!
The next page is going to look something like this
As you can see, there are a few things we need to fill in or change in order to create the application.
Below is an explanation of what each of these sections does!
Node.js Version — Set the version of Node.js that your application requires.
Application Mode — Select either “Development” or “Production” (Recommended to select
“Production”)
Application Root — This is where your app is located in the file system. In this example it's
“nodejs/nodeApp”
Application URL — This is the domain that the app will use. For this example it is
“example.com/nodeAppApi”; this will then use the .htaccess file that we set up.
Application Start file — This is the file name of your app (e.g. server.js | app.js | index.js)
Once you have filled out all of the information, press “Create”. You will then see something that
looks like this
At the bottom you can see a button saying “Run NPM Install”. Press this and it will create the
node_modules folder for us. (You need to have a package.json inside the application root to do
this.)
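A minimal package.json sketch for reference (the name and the express dependency are placeholders, not part of this tutorial; the point is that “Run NPM Install” reads the dependencies from this file, and that “main” matches the Application Start file you chose above):

```json
{
  "name": "nodeapp",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.18.2"
  }
}
```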
The final thing before we are done is to change our ports. This took me a while to figure out, as I
kept getting EACCES errors saying that I didn’t have permission to use those ports. To fix this, we
don’t set a port ourselves; instead, we use this…
process.env.PORT
Why do we do this? Because port handling is done further upstream by the Apache server, we don’t
need to define a port; it will automatically be sorted out for us.
Testing Your App
Congratulations, you have successfully uploaded your app to a live server. However, to make sure
that your app is working correctly, you can use Postman to test it. Once you are sure it’s all working
correctly, you are all done!
I hope this was helpful for you!