Why do we need Streams in Node.js?
Streams are sequences of data made available over time. Unlike other data types such as strings or arrays, a stream might not be available all at once, and it doesn't have to fit in memory.
Many of the built-in modules in Node implement the streaming interface, such as HTTP response,
HTTP request, fs, process.stdin, etc.
Let’s see how Stream solves our slow/blocking web server problem.
Assume we need to serve a big file using a Node web server.
mkdir blog-why-node-streams
cd blog-why-node-streams
echo "" > index.js
Update index.js with the below code,
const fs = require("fs");
const http = require("http");

// Utility: generate a random big file on the fly if it does not already exist
fs.stat("big.file", function (err, stat) {
  if (err == null) {
    console.log("File exists");
  } else if (err.code === "ENOENT") {
    const file = fs.createWriteStream("./big.file");
    for (let i = 0; i <= 1e6; i++) {
      file.write(
        `Lorem ipsum dolor sit amet, consectetur adipiscing elit. Curabitur tempus id metus a
sodales. Maecenas faucibus bibendum mauris elementum ultrices. In hac habitasse platea
dictumst. Pellentesque consequat augue nec urna interdum, a sagittis arcu ornare. Duis
pulvinar odio vitae velit euismod, nec pretium nisi tempus. Lorem ipsum dolor sit amet,
consectetur adipiscing elit. Cras ante lorem, suscipit non lobortis venenatis, interdum a
dui. Donec rhoncus magna lectus, ut vestibulum eros rutrum gravida. Aenean sit amet fringilla
erat. In varius fermentum justo, in maximus sapien tempus non. Sed malesuada tempor erat eget
tristique. Pellentesque diam nulla, pharetra sed luctus nec, euismod non tortor.`
      );
    }
    console.log("big.file created");
    file.end();
  } else {
    console.log("Some other error: ", err.code);
  }
});

const server = http.createServer();

server.on("request", (req, res) => {
  fs.readFile("./big.file", (err, data) => {
    if (err) throw err;
    res.end(data);
  });
});

server.listen(8000, () => console.log("The server is running at localhost:8000"));
The above code does two things:
1. The first part of the code generates a huge (~600 MB) file. This is utility code.
2. The second part is a simple web server endpoint serving the big.file file.
Let’s run the server.
> node index.js
The server is running at localhost:8000
big.file created
After starting the node server, let’s see the memory usage using the Windows task manager. We
have ~5.8 MB of memory consumed by our server.
Node server memory consumption before serving the file
Now let’s curl the endpoint to download the file.
> curl localhost:8000
Lorem ipsum dolor sit amet, consectetur adipiscing e......
............................
.......................
Now, look at the memory consumption for the server using task manager.
Node server memory consumption while serving the file
When we run the server, it starts out with a normal amount of memory, ~5.8 MB. Then we
connected to the server. Note what happened to the memory consumed. The memory
consumption jumped to ~684 MB.
How does it work?
We basically put the whole big.file content in memory before we wrote it out to the response object.
This is very inefficient.
The solution:
The HTTP response object (res in the code above) is also a writable stream. This means that if we have a readable stream representing the content of big.file, we can simply pipe the two together and get nearly the same result without consuming ~684 MB of memory.
Node’s fs module can give us a readable stream for any file using the createReadStream method. We
can pipe that to the response object.
So, replace the request handler code with the below code snippet and measure the memory
consumption.
server.on("request", (req, res) => {
  const src = fs.createReadStream("./big.file");
  src.pipe(res);
});
Let’s run our server again,
> node index.js
The server is running at localhost:8000
big.file created
Now, let’s curl the endpoint,
> curl localhost:8000
Lorem ipsum dolor sit amet, consectetur adipiscing e......
............................
.......................
Now, look at the memory consumption for the server in the task manager.
Node server memory consumption while serving the file with streaming chunks
When we ran the server, it started out with a normal amount of memory, ~ 5.8 MB. Then we
connected to the server (curl). Note what happened to the memory consumed. The memory
consumption is just ~8 MB.
Now, what’s changed, and how is it working?
When a client asks for that big file, we stream it one chunk at a time, which means we never buffer the whole thing in memory. The memory usage stayed at around ~8 MB, and that's it.
These scenarios are not limited to an HTTP server; the same approach applies to file content manipulation, creating big files, uploading files from client to server, sending big audio or video files to a client, and so on.
Error Handling in Node.js Like a Pro
All you need to know to get started.
Photo by ThisisEngineering RAEng on Unsplash
Handling errors is one of the most important aspects of any production-grade application. Anyone can code for the success cases. Only true professionals take care of the error cases.
Today we will learn just that. Let’s dive in.
First, we have to understand that not all errors are the same. Let’s see how many types of errors
can occur in an application.
• User Generated Error
• Hardware failure
• Runtime Error
• Database Error
We will see how we can easily handle these different types of errors.
This article is part of a series where I am building an ExpressJS boilerplate from scratch. You can check that out here.
Get a basic express application
Run the following command to get a basic express application built with typescript.
git clone https://github.com/Mohammad-Faisal/express-typescript-skeleton.git
Handle not found URL errors
How do you detect when a requested URL is not served by your Express application? Say you have a URL like /users, but someone hits /user. We need to inform them that the URL they are trying to access does not exist.
That’s easy to do in ExpressJS. After you define all the routes, add the following code to catch all
unmatched routes and send back a proper error response.
app.use("*", (req: Request, res: Response) => {
  const err = Error(`Requested path ${req.path} not found`);
  res.status(404).send({
    success: false,
    message: err.message,
    stack: err.stack,
  });
});
Here we are using “*” as a wildcard to catch all routes that didn’t go through our application.
Handle all errors with a special middleware
Now we have a special middleware in Express that handles all the errors for us. We have to include
it at the end of all the routes and pass down all the errors from the top level so that this
middleware can handle them for us.
The most important thing to do is keep this middleware after all other middleware and route
definitions because otherwise, some errors will slip away.
Let’s add it to our index file.
app.use((err: Error, req: Request, res: Response, next: NextFunction) => {
  const statusCode = 500;
  res.status(statusCode).send({
    success: false,
    message: err.message,
    stack: err.stack,
  });
});
Have a look at the middleware signature. Unlike other middleware, this special middleware has an extra parameter named err, of the Error type, which comes as the first parameter.
And modify our previous code to pass down the error like the following.
app.use("*", (req: Request, res: Response, next: NextFunction) => {
  const err = Error(`Requested path ${req.path} not found`);
  next(err);
});
Now, if we hit a random URL, something like, http://localhost:3001/posta, then we will get a
proper error response with the stack.
{
  "success": false,
  "message": "Requested path / not found",
  "stack": "Error: Requested path / not found\n    at /Users/mohammadfaisal/Documents/learning/express-typescript-skeleton/src/index.ts:23:15\n"
}
Custom error object
Let’s have a closer look at the default error object provided by Node.js.
interface Error {
  name: string;
  message: string;
  stack?: string;
}
So when you are throwing an error like the following.
throw new Error("Some message");
Then you only get the name, message, and optional stack properties with it. The stack tells us where exactly the error was produced. We don't want to include it in production. We will see how to do that later.
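As a preview, one common convention for that, sketched here in plain JavaScript (the NODE_ENV check and the serializeError helper are my own illustration, not part of the boilerplate):

```javascript
// Only include the stack trace when we are NOT in production.
// NODE_ENV is a widely used convention, not an Express requirement.
const isProduction = process.env.NODE_ENV === "production";

function serializeError(err) {
  return {
    success: false,
    message: err.message,
    // spread in the stack only outside production
    ...(isProduction ? {} : { stack: err.stack }),
  };
}

const out = serializeError(new Error("boom"));
console.log(out.message); // boom
```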
But we may want to add some more information to the error object itself.
Also, we may want to differentiate between various error objects.
Let’s design a basic Custom error class for our application.
export class ApiError extends Error {
  statusCode: number;

  constructor(statusCode: number, message: string) {
    super(message);
    this.statusCode = statusCode;
    Error.captureStackTrace(this, this.constructor);
  }
}
Notice the following line.
Error.captureStackTrace(this, this.constructor);
It helps to capture the stack trace of the error from anywhere in the application.
In this simple class, we can append the statusCode as well. Let's modify our previous code like the
following.
app.use("*", (req: Request, res: Response, next: NextFunction) => {
  const err = new ApiError(404, `Requested path ${req.path} not found`);
  next(err);
});
And take advantage of the new statusCode property in the error handler middleware as well
app.use((err: ApiError, req: Request, res: Response, next: NextFunction) => {
  const statusCode = err.statusCode || 500; // <- Look here
  res.status(statusCode).send({
    success: false,
    message: err.message,
    stack: err.stack,
  });
});
Having a custom-defined error class makes your API predictable for end users. Most newbies miss this part.
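To see what the class buys us, here is a plain JavaScript sketch (stripped of the TypeScript annotations) showing that instances carry the extra statusCode and still behave like normal errors, so a handler can branch on them with instanceof:

```javascript
// Plain JS version of the ApiError class above
class ApiError extends Error {
  constructor(statusCode, message) {
    super(message);
    this.statusCode = statusCode;
    Error.captureStackTrace(this, this.constructor);
  }
}

const err = new ApiError(404, "Requested path /user not found");

// instanceof lets middleware treat known errors differently
console.log(err instanceof ApiError); // true
console.log(err instanceof Error); // true
console.log(err.statusCode); // 404
```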
Let’s handle application errors
Now let’s throw a custom error from inside our routes as well.
app.get("/protected", async (req: Request, res: Response, next: NextFunction) => {
  try {
    throw new ApiError(401, "You are not authorized to access this!"); // <- fake error
  } catch (err) {
    next(err);
  }
});
This is an artificially created situation where we need to throw an error. In real life, there are many situations where we need this kind of try/catch block to catch errors.
If we hit the following URL http://localhost:3001/protected, we will get the following response.
{
  "success": false,
  "message": "You are not authorized to access this!",
  "stack": "Some details"
}
So our error response is working correctly!
Let’s improve on this!
So now we can handle custom errors from anywhere in the application. But this requires a try/catch block everywhere and calling the next function with the error object.
This is not ideal. It will make our code look bad in no time.
Let’s create a custom wrapper function that will capture all the errors and call the next function
from a central place.
Let’s create a wrapper utility for this purpose!
import { Request, Response, NextFunction } from "express";

export const asyncWrapper = (fn: any) => (req: Request, res: Response, next: NextFunction) => {
  Promise.resolve(fn(req, res, next)).catch((err) => next(err));
};
And use it inside our router.
import { asyncWrapper } from "./utils/asyncWrapper";

app.get(
  "/protected",
  asyncWrapper(async (req: Request, res: Response) => {
    throw new ApiError(401, "You are not authorized to access this!");
  })
);
Run the code and see that we get the same result. This spares us from writing try/catch blocks and calling the next function everywhere!
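Here is a plain JavaScript sketch of how the wrapper routes a rejected promise into next(); the fake req/res/next objects are purely for demonstration:

```javascript
// Same wrapper as above, minus the TypeScript annotations
const asyncWrapper = (fn) => (req, res, next) => {
  Promise.resolve(fn(req, res, next)).catch((err) => next(err));
};

let received;

// An async handler that throws: its returned promise rejects
const handler = asyncWrapper(async () => {
  throw new Error("boom");
});

// Fake req/res/next objects, just for demonstration
handler({}, {}, (err) => {
  received = err;
  console.log("next received:", err.message);
});
```

Because async functions turn thrown errors into rejected promises, the single .catch in the wrapper is enough to funnel every handler error into Express's error middleware.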
Example of a custom error
We can fine-tune our errors to our needs. Let’s create a new error class for the not found routes.
export class NotFoundError extends ApiError {
  constructor(path: string) {
    super(404, `The requested path ${path} not found!`);
  }
}
And simplify our bad route handler.
app.use((req: Request, res: Response, next: NextFunction) => next(new NotFoundError(req.path)));
How clean is that?
Now let’s install a small package so we don’t have to write the status codes ourselves.
yarn add http-status-codes
And add the status code in a meaningful way.
import { StatusCodes } from "http-status-codes";

export class NotFoundError extends ApiError {
  constructor(path: string) {
    super(StatusCodes.NOT_FOUND, `The requested path ${path} not found!`);
  }
}
And use it inside our route like this.
app.get(
  "/protected",
  asyncWrapper(async (req: Request, res: Response) => {
    throw new ApiError(StatusCodes.UNAUTHORIZED, "You are not authorized to access this!");
  })
);
It just makes our code a bit better.
Handle programmer errors
The best way to deal with programmer errors is to restart gracefully. Place the following code at the end of your application. It will be invoked when something is not caught by the error middleware.
process.on("uncaughtException", (err: Error) => {
  console.log(err.name, err.message);
  console.log("UNCAUGHT EXCEPTION! Shutting down...");
  process.exit(1);
});
Handle unhandled promise rejections
We can log the reason for the promise rejection. These errors never make it to our Express error handler; for example, trying to access a database with the wrong password.
process.on("unhandledRejection", (reason: Error, promise: Promise<any>) => {
  console.log(reason.name, reason.message);
  console.log("UNHANDLED REJECTION! Shutting down...");
  process.exit(1);
});
Further improvement
Let’s create a new ErrorHandler class to handle the errors in a central place.
import { Request, Response, NextFunction } from "express";
import { ApiError } from "./ApiError";

export default class ErrorHandler {
  static handle = () => {
    return async (err: ApiError, req: Request, res: Response, next: NextFunction) => {
      const statusCode = err.statusCode || 500;
      res.status(statusCode).send({
        success: false,
        message: err.message,
        rawErrors: err.rawErrors ?? [], // optional extra details field you can add to ApiError
        stack: err.stack,
      });
    };
  };
}
This is just a simple error handler middleware. You can add your own custom logic here. Then use it inside our index file.
app.use(ErrorHandler.handle());
That’s how we can separate the concerns by respecting the single responsibility principle of
SOLID.
I hope you learned something new today. Have a wonderful rest of your day!
Want to Connect? You can reach out to me via LinkedIn or my personal website.
Is it time to ditch Svelte, React, and VUE?
Almost every modern web application built these days starts with an enormous clusterf*ck of
JavaScript on the front-end which literally replaces the entire browser viewport with a JS-rendered virtual DOM and consumes JSON via a REST API which is built as a separate (but tightly
coupled) application. Sounds kinda crazy, right? Spoiler alert: That’s because it is totally f*cking
crazy!
If you’re building a Single Page Application (SPA) like maybe the next Figma or Trello, then one of
those tools might fit the bill perfectly. But if you’re building a Multi Page Application (MPA) like a
typical e-commerce website or even something like Gmail, I’m here to tell you that using a SPA
framework is likely adding far more complexity than it’s worth.
The trouble with SPA architecture
Using the server only as a “dumb” API means we can no longer easily rely on it to maintain our
application state. So we’ve moved all that state management to the client, inspiring a whole new
category of frameworks like Redux and MobX. And since we can no longer use the server for basic
routing, new libraries like React Router and Page.js were created to simulate the natural routing
functionality we used to get for free.
Authentication used to be trivially easy to implement with server-side sessions. With SPA
architecture, we typically use JSON Web Tokens which are far more difficult to implement (and far
easier to implement badly). Even basic form submission can no longer rely on the browser’s
standard implementation of HTML to submit form fields based on their name attributes. We're now
required to bind those values to a JS object and manage and submit that object "manually".
In other words, all this stuff we used to get for free, now requires quite a lot of extra work. But is it
worth it?
How did we get here?
In the olden days, the web was simple. Your browser sent an HTTP request, the server sent a new
document, and your browser dutifully rendered it to the viewport blowing away whatever was
there before. This was a bit clunky, though. It meant that if you wanted to update just one little
piece of the page, you had to re-render the entire thing. Then JQuery came along which made it
relatively simple to update only parts of the page using AJAX without a full-page refresh and to
build web applications which felt far more interactive and responsive — more “app-like”. But it
involved a lot of imperative JavaScript and was hard to maintain. If you wanted to make
something moderately complex, it didn’t take long before you had an unmaintainable rat’s nest of
JQuery.
Then along came Angular, followed by React and friends with a radical new approach: What if we
re-think the whole concept of a “front-end” not as a DOM sprinkled with JavaScript — but rather
as a JavaScript application which ultimately renders a DOM. Let’s turn it upside down! And it
worked brilliantly if what you wanted to build was a Single Page App. Sure you lost a lot
of the simplicity of a basic client/server architecture with HTML on the wire. But it freed you to
build a truly app-like frontend experience. This new approach was exciting — almost intoxicating.
And before long, every greenfield project looked like a good candidate for SPA.
But user expectations for a modern, reactive website have also increased dramatically over the past
5 or 10 years. So building a “web 1.0” style application with full page reloads just won’t cut it
anymore.
Modern UI without SPA
So how can we build a modern MPA website without using a SPA-frontend / REST-backend
architecture, without writing 80,000 lines of crufty JQuery, and without a janky full-page refresh
on every click like it was built circa 1999?
There’s a new crop of libraries designed to provide modern interactivity while working with the
grain of HTML and HTTP — both of which start with HT for Hypertext. This is key. The web was
designed with the idea of Hypertext going up and down the wire. Not JSON. New libraries like
Hotwire, HTMX, and Unpoly allow you to swap out chunks of your DOM in a declarative way by
adding HTML attributes or tags to your markup — without writing any JavaScript yourself. For
example, an “Add to Cart” button could send a request to the server which modifies the server-side state of the cart items in your server-side session, then send back two chunks of DOM which
replace only the #cart-sidebar and #cart-icon-badge on the page. This can be done quite elegantly
and with beautiful CSS animations too.
When we send HTML down the wire as God (aka, Tim Berners-Lee) intended, it turns out there’s a
ton of stupid shit we no longer need. Things like client-side state management — the DOM is the
client-side state. Client-side routers? Don’t be ridiculous. JSON Web Tokens? Server sessions are
tried and true — and so much easier to implement. Our database queries become very easy too
since we’re writing all our routes on the server-side where we already have secure, direct access to
the database.
I wrote a simple ExpressJS-based framework to implement this style of architecture which you can
check out here: https://www.sanejs.dev
Ruby on Rails Shines in 2022 (no, really!)
Like most modern web developers, I’ve long shunned Ruby on Rails as a legacy framework
designed to build a style of monolithic web application which is no longer even relevant. But here’s
the thing: If we’re using something like Hotwire or HTMX on the frontend, we can use anything
we want for the backend. Since we’re working with the grain of HTML, ideally, we want the very
best system for creating server-rendered templates. There really aren’t that many full-featured,
batteries-included frameworks out there. The big ones are Rails, Django, and Laravel. There are a
few others up and coming such as Phoenix based on Elixir and Buffalo based on Go. But Rails has
a huge community, is very polished, and is honestly just a joy to work with.
But crucially, the latest Rails 7.0 released last December includes the incredible new Hotwire
library for frontend interactivity. Hotwire can be used with or without Rails — but it’s designed to
pair perfectly with Rails development and is baked-in by default. So believe it or not, in 2022, Rails
may now be the perfect full-stack framework for building post-jamstack era MPA web applications
with modern interactivity that works with the grain of HTML rather than replacing it wholesale
with the clusterf*ck of JS we’ve come to expect on the front-end plus a whole ‘nuther app for the
backend API.
Wrapping it all up
If the ultimate goal is to build modern MPA websites in a way that is fast, organized, and
maintainable, then it’s worth seriously considering whether SPA/Jamstack architecture is really
the right tool for the job. With the arrival of modern DOM-swapping interactivity libraries like
Hotwire, HTMX, and Unpoly, we finally have a real, practical alternative to SPA which allows us to
create modern, elegant interfaces that work with the grain of HTML meaning we don’t have to
reinvent the wheel for basic things like application state management and form submission. So if
we’re going back to server-rendered templates, then maybe it’s time to take another look at the
reigning all-time champion of web frameworks, Ruby on Rails. Especially now that the brand new
7.0 release comes with Hotwire baked-in, Rails just might be the very best solution in 2022 for
building modern Multi Page Applications.
Kafka with Node.Js
Kafka is one of the most efficient and popular event/message streaming platforms for building microservices-based software applications. Kafka can work as the backbone of your microservices so that different microservices can communicate with each other in an asynchronous fashion.
Integration with Node.js — We are going to create a simple Node.js application that consumes data from the Wikimedia event stream API, puts it inside a Kafka topic using a producer, then reads the data from Kafka and stores it inside Elasticsearch. Below is the HLD of the sample application.
HLD of project
To work with Kafka we will use the npm module KafkaJS, and to listen to the event stream we will use the eventsource module. So let's get started.
First, we will create a node project and install the npm modules.
npm i eventsource kafkajs
Our project is ready. Now let's start by listening to the Wikimedia stream.
var EventSource = require('eventsource');
var es = new EventSource('https://stream.wikimedia.org/v2/stream/recentchange');

es.on('message', async (data: any, err: any) => {
  const payload = JSON.parse(data.data);
  console.log('Received Data: ', payload);
});
The above code will listen to the Wikimedia stream and log every received message to the console. The next step is to send the event to Kafka, so let's write our producer.
import { Kafka, Producer } from 'kafkajs';

export const getProducer = async () => {
  const kafka = new Kafka({
    clientId: 'producer-client',
    brokers: ['localhost:9092'],
  });
  const producer: Producer = kafka.producer();
  await producer.connect();
  return producer;
};
We can use the above producer to send a message to Kafka with the following code, which we will use inside our main file.
await producer.send({
  topic: 'wikimedia.recentchanges',
  messages: [{ value: JSON.stringify(payload) }],
});
With the producer set up, it's time to build our consumer, which will consume the messages published on the 'wikimedia.recentchanges' topic.
import { ConsumerSubscribeTopic, Kafka } from 'kafkajs';

export const getConsumer = async () => {
  const kafka = new Kafka({
    clientId: 'consumer-client',
    brokers: ['localhost:9092'],
  });
  const consumer = kafka.consumer({ groupId: 'my-group' });
  const subscription: ConsumerSubscribeTopic = {
    topic: 'wikimedia.recentchanges',
    fromBeginning: false,
  };
  await consumer.connect();
  await consumer.subscribe(subscription);
  return consumer;
};
Now that our producer and consumer are ready, let's create an index file to run both.
import { EachMessagePayload } from 'kafkajs';
import { getConsumer } from './consumer';
import { getProducer } from './producer';
var EventSource = require('eventsource');

var es = new EventSource('https://stream.wikimedia.org/v2/stream/recentchange');

const start = async () => {
  const producer = await getProducer();
  es.on('message', async (data: any, err: any) => {
    const payload = JSON.parse(data.data);
    console.log('Received Data: ', payload);
    // Publish the message to Kafka
    await producer.send({
      topic: 'wikimedia.recentchanges',
      messages: [{ value: JSON.stringify(payload) }],
    });
  });

  // Start the consumer
  const consumer = await getConsumer();
  await consumer.run({
    eachMessage: async (messagePayload: EachMessagePayload) => {
      const { topic, partition, message } = messagePayload;
      const prefix = `${topic}[${partition} | ${message.offset}] / ${message.timestamp}`;
      console.log(prefix, message);
    },
  });
};

start();
If you run the main file, you will see that we can successfully send and consume messages to and from Kafka. Now let's save the messages inside Elasticsearch.
To work with Elasticsearch we will use the npm module @elastic/elasticsearch. Let's change our main file to save the Wikimedia changes to Elasticsearch.
import { EachMessagePayload } from 'kafkajs';
const { Client } = require('@elastic/elasticsearch');
import { getConsumer } from './consumer';
import { getProducer } from './producer';
var EventSource = require('eventsource');

var es = new EventSource('https://stream.wikimedia.org/v2/stream/recentchange');

const start = async () => {
  const producer = await getProducer();
  es.on('message', async (data: any, err: any) => {
    const payload = JSON.parse(data.data);
    console.log('Received Data: ', payload);
    // Publish the message to Kafka
    await producer.send({
      topic: 'wikimedia.recentchanges',
      messages: [{ value: JSON.stringify(payload) }],
    });
  });

  // Create the Elasticsearch client
  const elasticClient = new Client({
    node: 'http://localhost:9200',
  });

  // Start the consumer
  const consumer = await getConsumer();
  await consumer.run({
    eachMessage: async (messagePayload: EachMessagePayload) => {
      const { topic, partition, message } = messagePayload;
      console.log(message);
      // Index the message into Elasticsearch
      await elasticClient.index({
        index: 'wikimedia_recentchanges',
        document: message,
      });
    },
  });
};

start();
Now if you run the main file and open Kibana, you will see the documents getting indexed into the Elasticsearch index 'wikimedia_recentchanges'.
The complete source code of the project is available on Github.
Thanks for reading! I hope you find this article useful.
Node JS Event Loop And Custom Event
Node JS executes in a single process on a single thread, but its performance is very high because of its event-driven design and use of callback functions. This article will introduce the Node JS event loop mechanism and show you how to create custom events in Node JS by example.
1. Node JS Event Loop Introduction
1. When you execute an asynchronous method provided by the Node JS framework, you provide a callback function to the method. You can then continue executing other JS code without having to wait for the method to complete.
2. The callback function is an observer of the event queue. When the task is complete, the Node JS framework adds the event back to the event queue, and the callback function observing that event is invoked to process the result.
3. An example of this design pattern is the Node JS HTTP web server. The HTTP web server starts and listens for requests. When a request arrives, it creates an event in the queue and then waits for the next request.
4. When the previous request process is complete, the callback function will be invoked to send the
response back to the client.
5. This pattern gives very high performance because the web server just waits for requests instead of blocking on time- and resource-consuming IO operations.
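The callback flow described above can be demonstrated in a few lines: the synchronous code runs to completion first, and the timer callback is processed later by the event loop.

```javascript
const order = [];

// The timer callback is queued and handled by the event loop later,
// even with a 0 ms delay
setTimeout(() => {
  order.push("timer callback");
  console.log(order.join(" -> ")); // synchronous code -> timer callback
}, 0);

// This line runs immediately, before the timer callback
order.push("synchronous code");
console.log(order[0]); // synchronous code
```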
2. Create And Listen Custom Event In Node JS
1. If you find that the Node JS built-in events are not enough for you, you can create a custom event and bind a process function to it by following the steps below.
2. Include Node JS events built-in module.
var events = require('events');
3. Create an EventEmitter object using the above module.
var event_emitter = new events.EventEmitter();
4. Create a JavaScript function as a callback that will be triggered when the custom event happens.
var data_receive_handler = function(){
  console.log('Data received.');
}
5. Register the custom event with the callback function. The callback function will observe the custom event.
event_emitter.on('receive_data', data_receive_handler);
6. Trigger the custom event; the callback function will then be executed.
event_emitter.emit('receive_data');
3. Node JS Custom Event Example
1. custom-event.js
// Include node js prebuilt events module.
var events = require('events');

// Create event emitter object.
var event_emitter = new events.EventEmitter();

// Create connect event process function.
var connect_handler = function connected() {
  console.log('Connect success.');
  // Trigger receive_data event
  event_emitter.emit('receive_data');
}

// Bind custom event connect with connect_handler process function.
event_emitter.on('connect', connect_handler);

// Create receive data event process function.
var data_receive_handler = function(){
  console.log('Data received.');
}

// Bind custom event receive_data with data_receive_handler function.
event_emitter.on('receive_data', data_receive_handler);

// Trigger connect event.
event_emitter.emit('connect');

console.log("Code exit");
2. Run the below command to execute the above JavaScript source code in Node JS.
$ node custom-event.js
3. Below is the above example's output.
Connect success.
Data received.
Code exit
So this was all in "Node JS Event Loop And Custom Event". I will be coming up with more such articles in the future.
Till then, follow me for more!