
React & Algorithms Viva Material

•
useState: This hook allows you to make components stateful. It returns a pair of values: the current state
and a function to update it. You can pass an initial state as an argument to useState, and use the updater
function to change the state with a new value. When the state changes, the component re-renders. For
example:
function Counter({ initialCount }) {
  // declare a state variable called count
  const [count, setCount] = useState(initialCount);
  return (
    <>
      Count: {count}
      <button onClick={() => setCount(initialCount)}>Reset</button>
      <button onClick={() => setCount(prevCount => prevCount - 1)}>-</button>
      <button onClick={() => setCount(prevCount => prevCount + 1)}>+</button>
    </>
  );
}
•
useEffect: This hook allows you to perform side effects in your component, such as fetching data,
subscribing to events, or updating the document title. You can pass a function as an argument to useEffect,
and it will run after every render by default. You can also pass a second argument, an array of dependencies,
to tell React when to re-run the effect. If the array is empty, the effect will only run once after the initial
render. You can also return a cleanup function from the effect to perform any necessary cleanup actions,
such as unsubscribing from events or clearing timers. For example:
function Timer() {
  const [count, setCount] = useState(0);
  useEffect(() => {
    // set up a timer that increments the count every second
    const timer = setTimeout(() => {
      setCount(prevCount => prevCount + 1);
    }, 1000);
    // return a cleanup function that clears the timer
    return () => {
      clearTimeout(timer);
    };
  }, [count]); // only re-run the effect if count changes
  return <h1>I've rendered {count} times!</h1>;
}
•
useContext: This hook allows you to access the value of a React context in your component. A context is
a way to share data across the component tree without passing props down manually. You can create a
context with React.createContext, and provide its value with a Context.Provider component. Then, you
can use useContext with the context object as an argument to get the current value of the context in your
component. Any changes to the context value will trigger a re-render of your component. For example:
// create a context for user data
const UserContext = React.createContext();
function App() {
  // provide the user data as the value of the context
  return (
    <UserContext.Provider value={{ name: "Alice", age: 25 }}>
      <Profile />
    </UserContext.Provider>
  );
}
function Profile() {
  // access the user data from the context
  const user = useContext(UserContext);
  return (
    <div>
      <p>Name: {user.name}</p>
      <p>Age: {user.age}</p>
    </div>
  );
}
•
useRef: This hook allows you to create and access a mutable ref object in your component. A ref object
has a current property that can hold any value, and it persists for the entire lifetime of the component. You
can use useRef with an initial value as an argument, and it will return the same ref object on every render.
You can use ref objects for various purposes, such as storing a reference to a DOM node, keeping track of
a previous value, or creating an instance variable. Mutating the current property of a ref object does not
cause a re-render. For example:
function TextInput() {
  // create a ref object for the input element
  const inputRef = useRef(null);
  // focus the input element when the button is clicked
  function handleClick() {
    inputRef.current.focus();
  }
  return (
    <>
      <input ref={inputRef} type="text" />
      <button onClick={handleClick}>Focus</button>
    </>
  );
}
•
useReducer: This hook allows you to manage complex state logic in your component using a reducer
function. A reducer is a function that takes the current state and an action, and returns a new state based
on the action type and payload. You can use useReducer with a reducer function and an initial state as
arguments, and it will return a pair of values: the current state and a dispatch function. You can use the
dispatch function to send actions to the reducer, and update the state accordingly. When the state changes,
the component re-renders. For example:
// define a reducer function for a counter component
function counterReducer(state, action) {
  switch (action.type) {
    case "increment":
      return { count: state.count + 1 };
    case "decrement":
      return { count: state.count - 1 };
    case "reset":
      return { count: action.payload };
    default:
      return state;
  }
}
function Counter({ initialCount }) {
  // use the reducer function and the initial count as arguments
  const [state, dispatch] = useReducer(counterReducer, { count: initialCount });
  return (
    <>
      Count: {state.count}
      <button onClick={() => dispatch({ type: "reset", payload: initialCount })}>
        Reset
      </button>
      <button onClick={() => dispatch({ type: "decrement" })}>-</button>
      <button onClick={() => dispatch({ type: "increment" })}>+</button>
    </>
  );
}
•
Prop drilling: This is a term for passing data from a parent component to a child component through
multiple intermediate components that do not need or use the data. This can make the code less readable,
less reusable, and more prone to errors. For example, if you have a component tree like this:
<App>
  <Header>
    <Nav>
      <User />
    </Nav>
  </Header>
  <Main />
</App>
And you want to pass the user data from App to User, you would have to pass it as a prop through Header and
Nav, even though they do not use it. This is prop drilling.
•
Context API: This is a feature of React that allows you to share data across the component tree without
passing props manually. You can create a context object with React.createContext, and provide its value
with a Context.Provider component. Then, you can use React.useContext hook to access the current value
of the context in any component that needs it. Any changes to the context value will trigger a re-render of
the components that use it. For example, if you want to share the user data across the app, you can create
a UserContext like this:
// create a context object for user data
const UserContext = React.createContext();
function App() {
  // provide the user data as the value of the context
  return (
    <UserContext.Provider value={{ name: "Alice", age: 25 }}>
      <Header />
      <Main />
    </UserContext.Provider>
  );
}
function Header() {
  // no need to pass user data as a prop
  return (
    <header>
      <Nav />
    </header>
  );
}
function Nav() {
  // no need to pass user data as a prop
  return (
    <nav>
      <User />
    </nav>
  );
}
function User() {
  // access the user data from the context
  const user = React.useContext(UserContext);
  return (
    <div>
      <p>Name: {user.name}</p>
      <p>Age: {user.age}</p>
    </div>
  );
}
This way, you avoid prop drilling and make your code more concise and maintainable.
Note that useEffect does not store any data or state by itself. It only uses the data or state that is available in
your component scope, either from props or from useState. You can, however, update state from inside an
effect by calling state setter functions or dispatching actions there.
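The scope-capture behavior described above can be sketched in plain JavaScript: a function created during a render sees the variables of that render, just as any inner function sees the scope it was defined in. Here, a hypothetical makeRender helper stands in for one component render:

```javascript
// A plain-JavaScript sketch of the closure behavior behind useEffect.
// `makeRender` is a hypothetical stand-in for one component render:
// `count` is fixed for that "render", and the returned effect captures it.
function makeRender(count) {
  return function effect() {
    return `effect sees count = ${count}`;
  };
}

const effectFromFirstRender = makeRender(0);
const effectFromSecondRender = makeRender(1);

// Each effect still sees the value from its own render:
console.log(effectFromFirstRender());  // "effect sees count = 0"
console.log(effectFromSecondRender()); // "effect sees count = 1"
```

This is also why an effect with a missing dependency can read a stale value: it keeps using the variables captured at the render that created it.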
Redux is a JavaScript library for managing the state of your application. State is the data that changes over
time in your app, such as user input, server responses, or UI interactions. Redux helps you write applications
that behave consistently, run in different environments, and are easy to test and debug.
The core idea of Redux is that the whole state of your app is stored in a single object called the store. The
store is read-only, and the only way to change it is to dispatch actions, which are plain objects that describe
what happened in your app. For example, an action could be { type: "ADD_TODO", text: "Learn Redux" }.
Actions are handled by pure functions called reducers, which take the previous state and an action, and return
a new state. For example, a reducer could be:
function todos(state = [], action) {
  switch (action.type) {
    case "ADD_TODO":
      return [...state, { text: action.text, completed: false }];
    case "TOGGLE_TODO":
      return state.map((todo, index) => {
        if (index === action.index) {
          return { ...todo, completed: !todo.completed };
        }
        return todo;
      });
    default:
      return state;
  }
}
Reducers are composed together to form the root reducer, which defines the shape of the store. The store is
created by passing the root reducer to the createStore function from Redux. For example:
import { createStore } from "redux";
import rootReducer from "./reducers";
const store = createStore(rootReducer);
The store provides methods to access the current state, dispatch actions, and subscribe to changes. For example:
// get the current state
console.log(store.getState());
// dispatch an action
store.dispatch({ type: "ADD_TODO", text: "Learn Redux" });
// subscribe to changes
const unsubscribe = store.subscribe(() => {
  console.log(store.getState());
});
// unsubscribe from changes
unsubscribe();
Redux can be used with any UI library or framework, such as React, Angular, or Vue. However, to connect
Redux with your UI components, you may need some additional libraries or tools. For example, to use Redux
with React, you can use React-Redux, which provides bindings and hooks to access the store and dispatch
actions from your components. For example:
import React from "react";
import { useSelector, useDispatch } from "react-redux";
function TodoList() {
  // get the todos state from the store
  const todos = useSelector((state) => state.todos);
  // get the dispatch function from the store
  const dispatch = useDispatch();
  // define a function to handle click events on todo items
  function handleToggle(index) {
    // dispatch an action to toggle the todo's completed status
    dispatch({ type: "TOGGLE_TODO", index });
  }
  // return some JSX that renders the todo list
  return (
    <ul>
      {todos.map((todo, index) => (
        <li
          key={index}
          onClick={() => handleToggle(index)}
          style={{
            textDecoration: todo.completed ? "line-through" : "none",
          }}
        >
          {todo.text}
        </li>
      ))}
    </ul>
  );
}
Redux is a small library with a simple, limited API designed to be a predictable container for application state.
It operates in a fashion similar to a reducing function, a functional programming concept. However, Redux
also has a large ecosystem of addons and tools that can help you with various aspects of state management,
such as debugging, middleware, async actions, data fetching, routing, persistence, and more.
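As an illustration of the middleware idea mentioned above, here is a minimal hand-rolled sketch in plain JavaScript. The createStore and logger below are simplified stand-ins for illustration, not the real Redux implementation; real middleware uses the same store => next => action signature shown here:

```javascript
// A minimal hand-rolled sketch of the Redux middleware idea (not the real
// Redux implementation). A middleware wraps dispatch so it can run extra
// logic, such as logging, before the action reaches the reducer.
function createStore(reducer, middleware) {
  let state = reducer(undefined, { type: "@@INIT" });
  const store = {
    getState: () => state,
    dispatch: (action) => {
      state = reducer(state, action);
    },
  };
  if (middleware) {
    // wrap the original dispatch with the middleware
    store.dispatch = middleware(store)(store.dispatch);
  }
  return store;
}

// a middleware that records every dispatched action type
const log = [];
const logger = (store) => (next) => (action) => {
  log.push(action.type);
  return next(action);
};

const store = createStore(
  (state = 0, action) => (action.type === "INC" ? state + 1 : state),
  logger
);
store.dispatch({ type: "INC" });
store.dispatch({ type: "INC" });
console.log(store.getState()); // 2
console.log(log); // ["INC", "INC"]
```

The nested-function shape lets several middleware be chained: each one receives the next dispatch in the chain as next.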
Modals and conditional rendering are some common UI patterns that you can implement in React. Here is a
brief explanation of each one:
•
Modals: A modal is a component that displays some content on top of the main page, usually with a
backdrop that prevents interaction with the rest of the page. You can use the useState hook to store a
boolean value that indicates whether the modal is open or closed, and the setModalOpen function to toggle
it. You can also use some CSS styles to position and style the modal and the backdrop. For example:
import React, { useState } from "react";
function Modal() {
  // declare a state variable called modalOpen and initialize it with false
  const [modalOpen, setModalOpen] = useState(false);
  // define a function that toggles the modalOpen value
  function handleToggle() {
    setModalOpen(!modalOpen);
  }
  // render a button to open or close the modal, and the modal itself if modalOpen is true
  return (
    <div>
      <button onClick={handleToggle}>{modalOpen ? "Close" : "Open"} Modal</button>
      {modalOpen && (
        <div className="modal">
          <div className="modal-content">
            <h1>Modal</h1>
            <p>This is a modal.</p>
            <button onClick={handleToggle}>Close</button>
          </div>
          <div className="modal-backdrop" onClick={handleToggle}></div>
        </div>
      )}
    </div>
  );
}
•
Conditional rendering: Conditional rendering is a technique that allows you to render different elements
or components based on some condition. You can use JavaScript expressions, such as if statements, ternary
operators, logical operators, or switch statements, to implement conditional rendering in React. For
example:
import React from "react";
function Greeting({ name }) {
  // return some JSX that renders a different greeting based on the name prop
  return (
    <div>
      {name === "Alice" ? (
        <h1>Hello, Alice!</h1>
      ) : name === "Bob" ? (
        <h1>Hi, Bob!</h1>
      ) : (
        <h1>Welcome, stranger!</h1>
      )}
    </div>
  );
}
Props and props.children are two important concepts in React that allow you to pass data and elements from
a parent component to a child component. Here is a brief explanation of each one:
•
Props: Props are short for properties, and they are the way to pass data from a parent component to a child
component as attributes. For example, if you have a parent component called App and a child component
called Greeting, you can pass the name prop from App to Greeting like this:
function App() {
  return <Greeting name="Alice" />;
}
function Greeting(props) {
  return <h1>Hello, {props.name}!</h1>;
}
In this example, the name prop is a string, but props can be any valid JavaScript value, such as numbers,
arrays, objects, functions, etc. You can also pass multiple props to a child component by adding more attributes.
For example:
function App() {
  return <Greeting name="Alice" age={25} />;
}
function Greeting(props) {
  return (
    <div>
      <h1>Hello, {props.name}!</h1>
      <p>You are {props.age} years old.</p>
    </div>
  );
}
Props are read-only, which means that you cannot modify them inside the child component. If you want to
change the value of a prop based on some event or logic, you need to use state or callbacks instead.
•
props.children: props.children is a special prop that represents the content between the opening and
closing tags of a component. For example, if you have a parent component called App and a child
component called Card, you can pass some elements as props.children from App to Card like this:
function App() {
  return (
    <Card>
      <h1>Title</h1>
      <p>Content</p>
    </Card>
  );
}
function Card(props) {
  return <div className="card">{props.children}</div>;
}
In this example, the props.children value is an array of two elements: an h1 and a p. However, props.children
can also be a single element, a string, a number, or undefined, depending on what you pass between the tags.
You can use props.children to create generic or reusable components that can accept any content from the
parent component. For example, you can use props.children to create a modal component that can display
different content based on the context.
You can also manipulate or transform props.children using some helper methods from React.Children API,
such as React.Children.map, React.Children.forEach, React.Children.count, etc. For example, you can use
React.Children.map to add some styles or props to each child element like this:
function List(props) {
  return (
    <ul>
      {React.Children.map(props.children, (child) => (
        <li style={{ color: "red" }}>{child}</li>
      ))}
    </ul>
  );
}
Ternary operators are a way of writing concise conditional expressions in JavaScript. They can be used in
React to render different elements or components based on some condition. Here is a brief explanation of how
to use ternary operators in React:
•
Syntax: The syntax of a ternary operator is:
condition ? expressionIfTrue : expressionIfFalse
This means that if the condition is true, the expressionIfTrue will be evaluated and returned. Otherwise, the
expressionIfFalse will be evaluated and returned.
•
Example: Suppose you have a component called Greeting that takes a name prop and renders a different
message based on whether the name is Alice or not. You can use a ternary operator to implement this logic
like this:
function Greeting({ name }) {
  return (
    <div>
      {name === "Alice" ? (
        <h1>Hello, Alice!</h1>
      ) : (
        <h1>Welcome, stranger!</h1>
      )}
    </div>
  );
}
In this example, the ternary operator checks if the name prop is equal to “Alice”. If it is, it returns an h1
element with “Hello, Alice!”. If it is not, it returns an h1 element with “Welcome, stranger!”.
•
Advantages: Ternary operators can make your code more concise and readable than using if-else
statements or logical operators. They can also be nested or chained to handle multiple conditions. For
example:
function Status({ online, busy }) {
  return (
    <div>
      {online ? (
        busy ? (
          <p>You are online and busy.</p>
        ) : (
          <p>You are online and free.</p>
        )
      ) : (
        <p>You are offline.</p>
      )}
    </div>
  );
}
In this example, the ternary operator checks if the online prop is true. If it is, it checks if the busy prop is true.
If it is, it returns a p element with “You are online and busy.”. If it is not, it returns a p element with “You are
online and free.”. If the online prop is false, it returns a p element with “You are offline.”.
•
Disadvantages: Ternary operators can also make your code less readable and maintainable if you use them
excessively or without proper formatting. They can also introduce bugs or unexpected behavior if you
forget to use parentheses or brackets to group your expressions. For example:
function Greeting({ name }) {
return (
<div>
{name === "Alice" ? <h1>Hello, Alice!</h1> : name === "Bob" ? <h1>Hi, Bob!</h1> :
<h1>Welcome, stranger!</h1>}
</div>
);
}
In this example, the ternary operator is used to check three conditions: if the name is Alice, Bob, or something
else. However, this code is hard to read and understand because of the lack of spacing and indentation. It can
also be confusing because of the order of evaluation: the first condition is checked first, then the second
condition is checked only if the first one is false, then the third condition is checked only if both the first and
second ones are false. A better way to write this code would be:
function Greeting({ name }) {
  return (
    <div>
      {name === "Alice" ? (
        <h1>Hello, Alice!</h1>
      ) : name === "Bob" ? (
        <h1>Hi, Bob!</h1>
      ) : (
        <h1>Welcome, stranger!</h1>
      )}
    </div>
  );
}
This code is more readable and understandable because of the proper spacing and indentation. It also makes
clear the order of evaluation by using parentheses to group the expressions.
One reason why you may need to pass data from child to parent is when you want to use the data from the
child component in the parent component or share it with other sibling components. For example, if you have
a parent component that displays a list of items and a child component that allows the user to add a new item,
you may want to pass the new item from the child to the parent so that the parent can update the list and render
it accordingly.
Another reason why you may need to pass data from child to parent is when you want to notify the parent
component about some events or actions that happen in the child component. For example, if you have a
parent component that controls the visibility of a modal dialog and a child component that contains a button
to close the dialog, you may want to pass a signal from the child to the parent so that the parent can hide the
dialog.
There are several techniques for passing data from child to parent in React, depending on your use case and
preference. The most common and recommended technique is using callback functions as props. This
technique involves defining a function in the parent component that accepts the data from the child component
as an argument and updates the state or performs some actions accordingly. Then, passing this function as a
prop to the child component and calling it from the child component with the data as an argument whenever
the data changes or an event occurs. This way, you can establish two-way communication between the parent
and child components using props.
Another technique for passing data from child to parent is using context. This technique involves creating a
context object that contains the data and a setter function for updating it. Then, wrapping the parent and child
components with a context provider that passes the context object as a value prop. Finally, accessing and
updating the context object from any component using the useContext hook or the Consumer component. This
way, you can avoid prop drilling and pass data across multiple levels of components without explicitly passing
props.
A third technique for passing data from child to parent is using event emitters. This technique involves creating
an event emitter object that can emit and listen to custom events. Then, passing this object as a prop to both
the parent and child components and using it to communicate between them. For example, you can emit an
event with some data from the child component using emitter.emit(eventName, data) and listen for it in the
parent component using emitter.on(eventName, callback). This way, you can create loosely coupled
communication between components without relying on props or state.
Here is an example of how to access the value of a variable in a child component from a parent component in
React using a callback function. Imagine you have a parent component that displays a greeting message based
on the name entered in a child component. Here is how you can do it:
•
In the parent component, create a state variable called name that will store the name entered in the child
component. Also, create a function called handleNameChange that will update the state variable with the
value passed as an argument. Then, pass the function as a prop to the child component. For example:
import React, { useState } from "react";
import ChildComponent from "./ChildComponent";
function ParentComponent() {
  const [name, setName] = useState("");
  const handleNameChange = (value) => {
    setName(value);
  };
  return (
    <div>
      <h1>Parent Component</h1>
      <p>Hello, {name || "there"}!</p>
      <ChildComponent onNameChange={handleNameChange} />
    </div>
  );
}
export default ParentComponent;
•
In the child component, call the function prop with the value of the input field as an argument whenever
the input field changes. For example:
import React from "react";
function ChildComponent({ onNameChange }) {
  const handleChange = (event) => {
    onNameChange(event.target.value);
  };
  return (
    <div>
      <h2>Child Component</h2>
      <input type="text" placeholder="Enter your name" onChange={handleChange} />
    </div>
  );
}
export default ChildComponent;
This way, you can access the value of the input field in the child component from the parent component and
use it to display a greeting message.
Here is an example of the event emitter technique for passing data from child to parent in React:
•
Suppose you have a parent component that displays a list of tasks and a child component that contains a
form to add a new task. You want to pass the new task from the child to the parent so that the parent can
update the list and render it accordingly.
To use the event emitter technique, you need to do the following steps:
1. Import the EventEmitter class from "events" and create an event emitter object. For example,
import EventEmitter from "events"; const emitter = new EventEmitter();
2. In the parent component, create a state variable that will store the list of tasks and a function that
will update the state variable with the new task passed as an argument. For example, const [tasks,
setTasks] = useState([]); const handleNewTask = (task) => { setTasks((prev) => [...prev, task]); };
3. In the parent component, use the useEffect hook to subscribe to the "add" event from the child
component, and return a cleanup function that removes the listener so it is not registered again on
every render. For example, useEffect(() => { emitter.on("add", handleNewTask); return () =>
emitter.off("add", handleNewTask); }, []);
4. In the parent component, pass the event emitter object as a prop to the child component. For example,
<ChildComponent emitter={emitter} />
5. In the child component, use the event emitter object prop to emit the "add" event with the new task
as an argument whenever the form is submitted. For example, const handleSubmit = (event) =>
{ event.preventDefault(); emitter.emit("add", newTask); };
•
Here is the code for the parent component:
import React, { useState, useEffect } from "react";
import EventEmitter from "events"; // You need to install this package
import ChildComponent from "./ChildComponent";
// Create an event emitter object
const emitter = new EventEmitter();
function ParentComponent() {
  const [tasks, setTasks] = useState([]);
  // Define a function that will update the tasks state with the new task.
  // The functional update form avoids reading a stale tasks value.
  const handleNewTask = (task) => {
    setTasks((prev) => [...prev, task]);
  };
  // Listen to the "add" event from the child component, and remove the
  // listener on unmount so it is not registered more than once
  useEffect(() => {
    emitter.on("add", handleNewTask);
    return () => {
      emitter.off("add", handleNewTask);
    };
  }, []);
  return (
    <div>
      <h1>Parent Component</h1>
      <ul>
        {tasks.map((task, index) => (
          <li key={index}>{task}</li>
        ))}
      </ul>
      <ChildComponent emitter={emitter} />
    </div>
  );
}
export default ParentComponent;
•
Here is the code for the child component:
import React, { useState } from "react";
function ChildComponent({ emitter }) {
  const [newTask, setNewTask] = useState("");
  // Define a function that will emit the "add" event with the new task
  const handleSubmit = (event) => {
    event.preventDefault();
    emitter.emit("add", newTask);
    setNewTask("");
  };
  // Define a function that will update the new task state with the input value
  const handleChange = (event) => {
    setNewTask(event.target.value);
  };
  return (
    <div>
      <h2>Child Component</h2>
      <form onSubmit={handleSubmit}>
        <input type="text" value={newTask} onChange={handleChange} />
        <button type="submit">Add</button>
      </form>
    </div>
  );
}
export default ChildComponent;
A callback function is a function that is passed as an argument to another function, which then invokes it at
the appropriate time, often after some other work has finished. Callback functions are useful for handling
asynchronous operations, such as fetching data from a server, waiting for user input, or performing
time-consuming calculations. Here is an example of a callback function in JavaScript:
// Define a function that takes a callback as an argument
function sayHello(callback) {
// Simulate a delay of 3 seconds
setTimeout(() => {
// Display a message
console.log("Hello, world!");
// Call the callback function
callback();
}, 3000);
}
// Define another function that will be used as a callback
function sayGoodbye() {
// Display another message
console.log("Goodbye, world!");
}
// Call the first function and pass the second function as a callback
sayHello(sayGoodbye);
In this example, the sayHello function takes a callback function as an argument and uses the setTimeout
method to delay its execution by 3 seconds. After the delay, it displays “Hello, world!” and then calls the
callback function. The sayGoodbye function is passed as a callback to the sayHello function and displays
“Goodbye, world!” after the sayHello function has finished.
React.memo, useMemo and useCallback are React features that help to optimize the performance of React
applications by avoiding unnecessary re-rendering of components or recalculating of values.
React.memo is a higher-order component that memoizes the output of a functional component and only
re-renders it if the props have changed. This is useful when you have a component that renders the same result
given the same props, and you want to avoid wasting time and resources on rendering it again and again. For
example, you can wrap a component that displays a list of items with React.memo, and it will only re-render
if the items prop changes.
useMemo is a hook that memoizes the result of a function and only recomputes it if the dependencies have
changed. This is useful when you have a function that performs a costly computation or returns a complex
object, and you want to avoid repeating it on every render. For example, you can use useMemo to calculate
the sum of an array of numbers, and it will only recalculate it if the array changes.
useCallback is a hook that memoizes the instance of a function and only recreates it if the dependencies have
changed. This is useful when you have a function that you want to pass as a prop to a child component, and
you want to prevent unnecessary re-rendering of the child component due to reference inequality. For example,
you can use useCallback to wrap a function that handles a click event, and it will only create a new function
instance if the dependencies change.
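The caching idea behind useMemo and useCallback can be sketched in plain JavaScript as a function that remembers its last dependency array and result. This is a simplified illustration, not React's actual implementation; memoizeOne is a hypothetical helper name:

```javascript
// A simplified, plain-JavaScript illustration of the caching idea behind
// useMemo/useCallback -- not React's actual implementation.
function memoizeOne(compute) {
  let lastDeps = null;
  let lastResult;
  return function (deps) {
    // recompute only if any dependency changed (shallow comparison),
    // mirroring how a hook compares its dependency array between renders
    const changed =
      lastDeps === null || deps.some((d, i) => d !== lastDeps[i]);
    if (changed) {
      lastResult = compute(...deps);
      lastDeps = deps;
    }
    return lastResult;
  };
}

let calls = 0;
const sum = memoizeOne((a, b) => {
  calls++;
  return a + b;
});

console.log(sum([2, 3])); // 5 (computed)
console.log(sum([2, 3])); // 5 (cached, compute not called again)
console.log(sum([2, 4])); // 6 (a dependency changed, so recomputed)
console.log(calls); // 2
```

The shallow `!==` comparison is also why passing a freshly created object or array as a dependency defeats the caching: its reference changes on every render.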
The advantages of using React.memo, useMemo and useCallback are:
•
They can improve the performance of your React application by reducing the number of re-rendering or
recalculating operations.
•
They can help you manage the state and props of your components more efficiently and avoid unwanted
side effects.
•
They can make your code more readable and maintainable by separating the logic and presentation of your
components.
The disadvantages of using React.memo, useMemo and useCallback are:
•
They can introduce additional complexity and overhead to your code, especially if you use them
excessively or incorrectly.
•
They can make your code harder to debug or test, as they may hide some errors or bugs in your logic or
dependencies.
•
They can cause memory leaks or performance issues if you use them with improper dependencies or
cleanup functions.
Here are the examples:
•
React.memo: Suppose you have a component that displays a user’s name and age, and you want to avoid
re-rendering it unless the user’s data changes. You can wrap the component with React.memo, and it will
only re-render if the user prop changes. For example:
import React from "react";
// Define a component that displays a user's name and age
function UserComponent({ user }) {
  return (
    <div>
      <p>Name: {user.name}</p>
      <p>Age: {user.age}</p>
    </div>
  );
}
// Wrap the component with React.memo
export default React.memo(UserComponent);
•
useMemo: Suppose you have a component that displays the factorial of a number, and you want to avoid
recalculating it on every render. You can use useMemo to memoize the result of the factorial function, and
it will only recalculate it if the number changes. For example:
import React, { useMemo } from "react";
// Define a function that calculates the factorial of a number
function factorial(n) {
  let result = 1;
  for (let i = 1; i <= n; i++) {
    result *= i;
  }
  return result;
}
// Define a component that displays the factorial of a number
function FactorialComponent({ number }) {
  // Use useMemo to memoize the result of the factorial function
  const result = useMemo(() => factorial(number), [number]);
  return (
    <div>
      <p>The factorial of {number} is {result}</p>
    </div>
  );
}
export default FactorialComponent;
•
useCallback: Suppose you have a component that displays a counter and a button to increment it, and you
want to pass the increment function as a prop to another component. You can use useCallback to memoize
the instance of the increment function, and it will only create a new instance if the counter changes. For
example:
import React, { useState, useCallback } from "react";
import AnotherComponent from "./AnotherComponent";
// Define a component that displays a counter and a button to increment it
function CounterComponent() {
  const [counter, setCounter] = useState(0);
  // Use useCallback to memoize the instance of the increment function
  const increment = useCallback(() => {
    setCounter(counter + 1);
  }, [counter]);
  return (
    <div>
      <p>The counter is: {counter}</p>
      <AnotherComponent onClick={increment} />
    </div>
  );
}
export default CounterComponent;
We use useState and useEffect hooks in React because they allow us to manage the state and side effects of
our components in a declarative and functional way. Unlike in Python or other languages, where we can
update our variables anytime we want by reassigning them, in React we have to follow some rules and
conventions to ensure that our components render correctly and efficiently.
The advantages of using useState and useEffect hooks in React are:
•
They make our code more readable and maintainable by separating the logic and presentation of our
components.
•
They enable us to use more of React’s features, such as context, reducers, custom hooks, etc., without
writing classes or using lifecycle methods.
•
They help us avoid common bugs and errors, such as stale closures, memory leaks, infinite loops, etc., by
following the rules of hooks and using dependencies correctly.
•
They improve the performance of our React applications by reducing the number of re-rendering or
recalculating operations.
The disadvantages of using useState and useEffect hooks in React are:
•
They can introduce additional complexity and overhead to our code, especially if we use them excessively or incorrectly.
•
They can make our code harder to debug or test, as they may hide some errors or bugs in our logic or dependencies.
•
They can cause unexpected behavior or side effects if we forget to use dependencies or cleanup functions properly.
Some of the key reasons why we cannot reassign variables in React like in other languages are:
•
React uses a declarative and functional approach to manage the state and props of components. This means that we should not mutate or reassign the variables that hold the state or props, but rather use the updater functions returned by useState (or a reducer's dispatch) to update them in a controlled way. This ensures that the components render correctly and efficiently, and avoids common bugs and errors.
•
React follows the rules of hooks, which are a set of conventions and best practices for using hooks in React. One of the rules is that we should not call hooks inside loops, conditions, or nested functions, but only at the top level of our component functions. This ensures that the hooks are called in the same order on every render, and avoids breaking React's dependency tracking mechanism.
•
React relies on the immutability and reference equality of variables to optimize the performance of our React applications. This means that we should not change the values or references of the variables that hold the state or props, but rather create new copies or instances of them when we need to update them. This allows React to use shallow comparison and memoization techniques to reduce the number of re-rendering or recalculating operations.
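The immutable-update style these rules describe can be seen in plain JavaScript, outside React (a small sketch with made-up data): instead of mutating an array of objects in place, we build new copies, so the old and new values can be told apart by reference, which is exactly what shallow comparison relies on.

```javascript
// Immutable updates: build new copies instead of mutating in place.
const todos = [{ id: 1, done: false }];

// Mutation (what React asks us to avoid) would look like:
//   todos[0].done = true;
// The reference stays the same, so a shallow comparison sees no change.

// Immutable version: a new array, and a new object for the changed item.
const updated = todos.map(todo =>
  todo.id === 1 ? { ...todo, done: true } : todo
);

console.log(updated[0].done);   // true
console.log(todos[0].done);     // false (original untouched)
console.log(updated === todos); // false (new reference, so the change is detectable)
```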
There are many reasons why we use React over other JavaScript based frameworks. React is a popular and
powerful library that allows us to create user interfaces with ease and flexibility. Here are some of the main
advantages of using React:
•
React uses a virtual DOM, which is a representation of the real DOM in memory. This allows React to
update only the parts of the UI that have changed, instead of re-rendering the whole page. This improves
the performance, efficiency, and user experience of web applications.
•
React follows a component-based architecture, which means that it divides the UI into reusable and independent pieces of code. This makes the code more modular, maintainable, and testable. It also enables code reuse and sharing across different projects and platforms.
•
React supports server-side rendering, which means that it can render the UI on the server before sending it to the browser. This enhances the SEO (search engine optimization) and accessibility of web applications, as well as reduces the initial loading time.
•
React has a rich ecosystem of tools and libraries that extend its functionality and features. For example, React Router for routing, Redux for state management, Next.js for static site generation, Material UI for UI components, etc. These tools and libraries make web development easier and faster with React.
There is no definitive answer to why we should not use React over other JavaScript-based frameworks, as different frameworks may suit different needs and preferences. However, some possible reasons why React may not be the best choice for some projects are:
•
React is constantly evolving and changing, which means that developers have to keep up with the latest updates and features. This can be challenging and time-consuming, especially if there is a lack of documentation or support for older versions.
•
React does not provide a complete solution for web development, but rather a library for building user interfaces. This means that developers have to choose and integrate other technologies and tools, such as routing, state management, testing, etc. This can add complexity and overhead to the project, and require more skills and knowledge.
•
React uses JSX, which is a syntax extension that allows writing HTML-like code in JavaScript. This can be a barrier for some developers who are not familiar with JSX or prefer to separate HTML and JavaScript. JSX also requires a transpiler, such as Babel, to convert it into plain JavaScript that browsers can understand.
•
React relies on the immutability and reference equality of variables to optimize the performance of web applications. This means that developers have to follow some rules and conventions to avoid mutating or reassigning the variables that hold the state or props of components. This can be confusing or inconvenient for some developers who are used to working with mutable variables in other languages.
DOM and virtual DOM are concepts related to web development, especially in the React framework. DOM
stands for Document Object Model, which is a representation of the structure and content of a web page.
Virtual DOM is a lightweight copy of the DOM that is created and updated in memory, and synced with the
real DOM by a library such as ReactDOM. The purpose of using a virtual DOM is to improve the performance
and efficiency of web applications, as it allows updating only the parts of the UI that have changed, instead
of re-rendering the whole page.
The three dots (…) in JavaScript are called the spread operator or the rest operator, depending on how and
where you use them. They allow you to expand an iterable (such as an array or a string) or an object's
properties into individual elements, or to collect multiple elements into a single array.
Some common use cases for the three dots in JavaScript are:
•
Copying an array or an object without mutating the original one.
•
Concatenating or merging multiple arrays or objects into a new one.
•
Passing an array as arguments to a function.
•
Destructuring an array or an object and assigning the remaining elements to a variable.
•
Separating a string into individual characters.
Here are some examples of how to use the three dots in JavaScript:
// Copying an array with the spread operator
let fruits = ["apple", "banana", "cherry"];
let copy = [...fruits]; // creates a new array with the same elements
console.log(copy); // ["apple", "banana", "cherry"]
// Merging two arrays with the spread operator
let numbers = [1, 2, 3];
let letters = ["a", "b", "c"];
let combined = [...numbers, ...letters]; // creates a new array with all elements
console.log(combined); // [1, 2, 3, "a", "b", "c"]
// Passing an array as arguments to a function with the spread operator
let scores = [10, 20, 30];
let max = Math.max(...scores); // equivalent to Math.max(10, 20, 30)
console.log(max); // 30
// Destructuring an array with the rest operator
let colors = ["red", "green", "blue", "yellow"];
let [first, second, ...rest] = colors; // assigns the first two elements to variables and the rest to an array
console.log(first); // "red"
console.log(second); // "green"
console.log(rest); // ["blue", "yellow"]
// Separating a string into characters with the spread operator
let name = "Alice";
let chars = [...name]; // creates an array of characters
console.log(chars); // ["A", "l", "i", "c", "e"]
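The use cases above mention objects as well as arrays; the same three-dot syntax copies and merges objects (a small sketch with made-up object names):

```javascript
// Copying an object with the spread operator
let user = { name: "Alice", age: 30 };
let copyUser = { ...user }; // shallow copy: a new object with the same properties
console.log(copyUser); // { name: "Alice", age: 30 }

// Merging two objects with the spread operator
let defaults = { theme: "light", fontSize: 14 };
let prefs = { theme: "dark" };
let settings = { ...defaults, ...prefs }; // later spreads win on conflicting keys
console.log(settings); // { theme: "dark", fontSize: 14 }
```

Note that the copy is shallow: nested objects are shared between the original and the copy.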
There are many JavaScript frameworks that are popular and widely used for web development. Here are some
brief introductions to a few of them, how they work, and their core advantages and disadvantages.
•
React: React is an open-source JavaScript library developed by Facebook and used to build highly
responsive user interfaces. It is declarative and component-based, meaning you can reuse components to
create complex UIs in a short time. React uses a virtual DOM, which is a representation of the real DOM
in memory. This allows React to update only the parts of the UI that have changed, instead of re-rendering
the whole page. This improves the performance, efficiency, and user experience of web applications.
o Advantages: React is easy to learn and use, as it has a simple syntax and a large community of
developers. It also has a rich ecosystem of tools and libraries that extend its functionality and
features, such as React Router, Redux, Next.js, Material UI, etc.
o Disadvantages: React does not provide a complete solution for web development, but rather a library
for building user interfaces. This means that developers have to choose and integrate other
technologies and tools, such as routing, state management, testing, etc. This can add complexity and
overhead to the project, and require more skills and knowledge.
•
Angular: Angular is an open-source JavaScript framework developed by Google and used to build single-page applications (SPAs). It is based on TypeScript, which is a superset of JavaScript that adds static
typing and other features. Angular uses a model-view-controller (MVC) architecture, which separates the
data, logic, and presentation layers of the application. Angular also uses dependency injection, which is a
technique that allows injecting dependencies into components or services without hard-coding them.
o Advantages: Angular is a powerful and comprehensive framework that provides everything you
need to build complex web applications, such as routing, forms, animations, testing, etc. It also has
a strong support from Google and a large community of developers. It also supports server-side
rendering, which enhances the SEO and accessibility of web applications.
o Disadvantages: Angular is a complex and opinionated framework that has a steep learning curve
and requires familiarity with TypeScript and other concepts. It also has a large size and can be slow
to load or run on some devices or browsers. It also has frequent updates and changes that can be
hard to keep up with.
•
Vue: Vue is an open-source JavaScript framework used to build user interfaces and SPAs. It is progressive
and adaptable, meaning you can use it as a simple library or as a full-featured framework depending on
your needs. Vue uses a template syntax that allows writing HTML-like code in JavaScript. Vue also uses
a reactive data system, which automatically updates the UI when the data changes without requiring any
additional code.
o Advantages: Vue is lightweight and fast, as it has a small size and a virtual DOM implementation.
It is also easy to learn and use, as it has a simple syntax and a clear documentation. It also has a
flexible and modular structure that allows integrating with other libraries or tools easily. It also has
a vibrant community of developers who contribute to its improvement and support.
o Disadvantages: Vue is relatively new and less mature than other frameworks, which means it may
lack some features or stability. It also has less support from big companies or organizations than
other frameworks. It also has some compatibility issues with older browsers or devices.
•
Svelte: Svelte is a JavaScript framework that compiles the components into vanilla JavaScript code at
build time, instead of running them in the browser at runtime. This means that Svelte does not need a
virtual DOM or a framework library to update the UI, resulting in faster and smaller web applications.
Svelte also supports reactive programming, which automatically updates the UI when the data changes
without requiring any additional code.
•
Nuxt.js: Nuxt.js is a JavaScript framework based on Vue.js that simplifies the development of universal
or server-side rendered web applications. Nuxt.js provides a set of features and conventions that help
developers create SEO-friendly and performant web applications with minimal configuration. Some of
these features include routing, code splitting, prefetching, caching, etc.
•
Meteor: Meteor is a JavaScript framework that enables full-stack web development using the same
language and codebase for both the front-end and the back-end. Meteor also supports real-time data
synchronization, which means that any changes made to the data on the server are instantly reflected
on the UI without requiring any page reloads or extra code. Meteor also integrates with popular
JavaScript libraries and frameworks, such as React, Angular, Vue, etc.
Bubble Sort: A simple sorting algorithm that repeatedly compares adjacent elements in an array and swaps
them if they are in the wrong order. It has a worst-case and average time complexity of O(n^2), where n is the
number of elements in the array, and a best-case time complexity of O(n) when the array is already sorted. It
has a space complexity of O(1), meaning it does not require any extra space to sort the array. It is a stable
sorting algorithm, meaning it preserves the relative order of equal elements in the sorted array.
Example: Suppose we want to sort the following array in ascending order using bubble sort:
[5, 3, 8, 2, 1, 4]
We start by comparing the first two elements, 5 and 3. Since 5 is greater than 3, we swap them. The array
becomes:
[3, 5, 8, 2, 1, 4]
We then compare the next two elements, 5 and 8. Since 5 is less than 8, we do not swap them. The array
remains:
[3, 5, 8, 2, 1, 4]
We continue this process until we reach the end of the array. The array after the first pass is:
[3, 5, 2, 1, 4, 8]
Notice that the largest element, 8, has bubbled up to the last position. We repeat this process for the remaining
n-1 elements, ignoring the last element as it is already in its correct position. The array after the second pass
is:
[3, 2, 1, 4, 5, 8]
The array after the third pass is:
[2, 1, 3, 4, 5, 8]
The array after the fourth pass is:
[1, 2, 3, 4, 5, 8]
The array is now sorted and we do not need to perform any more passes. The total number of comparisons
made is (n-1) + (n-2) + … + 1 = n(n-1)/2 = O(n^2). The total number of swaps made is at most n(n-1)/2 =
O(n^2).
Here is a possible Python implementation of bubble sort:
def bubble_sort(arr):
    # loop over passes; after pass i, the largest i+1 elements are in place
    for i in range(len(arr) - 1):
        # flag to check if any swap occurred in this pass
        swapped = False
        # compare each adjacent pair in the unsorted portion
        for j in range(1, len(arr) - i):
            if arr[j] < arr[j - 1]:
                # swap them if they are in the wrong order
                arr[j], arr[j - 1] = arr[j - 1], arr[j]
                # set the flag to True
                swapped = True
        # if no swap occurred in this pass, the array is already sorted
        if not swapped:
            break
    # return the sorted array
    return arr
Selection Sort: A simple sorting algorithm that repeatedly finds the minimum element in the unsorted portion
of the array and swaps it with the first unsorted element. It has a worst-case and average time complexity of
O(n^2), where n is the number of elements in the array. It has a space complexity of O(1), meaning it does
not require any extra space to sort the array. It is not a stable sorting algorithm, meaning it may change the
relative order of equal elements in the sorted array.
Example: Suppose we want to sort the following array in ascending order using selection sort:
[5, 3, 8, 2, 1, 4]
We start by finding the minimum element in the entire array. The minimum element is 1, which is at index 4.
We swap it with the first element at index 0. The array becomes:
[1, 3, 8, 2, 5, 4]
We then find the minimum element in the remaining unsorted portion of the array (from index 1 to index 5).
The minimum element is 2, which is at index 3. We swap it with the first unsorted element at index 1. The
array becomes:
[1, 2, 8, 3, 5, 4]
We continue this process until we reach the end of the array. The array after the third swap is:
[1, 2, 3, 8, 5, 4]
The array after the fourth swap is:
[1, 2, 3, 4, 8, 5]
The array after the fifth and final swap is:
[1, 2, 3, 4, 5, 8]
The array is now sorted and we do not need to perform any more swaps. The total number of comparisons
made is (n-1) + (n-2) + … + 1 = n(n-1)/2 = O(n^2). The total number of swaps made is at most n-1 = O(n).
Here is a possible Python implementation of selection sort:
def selection_sort(arr):
    # loop through all elements except the last one
    for i in range(len(arr) - 1):
        # find the index of the minimum element in the unsorted portion of the array
        min_index = i
        for j in range(i + 1, len(arr)):
            if arr[j] < arr[min_index]:
                min_index = j
        # swap the minimum element with the first unsorted element
        arr[i], arr[min_index] = arr[min_index], arr[i]
    # return the sorted array
    return arr
Insertion Sort: A simple sorting algorithm that builds up a sorted subarray from left to right by inserting each
new element into its correct position in the subarray. It has a worst-case and average time complexity of
O(n^2), where n is the number of elements in the array, and a best-case time complexity of O(n) when the
array is already sorted. It has a space complexity of O(1), meaning it does not require any extra space to sort
the array. It is a stable sorting algorithm, meaning it preserves the relative order of equal elements in the sorted
array.
Example: Suppose we want to sort the following array in ascending order using insertion sort:
[5, 3, 8, 2, 1, 4]
We start by assuming that the first element at index 0 is already sorted. We then move on to the second element
at index 1, which is 3. We compare it with the previous element at index 0, which is 5. Since 3 is less than 5,
we shift 5 to the right and insert 3 in its place. The array becomes:
[3, 5, 8, 2, 1, 4]
We then move on to the third element at index 2, which is 8. We compare it with the previous element at
index 1, which is 5. Since 8 is greater than 5, we do not need to shift or insert anything. The array remains:
[3, 5, 8, 2, 1, 4]
We continue this process until we reach the end of the array. The array after inserting the fourth element at
index 3, which is 2, is:
[2, 3, 5, 8, 1, 4]
The array after inserting the fifth element at index 4, which is 1, is:
[1, 2, 3, 5, 8, 4]
The array after inserting the sixth and final element at index 5, which is 4, is:
[1, 2, 3, 4, 5, 8]
The array is now sorted and we do not need to perform any more insertions. The total number of comparisons
made is at most (n-1) + (n-2) + … + 1 = n(n-1)/2 = O(n^2). The total number of shifts and insertions made is
also at most n(n-1)/2 = O(n^2).
Here is a possible Python implementation of insertion sort:
def insertion_sort(arr):
    # loop through all elements except the first one
    for i in range(1, len(arr)):
        # store the current element in a temporary variable
        temp = arr[i]
        # initialize a pointer to the previous element
        j = i - 1
        # loop through the sorted subarray from right to left
        while j >= 0 and arr[j] > temp:
            # shift the larger element to the right
            arr[j + 1] = arr[j]
            # decrement the pointer
            j -= 1
        # insert the current element into its correct position in the subarray
        arr[j + 1] = temp
    # return the sorted array
    return arr
Counting Sort: A non-comparison-based sorting algorithm that counts the number of occurrences of each
unique element in the array. The count is stored in an auxiliary array and the sorting is done by mapping the
count as an index of the auxiliary array. It has a worst-case and average time complexity of O(n+k), where n
is the number of elements in the array and k is the range of the key values. It has a space complexity of O(n+k),
meaning it requires an extra array of size n+k to store the count and the output. It is a stable sorting algorithm,
meaning it preserves the relative order of equal elements in the sorted array.
Example: Suppose we want to sort the following array in ascending order using counting sort:
[1, 4, 1, 2, 7, 5, 2]
We assume that the range of the key values is from 0 to 9. We create an auxiliary array of size 10 to store the
count of each key value. We initialize the count array with zeros. The count array is:
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
We then loop through the input array and increment the count of each element by one. For example, when we
encounter the first element 1, we increase the count at index 1 by one. The count array becomes:
[0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
We continue this process until we finish counting all the elements in the input array. The final count array is:
[0, 2, 2, 0, 1, 1, 0, 1, 0, 0]
The count array tells us how many times each element appears in the input array. For example, there are two
occurrences of 1, two occurrences of 2, one occurrence of 4, and so on.
We then modify the count array such that each element at each index stores the sum of previous counts. This
gives us the cumulative count of each element. For example, the count at index 2 is now 4, which means there
are four elements that are less than or equal to 2 in the input array. The modified count array is:
[0, 2, 4, 4, 5, 6, 6, 7, 7, 7]
The modified count array indicates the position of each element in the output array. For example, the count at
index 4 is now 5, which means that element with key value 4 should be placed at index 5-1 = 4 in the output
array.
We then create an output array of size n to store the sorted elements. We loop through the input array from
right to left and find the index of each element in the count array. We then place that element at that index in
the output array and decrement its count by one. For example, when we encounter the last element in the input
array which is 2, we find its index in the count array which is 4. We place it at index 4-1 =3 in the output array
and decrease its count by one. The output array becomes:
[_, _, _, 2, _, _, _]
The count array becomes:
[0, 2, 3, 4, 5, 6, 6, 7, 7, 7]
We repeat this process until we finish placing all the elements in their correct positions in the output array.
The final output array is:
[1, 1, 2, 2, 4, 5, 7]
The final count array, after each placement has decremented its element's count, is:
[0, 0, 2, 4, 4, 5, 6, 6, 7, 7]
The output array is now sorted and we do not need to use the count array anymore. The total number of loops
performed is n+k = O(n+k). The total number of extra space used is n+k = O(n+k).
Here is a possible Python implementation of counting sort:
def counting_sort(arr):
    # find the maximum key value in arr
    k = max(arr)
    # create a count array of size k+1 and initialize it with zeros
    count = [0] * (k + 1)
    # create an output array of the same size as arr
    output = [0] * len(arr)
    # loop through arr and increment the count of each element
    for x in arr:
        count[x] += 1
    # modify the count array so that each element stores the sum of previous counts
    for i in range(1, k + 1):
        count[i] += count[i - 1]
    # loop through arr from right to left; the count array gives each element's
    # position in the output array, so place the element there and decrement
    # its count by one
    for x in reversed(arr):
        output[count[x] - 1] = x
        count[x] -= 1
    # return the sorted output array
    return output
Linear Search: A simple search algorithm that finds an element in a list by searching the element sequentially
until the element is found or the end of the list is reached. It has a worst-case and average time complexity of
O(n), where n is the number of elements in the list, and a best-case time complexity of O(1) when the element
is the first one in the list. It does not require any extra space or any prior sorting of the list. It is not very
efficient for large or unsorted lists.
Example: Suppose we want to find the element 7 in the following list:
[5, 3, 8, 2, 1, 4]
We start by comparing the first element, 5, with the target element, 7. Since they are not equal, we move on
to the next element, 3. We compare 3 with 7 and find that they are not equal either. We continue this process
until we reach the fifth element, 1. We compare 1 with 7 and find that they are not equal as well. We move
on to the last element, 4. We compare 4 with 7 and find that they are not equal too. Since we have reached the
end of the list and have not found the target element, we return -1 to indicate that the element is not present
in the list.
Here is a possible Python implementation of linear search:
def linear_search(lst, target):
    # loop through all elements in the list
    for i in range(len(lst)):
        # compare each element with the target
        if lst[i] == target:
            # return the index if found
            return i
    # return -1 if not found
    return -1
Binary Search: A more efficient search algorithm that finds an element in a sorted list by repeatedly dividing
the list into two halves and comparing the middle element with the target element. If the middle element is
equal to the target element, it returns its index. If the middle element is greater than the target element, it
discards the right half of the list and repeats the process on the left half. If the middle element is less than the
target element, it discards the left half of the list and repeats the process on the right half. It has a worst-case
and average time complexity of O(log n), where n is the number of elements in the list, and a best-case time
complexity of O(1) when the element is the middle one in the list. It requires an extra space of O(1) to store
variables and a prior sorting of the list. It is very efficient for large and sorted lists.
Example: Suppose we want to find the element 7 in the following sorted list:
[1, 2, 3, 4, 5, 8]
We start by finding the middle element of the list, which is at index (0 + 5) / 2 = 2. The middle element is 3.
We compare 3 with 7 and find that 3 is less than 7. This means that 7 cannot be in the left half of the list, so
we discard it and focus on the right half. The right half of the list is:
[4, 5, 8]
We repeat the process on this sublist. We find its middle element at index (3 + 5) / 2 = 4. The middle element
is 5. We compare 5 with 7 and find that 5 is less than 7. This means that 7 cannot be in this sublist either, so
we discard it and focus on its right half. The right half of this sublist is:
[8]
We repeat the process on this single-element sublist. We find its middle element at index (5 + 5) / 2 = 5. The
middle element is 8. We compare 8 with 7 and find that 8 is greater than 7, so we would discard the right
half, leaving an empty search range. We conclude that 7 is not present in the list and return -1 to indicate that.
Here is a possible Python implementation of binary search:
def binary_search(lst, target):
    # initialize the left and right pointers
    left = 0
    right = len(lst) - 1
    # loop until the pointers cross or the target is found
    while left <= right:
        # find the middle element of the current sublist
        mid = (left + right) // 2
        # compare it with the target
        if lst[mid] == target:
            # return the index if found
            return mid
        elif lst[mid] > target:
            # discard the right half if the middle element is greater than the target
            right = mid - 1
        else:
            # discard the left half if the middle element is less than the target
            left = mid + 1
    # return -1 if not found
    return -1
Divide and Conquer Approach: A divide and conquer approach is a strategy of solving a large problem by
breaking it into smaller sub-problems, solving the sub-problems recursively, and combining their solutions to
get the final solution of the original problem. It is an algorithm design paradigm that can be applied to many
types of problems, such as sorting, searching, multiplying, parsing, etc. It has some advantages over other
approaches, such as:
•
It can reduce the time complexity of some problems by exploiting their structure or properties.
•
It can simplify the implementation of some algorithms by dividing them into smaller and easier steps.
•
It can parallelize some algorithms by distributing the sub-problems among different processors or machines.
However, it also has some disadvantages, such as:
•
It may increase the space complexity of some algorithms by requiring extra memory to store intermediate results or recursive calls.
•
It may introduce some overhead or complexity in combining the sub-solutions or handling the base cases.
•
It may not be suitable for some problems that do not have a clear way of dividing them into sub-problems or combining their sub-solutions.
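As a minimal sketch of the pattern (in JavaScript, with a made-up function name), finding the maximum element of an array by divide and conquer: divide the array in half, solve each half recursively, and combine the two answers with Math.max.

```javascript
// Divide and conquer: maximum of a non-empty array.
function maxDivideConquer(arr, lo = 0, hi = arr.length - 1) {
  if (lo === hi) return arr[lo];           // base case: one element
  const mid = Math.floor((lo + hi) / 2);   // divide the range in half
  const leftMax = maxDivideConquer(arr, lo, mid);      // conquer the left half
  const rightMax = maxDivideConquer(arr, mid + 1, hi); // conquer the right half
  return Math.max(leftMax, rightMax);      // combine the sub-solutions
}

console.log(maxDivideConquer([5, 3, 8, 2, 1, 4])); // 8
```

The recursion tree here has the same shape as merge sort's, only the combine step is a single comparison instead of a merge.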
Merge Sort: A sorting algorithm that divides an array into two halves, recursively sorts each half, and then
merges them into a sorted array. It is based on the divide and conquer approach, where a large problem is
broken down into smaller sub-problems, solved recursively, and combined to get the final solution. It has a
time complexity of O(n log n) and a space complexity of O(n), where n is the number of elements in the array.
It is a stable sorting algorithm, meaning it preserves the relative order of equal elements in the sorted array.
Example: Suppose we want to sort the following array in ascending order using merge sort:
[38, 27, 43, 3, 9, 82, 10]
We start by dividing the array into two halves:
[38, 27, 43] [3, 9, 82, 10]
We then recursively sort each half using merge sort. To sort the left half [38, 27, 43], we divide it again into
two halves:
[38] [27, 43]
We sort each half recursively. The left half [38] is already sorted as it has only one element. The right half
[27, 43] is divided again into two halves:
[27] [43]
Both halves are already sorted as they have only one element each. We then merge them into a sorted array:
[27, 43]
We then merge the sorted left half [38] and the sorted right half [27, 43] into a sorted array:
[27, 38, 43]
We have now sorted the left half of the original array. We repeat the same process for the right half [3, 9, 82,
10]. We divide it into two halves:
[3, 9] [82, 10]
We sort each half recursively. The left half [3, 9] is divided again into two halves:
[3] [9]
Both halves are already sorted as they have only one element each. We then merge them into a sorted array:
[3, 9]
The right half [82, 10] is divided again into two halves:
[82] [10]
Both halves are already sorted as they have only one element each. We then merge them into a sorted array:
[10, 82]
We then merge the sorted left half [3, 9] and the sorted right half [10, 82] into a sorted array:
[3, 9, 10, 82]
We have now sorted the right half of the original array. We then merge the sorted left half [27, 38, 43] and
the sorted right half [3, 9, 10, 82] into a sorted array:
[3, 9, 10, 27, 38, 43, 82]
We have now sorted the original array using merge sort.
Here is a possible Python implementation of merge sort:
def merge_sort(arr):
    # base case: if the array has one or zero elements, it is already sorted
    if len(arr) <= 1:
        return arr
    # recursive case: divide the array into two halves and sort them recursively
    mid = len(arr) // 2  # find the middle index
    left = merge_sort(arr[:mid])  # sort the left half
    right = merge_sort(arr[mid:])  # sort the right half
    # merge the sorted halves into a sorted array
    return merge(left, right)

def merge(left, right):
    # initialize an empty array to store the merged result
    result = []
    # initialize two pointers to track the indices of the left and right subarrays
    i = j = 0
    # loop until one of the subarrays is exhausted
    while i < len(left) and j < len(right):
        # compare the current elements of the left and right subarrays
        if left[i] <= right[j]:
            # append the smaller element to the result and increment its pointer
            result.append(left[i])
            i += 1
        else:
            # append the smaller element to the result and increment its pointer
            result.append(right[j])
            j += 1
    # append the remaining elements of the non-exhausted subarray to the result
    result.extend(left[i:])
    result.extend(right[j:])
    # return the merged result
    return result
Maximum Subarray Problem: The maximum subarray problem is the task of finding the contiguous subarray with the largest sum within a given
one-dimensional array A[1…n] of numbers. Formally, the task is to find indices i and j with 1 <= i <= j <= n, such that the sum A[i] + A[i+1] +
… + A[j] is as large as possible. The subarray can be empty, in which case its sum is zero. Each number in the input array A can be positive,
negative, or zero.
For example, for the array of values [-2, 1, -3, 4, -1, 2, 1, -5, 4], the contiguous subarray with the largest sum is [4, -1, 2, 1], with sum 6.
Some properties of this problem are:
•
If the array contains all non-negative numbers, then the problem is trivial; the maximum subarray is the entire array.
•
If the array contains all non-positive numbers, then the solution is either the empty subarray or any subarray of size one containing the maximal value of the array.
•
Several different subarrays may have the same maximum sum.
This problem can be solved using several different algorithmic techniques, including brute force, divide and conquer, dynamic programming, and
reduction to shortest paths. A simple and efficient algorithm known as Kadane’s algorithm solves it in linear time and constant space.
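Before the linear-time solution, a brute-force sketch (the function name is ours) that sums every contiguous subarray in O(n^2) time, treating the empty subarray as sum 0:

```python
def max_subarray_brute_force(arr):
    # the empty subarray has sum zero, so the answer is at least 0
    best = 0
    for i in range(len(arr)):
        running = 0
        # extend the subarray starting at index i one element at a time
        for j in range(i, len(arr)):
            running += arr[j]
            best = max(best, running)
    return best

# max_subarray_brute_force([-2, 1, -3, 4, -1, 2, 1, -5, 4]) → 6
```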
Example: Suppose we want to find the maximum subarray in the following array:
[-2, -3, 4, -1, -2, 1, 5, -3]
We use Kadane’s algorithm to solve this problem. The idea is to keep track of two variables: the current maximum sum ending at each position in
the array (curr_max), and the global maximum sum seen so far (max_so_far). We initialize both variables to zero. We then loop through each
element in the array and update these variables as follows:
• We add the current element to curr_max. If curr_max becomes negative after adding the current element, we reset it to zero. This means that we discard any negative-sum subarray and start a new subarray from the next element.
• We compare curr_max with max_so_far. If curr_max is greater than max_so_far, we update max_so_far to curr_max. This means that we have found a new subarray with a larger sum than any previous subarray.
At the end of the loop, max_so_far will contain the maximum subarray sum. Here is how these variables change for each element in the array:
Element   curr_max   max_so_far
-2        0          0
-3        0          0
4         4          4
-1        3          4
-2        1          4
1         2          4
5         7          7
-3        4          7
The final value of max_so_far is 7, which is the maximum subarray sum. The corresponding subarray is [4, -1, -2, 1, 5].
Here is a possible Python implementation of Kadane’s algorithm:
def max_subarray(arr):
    # initialize curr_max and max_so_far to zero
    curr_max = max_so_far = 0
    # loop through each element in arr
    for x in arr:
        # add x to curr_max
        curr_max += x
        # if curr_max becomes negative after adding x, reset it to zero
        if curr_max < 0:
            curr_max = 0
        # compare curr_max with max_so_far and update if larger
        if curr_max > max_so_far:
            max_so_far = curr_max
    # return max_so_far as the maximum subarray sum
    return max_so_far
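The example above also names the maximum subarray itself ([4, -1, -2, 1, 5]), which max_subarray does not report. A sketch of a variant (function and variable names are ours) that additionally tracks the subarray's bounds:

```python
def max_subarray_with_indices(arr):
    curr_max = max_so_far = 0
    start = 0                      # start of the subarray ending at the current position
    best_start, best_end = 0, -1   # empty subarray by default
    for i, x in enumerate(arr):
        curr_max += x
        if curr_max < 0:
            # discard the negative-sum prefix and restart at the next element
            curr_max = 0
            start = i + 1
        elif curr_max > max_so_far:
            max_so_far = curr_max
            best_start, best_end = start, i
    return max_so_far, arr[best_start:best_end + 1]

# max_subarray_with_indices([-2, -3, 4, -1, -2, 1, 5, -3]) → (7, [4, -1, -2, 1, 5])
```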
Quick Sort: A sorting algorithm that partitions an array around a pivot element, recursively sorts each
partition, and then concatenates them into a sorted array. It is based on the divide and conquer approach,
where a large problem is broken down into smaller sub-problems, solved recursively, and combined to get the
final solution. It has an average time complexity of O(n log n) and a worst-case time complexity of O(n^2),
where n is the number of elements in the array. It has an average space complexity of O(log n) for the recursive call stack (O(n) in the worst case).
It is not a stable sorting algorithm, meaning it does not preserve the relative order of equal elements in the
sorted array.
Example: Suppose we want to sort the following array in ascending order using quick sort:
[10, 80, 30, 90, 40]
We start by choosing a pivot element from the array. There are different ways to choose a pivot element, such
as the first element, the last element, a random element, or the median element. For simplicity, we choose the
last element as the pivot in this example. The pivot element is 40.
We then partition the array into two subarrays: one that contains all the elements that are less than or equal to
the pivot, and one that contains all the elements that are greater than the pivot. We also place the pivot element
in its correct position in the sorted array. To do this, we use two pointers: one that tracks the index of the
smaller or equal elements (i), and one that tracks the index of the larger elements (j). We initialize i to -1 and
j to 0. We then loop through each element in the array (except for the pivot) and compare it with the pivot. If
it is smaller or equal to the pivot, we increment i by one and swap the current element with arr[i]. If it is larger
than the pivot, we do nothing. At the end of the loop, we swap arr[i+1] with arr[n-1], where n is the size of
the array. This places the pivot element in its correct position and partitions the array accordingly. Here is
how these variables change for each element in the array:
Element      Swap?                    i    j    arr
10           Yes (arr[0] <-> arr[0])  0    0    [10, 80, 30, 90, 40]
80           No                       0    1    [10, 80, 30, 90, 40]
30           Yes (arr[1] <-> arr[2])  1    2    [10, 30, 80, 90, 40]
90           No                       1    3    [10, 30, 80, 90, 40]
pivot (40)   Yes (arr[2] <-> arr[4])  1    -    [10, 30, 40, 90, 80]
The final value of i is 1 and j is n-1 = 4. We swap arr[i+1] with arr[n-1], which are arr[2] and arr[4], respectively.
This places the pivot element (40) in its correct position (index 2) and partitions the array into two subarrays:
[10, 30] and [90, 80].
We then recursively sort each subarray using quick sort. To sort the left subarray [10, 30], we choose its last
element as the pivot (30) and partition it accordingly:
[10] [30]
The final value of i is 0. We swap arr[i+1] with arr[n-1], which are arr[1] and arr[1], respectively. This does not change anything and places the pivot element (30) in its correct position (index 1) and partitions the array into two subarrays: [10] and [].
We then recursively sort each subarray using quick sort. The left subarray [10] is already sorted as it has only
one element. The right subarray [] is empty and does not need to be sorted. We have now sorted the left
subarray of the original array.
We repeat the same process for the right subarray [90, 80] of the original array. We choose its last element as
the pivot (80) and partition it accordingly:
[80] [90]
The final value of i is -1. We swap arr[i+1] with arr[n-1], which are arr[0] and arr[1], respectively. This swaps 90 and 80, places the pivot element (80) in its correct position (index 0), and partitions the array into two subarrays: [] and [90].
We then recursively sort each subarray using quick sort. The left subarray [] is empty and does not need to be
sorted. The right subarray [90] is already sorted as it has only one element. We have now sorted the right
subarray of the original array.
We then concatenate the sorted left subarray [10, 30], the pivot element 40, and the sorted right subarray [80,
90] into a sorted array:
[10, 30, 40, 80, 90]
We have now sorted the original array using quick sort.
Here is a possible Python implementation of quick sort:
def quick_sort(arr):
    # base case: if the array has one or zero elements, it is already sorted
    if len(arr) <= 1:
        return arr
    # recursive case: choose the last element as the pivot and partition the array accordingly
    pivot = arr[-1]
    i = -1  # index of smaller or equal elements
    for j in range(len(arr) - 1):  # loop through each element except for the pivot
        if arr[j] <= pivot:  # if the current element is smaller or equal to the pivot
            i += 1  # increment i by one
            arr[i], arr[j] = arr[j], arr[i]  # swap the current element with arr[i]
    # place the pivot element in its correct position by swapping it with arr[i+1]
    arr[i+1], arr[-1] = arr[-1], arr[i+1]
    # partition the array into two subarrays: one with all elements less than or
    # equal to the pivot, and one with all elements greater than the pivot
    left = arr[:i+1]
    right = arr[i+2:]
    # recursively sort each subarray and concatenate them with the pivot element
    return quick_sort(left) + [pivot] + quick_sort(right)
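The implementation above partitions in place but then copies the two halves with slicing, so it is not fully in place. A sketch of an in-place variant that recurses on index bounds instead (the lo/hi parameters are ours):

```python
def quick_sort_in_place(arr, lo=0, hi=None):
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:                # zero or one element in this range
        return arr
    pivot = arr[hi]             # last element of the range as the pivot
    i = lo - 1                  # index of smaller or equal elements
    for j in range(lo, hi):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[hi] = arr[hi], arr[i + 1]  # place the pivot
    quick_sort_in_place(arr, lo, i)            # sort the left partition
    quick_sort_in_place(arr, i + 2, hi)        # sort the right partition
    return arr

# quick_sort_in_place([10, 80, 30, 90, 40]) → [10, 30, 40, 80, 90]
```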
Breadth First Search: A graph traversal algorithm that starts traversing the graph from a given root node and
explores all the neighboring nodes. Then, taking each of those nearest nodes in turn, it explores their unexplored neighbors.
The process is repeated until all the reachable nodes are visited. The algorithm uses a queue data structure to store the
nodes that are to be visited and a set data structure to store the nodes that are already visited. It has a time
complexity of O(V + E), where V is the number of vertices and E is the number of edges in the graph. It has
a space complexity of O(V), where V is the number of vertices in the graph.
Example: Suppose we want to traverse the following graph using breadth first search, starting from node A (the original figure is not reproduced here; from the trace below, its edges are A-B, A-C, B-D, B-E, C-F, and D-G):
We start by creating an empty queue Q and an empty set S. We then enqueue the root node A to Q and add it
to S. We then loop until Q is empty. In each iteration, we dequeue a node from Q and visit it. Then, we
enqueue all its unvisited neighbors to Q and add them to S. Here is how Q and S change for each iteration:
Iteration   Dequeued/visited node   Enqueued nodes   Q            S
1           A                       B, C             [B, C]       {A}
2           B                       D, E             [C, D, E]    {A, B}
3           C                       F                [D, E, F]    {A, B, C}
4           D                       G                [E, F, G]    {A, B, C, D}
5           E                       -                [F, G]       {A, B, C, D, E}
6           F                       -                [G]          {A, B, C, D, E, F}
7           G                       -                []           {A, B, C, D, E, F, G}
The loop ends when Q is empty. The order of visited nodes is: A -> B -> C -> D -> E -> F -> G.
Here is a possible Python implementation of breadth first search:
def bfs(graph, root):
    # create an empty queue Q and an empty set S
    Q = []
    S = set()
    # enqueue root to Q and add it to S
    Q.append(root)
    S.add(root)
    # loop until Q is empty
    while Q:
        # dequeue a node from Q and visit it
        node = Q.pop(0)
        print(node)
        # enqueue all unvisited neighbors of node to Q and add them to S
        for neighbor in graph[node]:
            if neighbor not in S:
                Q.append(neighbor)
                S.add(neighbor)
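Q.pop(0) on a Python list is O(n). A sketch using collections.deque for O(1) dequeues, returning the visit order instead of printing; the adjacency list below is our reconstruction of the missing figure from the trace above:

```python
from collections import deque

def bfs_order(graph, root):
    order = []
    S = {root}
    Q = deque([root])
    while Q:
        node = Q.popleft()       # O(1) dequeue from the left
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in S:
                S.add(neighbor)
                Q.append(neighbor)
    return order

# adjacency list reconstructed from the worked example
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
         "D": ["G"], "E": [], "F": [], "G": []}
# bfs_order(graph, "A") → ['A', 'B', 'C', 'D', 'E', 'F', 'G']
```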
Depth First Search: A graph traversal algorithm that starts traversing the graph from a given root node and
explores as far as possible along each branch before backtracking. The algorithm uses a stack data structure
to store the nodes that are to be visited and a set data structure to store the nodes that are already visited. It
has a time complexity of O(V + E), where V is the number of vertices and E is the number of edges in the
graph. It has a space complexity of O(V), where V is the number of vertices in the graph.
Example: Suppose we want to traverse the following graph using depth first search, starting from node A (the original figure is not reproduced here; from the trace below, its edges are A-B, A-C, A-D, D-E, and E-F):
We start by creating an empty stack S and an empty set V. We then push the root node A to S and add it to V.
We then loop until S is empty. In each iteration, we pop a node from S and visit it. Then, we push all its
unvisited neighbors to S and add them to V. Here is how S and V change for each iteration:
Iteration   Popped/visited node   Pushed nodes   S           V
1           A                     B, C, D        [D, C, B]   {A}
2           D                     E              [E, C, B]   {A, D}
3           E                     F              [F, C, B]   {A, D, E}
4           F                     -              [C, B]      {A, D, E, F}
5           C                     -              [B]         {A, D, E, F, C}
6           B                     -              []          {A, D, E, F, C, B}
The loop ends when S is empty. The order of visited nodes is: A -> D -> E -> F -> C -> B.
Here is a possible Python implementation of depth first search:
def dfs(graph, root):
    # create an empty stack S and an empty set V
    S = []
    V = set()
    # push root to S and add it to V
    S.append(root)
    V.add(root)
    # loop until S is empty
    while S:
        # pop a node from S and visit it
        node = S.pop()
        print(node)
        # push all unvisited neighbors of node to S and add them to V
        for neighbor in graph[node]:
            if neighbor not in V:
                S.append(neighbor)
                V.add(neighbor)
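A sketch that returns the visit order instead of printing; as in the text, neighbors are marked visited when pushed. The adjacency list below is our reconstruction of the missing figure from the trace above:

```python
def dfs_order(graph, root):
    order = []
    V = {root}
    S = [root]
    while S:
        node = S.pop()           # LIFO: explore the most recently pushed node
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in V:
                V.add(neighbor)
                S.append(neighbor)
    return order

# adjacency list reconstructed from the worked example
graph = {"A": ["B", "C", "D"], "B": [], "C": [],
         "D": ["E"], "E": ["F"], "F": []}
# dfs_order(graph, "A") → ['A', 'D', 'E', 'F', 'C', 'B']
```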
Prim’s Algorithm: A minimum spanning tree algorithm that finds a subset of the edges of a weighted
undirected graph that forms a tree that includes every vertex and has the minimum sum of weights among all
the trees that can be formed from the graph. It is based on the greedy approach, where at each step, it chooses
the cheapest edge that connects a vertex in the tree to a vertex not in the tree. The algorithm starts with an
arbitrary vertex as the root of the tree and grows the tree by adding one edge at a time until all vertices are
included. The algorithm uses a priority queue data structure to store the vertices that are not yet in the tree and
their cheapest costs of connection to the tree. It has a time complexity of O(E log V), where E is the number
of edges and V is the number of vertices in the graph. It has a space complexity of O(V), where V is the
number of vertices in the graph.
Example: Suppose we want to find the minimum spanning tree of the following graph using Prim’s algorithm,
starting from vertex A (the original figure is not reproduced here; its edge weights are listed in the Kruskal example below). We start by creating an empty tree T and a priority queue Q. We then enqueue the root
vertex A to Q with a cost of 0. We then loop until Q is empty. In each iteration, we
dequeue the vertex from Q with the minimum cost that is not already in T and add it to T, together with the edge
through which it was reached. Then, we enqueue all its neighbors that are not yet in T
with their costs of connection to T; if a cheaper connection to a vertex is found later, the cheaper entry is dequeued first. Here is how Q and T change for each
iteration:
Iteration   Dequeued vertex   Cost   Added edge   Q                          T
1           A                 0      -            [(B, 2), (C, 3), (D, 3)]   {A}
2           B                 2      (A, B)       [(C, 3), (D, 3), (E, 4)]   {A, B}
3           C                 3      (A, C)       [(D, 3), (E, 4), (F, 7)]   {A, B, C}
4           D                 3      (A, D)       [(E, 4), (F, 7), (G, 6)]   {A, B, C, D}
5           E                 4      (B, E)       [(F, 5), (G, 6)]           {A, B, C, D, E}
6           F                 5      (E, F)       [(G, 6)]                   {A, B, C, D, E, F}
7           G                 6      (D, G)       []                         {A, B, C, D, E, F, G}
The loop ends when Q is empty. The minimum spanning tree T contains the following edges: (A, B), (A, C),
(A, D), (B, E), (E, F), (D, G). The total weight of the tree is 2 + 3 + 3 + 4 + 5 + 6 = 23.
Here is a possible Python implementation of Prim’s algorithm:
import heapq  # to use a priority queue

def prim(graph, root):
    # create an empty tree T (a set of edges) and a priority queue Q
    T = set()
    visited = set()
    # enqueue root to Q with a cost of 0 and no parent
    Q = [(0, root, None)]
    # loop until Q is empty
    while Q:
        # dequeue the vertex from Q with the minimum connection cost
        cost, node, parent = heapq.heappop(Q)
        if node in visited:
            continue  # a cheaper connection to this vertex was already used
        visited.add(node)
        if parent is not None:
            T.add((parent, node))
        # enqueue all unvisited neighbors of node with their costs of connection to T
        for neighbor, weight in graph[node]:
            if neighbor not in visited:
                heapq.heappush(Q, (weight, neighbor, node))
    # return T as the set of edges of the minimum spanning tree
    return T
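The figure for this example is missing, but an adjacency list can be reconstructed from the edge weights tabulated in the Kruskal example. A compact, self-contained lazy-Prim sketch (function and variable names are ours) that checks the expected total weight:

```python
import heapq

def mst_weight(graph, root):
    # lazy Prim's: pop the cheapest crossing edge, skip stale entries
    visited, total = set(), 0
    Q = [(0, root)]
    while Q:
        cost, node = heapq.heappop(Q)
        if node in visited:
            continue
        visited.add(node)
        total += cost
        for neighbor, weight in graph[node]:
            if neighbor not in visited:
                heapq.heappush(Q, (weight, neighbor))
    return total

# undirected graph reconstructed from the worked example
edges = [("A","B",2), ("A","C",3), ("A","D",3), ("B","E",4), ("E","F",5),
         ("D","G",6), ("C","F",7), ("A","G",8), ("B","C",8), ("E","G",9),
         ("D","E",9), ("B","G",10), ("F","G",11), ("C","D",11)]
graph = {v: [] for e in edges for v in e[:2]}
for u, v, w in edges:
    graph[u].append((v, w))
    graph[v].append((u, w))

# mst_weight(graph, "A") → 23
```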
Kruskal’s Algorithm: A minimum spanning tree algorithm that finds a subset of the edges of a weighted
undirected graph that forms a tree that includes every vertex and has the minimum sum of weights among all
the trees that can be formed from the graph. It is based on the greedy approach, where at each step, it chooses
the cheapest edge that does not form a cycle with the edges already in the tree. The algorithm starts with an
empty tree and sorts all the edges by their weights. Then, it iterates over the sorted edges and adds them to the
tree if they do not create a cycle. The algorithm uses a disjoint-set data structure to keep track of which vertices
are in which connected components of the tree and to efficiently check for cycles. It has a time complexity of
O(E log E), where E is the number of edges in the graph. It has a space complexity of O(E + V), where V is
the number of vertices in the graph.
Example: Suppose we want to find the minimum spanning tree of the following graph using Kruskal’s
algorithm: We start by creating an empty tree T and sorting all the edges by their weights in ascending order.
Here is the sorted list of edges:
Edge      Weight
(A, B)    2
(A, C)    3
(A, D)    3
(B, E)    4
(E, F)    5
(D, G)    6
(C, F)    7
(A, G)    8
(B, C)    8
(E, G)    9
(D, E)    9
(B, G)    10
(F, G)    11
(C, D)    11
We then loop over the sorted edges and add them to T if they do not form a cycle with the edges already in
T. We use a disjoint-set data structure to maintain the connected components of T and to check for cycles.
Here is how T and the disjoint-set change for each iteration:
Iteration 1: edge (A, B), weight 2: no cycle, added to T.
    T = {(A, B)}
    Disjoint-set: {A}, {B}, {C}, {D}, {E}, {F}, {G} -> {A, B}, {C}, {D}, {E}, {F}, {G}
Iteration 2: edge (A, C), weight 3: no cycle, added to T.
    T = {(A, B), (A, C)}
    Disjoint-set: {A, B}, {C}, {D}, {E}, {F}, {G} -> {A, B, C}, {D}, {E}, {F}, {G}
Iteration 3: edge (A, D), weight 3: no cycle, added to T.
    T = {(A, B), (A, C), (A, D)}
    Disjoint-set: {A, B, C}, {D}, {E}, {F}, {G} -> {A, B, C, D}, {E}, {F}, {G}
Iteration 4: edge (B, E), weight 4: no cycle, added to T.
    T = {(A, B), (A, C), (A, D), (B, E)}
    Disjoint-set: {A, B, C, D}, {E}, {F}, {G} -> {A, B, C, D, E}, {F}, {G}
Iteration 5: edge (E, F), weight 5: no cycle, added to T.
    T = {(A, B), (A, C), (A, D), (B, E), (E, F)}
    Disjoint-set: {A, B, C, D, E}, {F}, {G} -> {A, B, C, D, E, F}, {G}
Iteration 6: edge (D, G), weight 6: no cycle, added to T.
    T = {(A, B), (A, C), (A, D), (B, E), (E, F), (D, G)}
    Disjoint-set: {A, B, C, D, E, F}, {G} -> {A, B, C, D, E, F, G}
The loop ends once T contains n-1 = 6 edges. The minimum spanning tree T contains the following edges:
(A, B), (A, C), (A, D), (B, E), (E, F), (D, G). The total weight of the tree is 2 + 3 + 3 + 4 + 5 + 6 = 23.
Here is a possible Python implementation of Kruskal’s algorithm:
import heapq  # to use a priority queue

# a class to represent a disjoint-set data structure
class DisjointSet:
    def __init__(self, items):
        # initialize each item as the parent of its own singleton set
        self.parent = {x: x for x in items}
        self.rank = {x: 0 for x in items}

    # find the representative of x using path compression
    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    # merge the sets containing x and y using union by rank
    def union(self, x, y):
        xroot = self.find(x)
        yroot = self.find(y)
        if xroot == yroot:
            return
        if self.rank[xroot] < self.rank[yroot]:
            self.parent[xroot] = yroot
        elif self.rank[xroot] > self.rank[yroot]:
            self.parent[yroot] = xroot
        else:
            self.parent[yroot] = xroot
            self.rank[xroot] += 1

def kruskal(graph):
    # create an empty tree T and a priority queue Q
    T = set()
    Q = []
    # enqueue all the edges to Q with their weights
    for u in graph:
        for v, w in graph[u]:
            heapq.heappush(Q, (w, u, v))
    # create a disjoint-set data structure keyed by the vertices themselves
    ds = DisjointSet(graph)
    # loop until Q is empty or T has n-1 edges
    while Q and len(T) < len(graph) - 1:
        # dequeue an edge with the minimum weight from Q
        w, u, v = heapq.heappop(Q)
        # if the edge does not form a cycle with T, add it to T
        if ds.find(u) != ds.find(v):
            T.add((u, v))
            ds.union(u, v)
    # return T as the minimum spanning tree
    return T
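A compact, self-contained check of the expected result on the example graph; the helper below is our own sketch with an inlined union-find, not the class above:

```python
def kruskal_weight(edges, vertices):
    # simple union-find with path halving, keyed by vertex labels
    parent = {v: v for v in vertices}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    total = 0
    # process edges in ascending order of weight
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:            # no cycle: take the edge
            parent[ru] = rv
            total += w
    return total

# edge list from the worked example
edges = [("A","B",2), ("A","C",3), ("A","D",3), ("B","E",4), ("E","F",5),
         ("D","G",6), ("C","F",7), ("A","G",8), ("B","C",8), ("E","G",9),
         ("D","E",9), ("B","G",10), ("F","G",11), ("C","D",11)]
# kruskal_weight(edges, "ABCDEFG") → 23
```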
• Bubble sort: We use bubble sort to sort an array by repeatedly swapping adjacent elements that are out of order. It is simple to implement but inefficient for large arrays.
• Insertion sort: We use insertion sort to sort an array by inserting each element into its correct position in a sorted subarray. It is efficient for small or nearly sorted arrays.
• Selection sort: We use selection sort to sort an array by finding the smallest element in the unsorted subarray and swapping it with the first element. It is easy to implement but has a high number of comparisons.
• Counting sort: We use counting sort to sort an array of integers in a given range by counting the frequency of each element and then placing them in their correct positions. It is fast and stable but requires extra space and only works for integers.
• Linear search: We use linear search to find an element in an array by checking each element sequentially until we find a match or reach the end. It is simple but slow for large arrays.
• Binary search: We use binary search to find an element in a sorted array by repeatedly dividing the array into two halves and comparing the middle element with the target. It is fast and efficient but requires the array to be sorted.
• Quick sort: We use quick sort to sort an array by choosing a pivot element and partitioning the array into two subarrays such that all elements less than the pivot are in the left subarray and all elements greater than or equal to the pivot are in the right subarray. Then we recursively sort the subarrays. It is fast and widely used but has a worst-case complexity of O(n^2) and is not stable.
• Merge sort: We use merge sort to sort an array by dividing it into two halves, sorting each half recursively, and then merging the two sorted halves. It is stable and has a guaranteed complexity of O(n log n) but requires extra space.
• Maximum subarray problem: We use this problem to find a contiguous subarray of an array that has the largest sum. It can be solved using various methods such as brute force, divide and conquer, dynamic programming, or Kadane’s algorithm. It is useful for applications such as image processing, data compression, and stock market analysis.
• Binary tree: We use a binary tree to represent hierarchical data structures where each node has at most two children. It can be used for various purposes such as searching, sorting, traversal, expression evaluation, compression, etc.
• Heap sort: We use heap sort to sort an array by building a heap data structure from the array and then repeatedly extracting the maximum (or minimum) element from the heap and placing it at the end (or beginning) of the sorted subarray. It is fast and in-place but not stable.
• Binary search tree: We use a binary search tree to store data that can be compared and searched efficiently. It is a binary tree where each node’s value is greater than or equal to all values in its left subtree and less than all values in its right subtree. It can support operations such as insertion, deletion, search, minimum, maximum, predecessor, successor, etc.
• Breadth first search: We use breadth first search to traverse a graph or a tree by visiting all nodes at a given distance from the source node before moving to the next distance level. It can be used for finding the shortest path, testing connectivity, finding cycles, etc.
• Depth first search: We use depth first search to traverse a graph or a tree by visiting all nodes along a path from the source node before backtracking and exploring other paths. It can be used for finding connected components, topological sorting, detecting cycles, etc.
• Prim’s algorithm: We use Prim’s algorithm to find a minimum spanning tree of a weighted undirected graph. It starts with an arbitrary node and grows the tree by adding the edge with the smallest weight that connects a node in the tree with a node outside the tree. It is greedy and efficient but requires a priority queue data structure.
• Kruskal’s algorithm: We use Kruskal’s algorithm to find a minimum spanning tree of a weighted undirected graph. It sorts all edges by their weights and adds them one by one to the tree if they do not create a cycle. It is greedy and simple but requires a disjoint-set data structure.
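Binary search appears in the summary above but is not implemented elsewhere in this material; a minimal sketch, assuming a sorted input array:

```python
def binary_search(arr, target):
    # repeatedly halve the search range of the sorted array
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid          # found: return the index
        elif arr[mid] < target:
            lo = mid + 1        # target can only be in the right half
        else:
            hi = mid - 1        # target can only be in the left half
    return -1                   # not found

# binary_search([10, 30, 40, 80, 90], 80) → 3
```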