Cohort 3.0 Complete Notes Harkirat Singh
C++ is a compiled language: the compiler first converts the code into a binary, and only then does it run on your machine. Compilation takes time, but once it is done, the program runs very fast. This makes compiled languages well suited for production.
JS is an interpreted language that is executed line by line. It doesn't need a separate compilation step, and it is dynamically typed.
This means a variable in JavaScript is not bound to a specific data type: types are determined at runtime and can change as the program executes. JavaScript code is executed in a single thread.
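For example, dynamic typing means the same variable can hold values of different types over its lifetime:

```javascript
// Dynamic typing: the type lives with the value, not the variable.
let value = 42;            // value currently holds a number
console.log(typeof value); // "number"

value = "forty-two";       // reassigning a string is perfectly legal
console.log(typeof value); // "string"

value = { answer: 42 };    // ...and so is an object
console.log(typeof value); // "object"
```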
Memory management is the process of storing variable data in RAM. In JavaScript, memory is managed automatically by the Garbage Collector (Java uses a Garbage Collector as well). The single-threaded nature limits how well an app can scale.
The downsides of JavaScript:

- Runtime errors
- Performance overhead
- File I/O-heavy operations
File I/O refers to operations in which a program transfers large amounts of data between itself and external systems or devices. These operations usually require waiting for data to be read from or written to sources like disks, networks, databases, or other external devices, which can be time-consuming compared to in-memory computations.
Examples of I/O Heavy Operations:
- Reading a file
- Starting a clock (e.g., setTimeout)
- HTTP requests
Callback Function
Callback functions are functions that are passed as an argument to another function, which invokes them later.
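A minimal sketch (the function names here are illustrative):

```javascript
// greetUser is a callback: we pass it into processUserInput,
// and processUserInput decides when to invoke it.
function greetUser(name) {
  return `Hello, ${name}!`;
}

function processUserInput(name, callback) {
  // the receiving function calls back with whatever arguments it chooses
  return callback(name);
}

const message = processUserInput("Alice", greetUser);
console.log(message); // Hello, Alice!
```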
What are Classes in JavaScript?
Classes in JavaScript are a blueprint for creating objects with predefined properties and methods.
Defining a Class
To define a class in JavaScript, you use the class keyword followed by the class name. Inside the class, we define a constructor method, a special method for creating and initialising an object created with a class. You can also define other methods that belong to the class.
```javascript
class Person {
  constructor(name, age) {
    this.name = name;
    this.age = age;
  }

  greet() {
    console.log(`Hello, my name is ${this.name} and I am ${this.age} years old.`);
  }
}
```
In the example above, we have defined a Person class with a constructor that takes name and age as parameters. The greet method is a regular method that prints a greeting message.
Creating an Instance
To create an instance of a class, you use the new keyword followed by the class name and pass any required arguments to the constructor.
```javascript
const person1 = new Person('Alice', 30);
person1.greet(); // Output: Hello, my name is Alice and I am 30 years old.
```
Async Await
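async/await is syntax for working with Promises that lets asynchronous code read top to bottom. A minimal sketch using a Promise-wrapped timer:

```javascript
// wait returns a Promise that resolves after ms milliseconds
function wait(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function run() {
  console.log("before");
  await wait(100);      // execution pauses here without blocking the thread
  console.log("after"); // runs ~100ms later
  return "done";
}

run().then((result) => console.log(result)); // logs "done" once run finishes
```

Note that `run()` returns immediately with a Promise; the code after `await` resumes only when the timer fires.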
What is DOM?
DOM stands for Document Object Model. It represents the structure of a web page as a tree of objects.
Why do we need DOM?
The DOM is used to represent the structure of a webpage as a tree of objects, allowing scripts to manipulate the content dynamically.
What is Static HTML?
A Static HTML page is a web document that remains unchanged over time. Unlike dynamic web pages, which can update and display new content based on user interactions or other factors, a static HTML page displays the same information to every visitor.
It is similar to a digital signboard that consistently shows the same message without any alterations. This type of web page is often used for content that does not need to be updated frequently, such as informational pages, company profiles, or contact information.
What is Dynamic HTML?
Dynamic HTML is a web page document where the HTML and its content can change dynamically based on user interactions or other events. This means that the content of the web page is not fixed and can be updated in real time without needing to reload the entire page.
For instance, when you click a button to add a task to a to-do list, the new task appears immediately on the page. This dynamic behaviour is often achieved using JavaScript, which can manipulate the DOM to update the content, styles, and structure of the web page on the fly.
Fetching Elements in DOM
There are five popular methods for fetching elements from an HTML document:

- querySelector
- querySelectorAll
- getElementById
- getElementsByClassName
- getElementsByTagName
For example:

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Document</title>
</head>
<body>
    <h1>To Do List</h1>
    <h4 id="task1">1. Go to Gym</h4>
    <h4 id="task2">2. Take Class</h4>
    <h4 id="task3">3. Go</h4>
    <input type="text" placeholder="Type your Task">
    <button onclick="AddTask()">Add Task</button>
    <script>
        // Select the input element; input.value gives the text typed in the box
        var input = document.querySelector("input");
        console.log(input.value);

        // Select an element by its ID using the # selector
        let task3 = document.querySelector("#task3");
        console.log(task3);

        // Minimal handler so the Add Task button works: log the typed task
        function AddTask() {
            console.log(input.value);
        }
    </script>
</body>
</html>
```
State-Derived Frontend
The concepts of components and state were introduced to simplify our code and make it more maintainable. Components are reusable, self-contained pieces of UI that can be combined to build complex interfaces.
Each component can manage its own state, which represents the data or properties that control its behaviour and appearance. For example, consider a to-do list application. Each task in the list can be represented as a component.
The application's state might include the list of tasks, the current input value, and any filters applied to the list. When a user adds a new task, the state is updated to include the new task, and the UI re-renders to display it.
We can build more modular, scalable, and interactive web applications using components and states.
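The idea can be sketched without any framework (the names below are illustrative, not React APIs): the UI is always derived from the current state, and every state update triggers a re-render.

```javascript
// A tiny state-derived UI: the "view" is re-computed from state, never edited directly.
let state = { tasks: ["Go to Gym"] };

// render derives the UI (here, just a string) purely from the current state
function render(s) {
  return s.tasks.map((t, i) => `${i + 1}. ${t}`).join("\n");
}

// updating state means producing a new state and re-rendering
function addTask(task) {
  state = { tasks: [...state.tasks, task] };
  console.log(render(state));
}

addTask("Take Class");
// 1. Go to Gym
// 2. Take Class
```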
Reconciliation
Reconciliation is the process of identifying the differences between the old state and the new state of an application, so that only what changed needs to be re-rendered. This process is crucial in modern web development, especially when dealing with dynamic user interfaces that frequently change in response to user actions or data updates.
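A naive sketch of the idea (real libraries such as React diff entire element trees, not flat objects):

```javascript
// Compare old and new state and report only the keys that actually changed.
function diff(oldState, newState) {
  const changes = {};
  for (const key of Object.keys(newState)) {
    if (oldState[key] !== newState[key]) {
      changes[key] = newState[key];
    }
  }
  return changes;
}

const before = { count: 1, title: "Todos" };
const after = { count: 2, title: "Todos" };
console.log(diff(before, after)); // { count: 2 } — only count needs re-rendering
```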
Node Js
Node.js is a powerful runtime environment that allows developers to execute JavaScript code outside of a web browser. Built on Chrome's V8 JavaScript engine, Node.js is designed to build scalable network applications.
It uses an event-driven, non-blocking I/O model, which makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.
What are HTTP Servers?
HTTP Servers are essential components that enable communication between clients and servers over the Internet. When a client, such as a web browser, sends a request to access a webpage or resource, the HTTP server processes this request and returns the appropriate response. This response could be an HTML page, a file, or data in formats like JSON or XML.
The server runs various processes to handle these requests efficiently. It listens for incoming requests on specific ports, typically port 80 for HTTP and port 443 for HTTPS.
Upon receiving a request, the server parses it to understand what the client is asking for. It then routes the request to the appropriate handler, which might involve querying a database, executing server-side scripts, or fetching static files from the server's storage.
Why the HTTP Protocol?
The HTTP protocol is essential because it enables machines to communicate with each other over the Internet.
This communication is the backbone of web interactions, allowing browsers to request and receive web pages from servers.
When you type a URL into your browser, an HTTP request is sent to the server hosting the website.
The server then processes this request and returns the appropriate response, which your browser displays as a web page.
What is an IP Address
An IP address, or Internet Protocol address, is a unique identifier assigned to each device connected to a network that uses the Internet Protocol for communication.
Think of it as the digital address of your computer, smartphone, or any other device that connects to the internet.
Just like your home address allows mail to be delivered to your house, an IP address ensures that data sent over the internet reaches the correct destination.
Use the following command to check the IP address of any website:

```bash
ping website.com
```
Domain
A domain is a human-readable name mapped to a server's IP address. When someone types the domain name into their browser, the domain name system (DNS) translates this name into the corresponding IP address of the server hosting the website. This process allows users to access websites without remembering complex numerical IP addresses.
For example, when you type "example.com" into your browser, the DNS resolves this domain name to the server's IP address where the website's content is stored. This seamless translation is what makes navigating the internet user-friendly and efficient.
Headers, Query params in Express
What are Headers?
Headers are key-value pairs exchanged between the client and the server during HTTP requests and responses. They carry additional metadata that can be used for various purposes, such as authentication, specifying content types, caching policies, and more.
For instance, headers can include information about the type of content being sent (like JSON or HTML), the preferred language of the client, or the credentials needed to access a resource.
When a client makes a request to the server, it can include headers to provide context about the request. For example, the Authorization header might contain a token that the server uses to verify the client's identity.
Similarly, the Content-Type header tells the server what kind of data is being sent, so it knows how to process it.
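As a sketch, here is how a server-side helper might pull a bearer token out of the Authorization header (the helper name and the "Bearer" scheme are common conventions, not Express APIs):

```javascript
// Extract the token from an "Authorization: Bearer <token>" header.
// Header keys arrive lowercased in Node/Express request objects.
function extractBearerToken(headers) {
  const auth = headers["authorization"];
  if (!auth || !auth.startsWith("Bearer ")) return null;
  return auth.slice("Bearer ".length);
}

const token = extractBearerToken({ authorization: "Bearer abc123" });
console.log(token); // abc123
```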
How to Create Dynamic Endpoints in Express
Creating dynamic endpoints in Express is an essential skill for building flexible and robust web applications. Dynamic endpoints allow your application to handle various routes and parameters, making it more versatile and capable of responding to different user requests.
Step 1. Set up your Express application. First, ensure you have Express installed in your project. If not, you can install it using npm:

```bash
npm install express
```

Then, create a basic Express application:

```javascript
const express = require('express');
const app = express();
const port = 3000;

app.listen(port, () => {
  console.log(`Server is running on port ${port}`);
});
```

Step 2. Define a basic route. Start by defining a simple route to ensure your server is working correctly:

```javascript
app.get('/', (req, res) => {
  res.send('Hello, World!');
});
```

Step 3. Create dynamic routes with route parameters. Dynamic routes use parameters to capture values from the URL. For example, you can create a route that captures a user ID:

```javascript
app.get('/user/:id', (req, res) => {
  const userId = req.params.id;
  res.send(`User ID: ${userId}`);
});
```

In this example, :id is a route parameter that captures the value provided in the URL.
Types of Routes in Express js
What is Middleware in Express
Middleware in Express is a function that runs during the request-response cycle. It can modify the request or response or end the process. Middleware functions handle tasks like logging, authentication, and error handling.
In simple terms, middleware functions are like a series of steps that a request goes through before getting a response. Each step can perform a specific task, making the process more organized and efficient.
```javascript
const express = require('express');
const app = express();

// Middleware function that modifies the request
function modifyRequestMiddleware(req, res, next) {
  // Add a custom property to the request object
  req.customProperty = 'This was added by middleware';

  // Log the original request URL and the new custom property
  console.log(`Original request URL: ${req.originalUrl}`);
  console.log(`Modified request custom property: ${req.customProperty}`);

  // Continue to the next middleware or route handler
  next();
}

// Use the middleware for all routes (it can also be scoped to specific routes)
app.use(modifyRequestMiddleware);

// Example route
app.get('/', (req, res) => {
  res.send(`Hello, World! Custom Property: ${req.customProperty}`);
});

// Start the server
const port = 3000;
app.listen(port, () => {
  console.log(`Server is running on port ${port}`);
});
```
CORS Middleware - Cross-Origin Resource Sharing
Cross-Origin Resource Sharing (CORS) is a security feature used by web browsers to control how web pages can request resources from a different domain than the one they came from. This is important for keeping the web secure and private. CORS is especially important for web apps that need to talk to APIs on other domains.
For example, if your web app is on example.com and needs to get data from api.example.com, CORS rules will decide if this request is allowed. Without the right CORS setup, these requests might be blocked by the browser, stopping your app from getting the data it needs. So, knowing how to set up CORS correctly is crucial for developers making web apps that use external APIs, ensuring smooth and safe data sharing between different domains.
For Example :
Imagine you have a web application hosted on example.com, and it needs to fetch data from an API hosted on api.example.com. By default, web browsers block these cross-origin requests for security reasons. However, if api.example.com wants to allow requests from example.com, it can do so by including specific HTTP headers in its response.
For instance, api.example.com can include the Access-Control-Allow-Origin header in its response, specifying which domains are allowed to access its resources. If the header is set to Access-Control-Allow-Origin: *, any domain can access the resources. Alternatively, it can specify a particular domain, like Access-Control-Allow-Origin: https://example.com, to allow only that domain to access the resources.
Why Use CORS?
When building web applications, you often need to make requests to APIs that are hosted on different domains. Without CORS, these requests would be blocked by the browser's same-origin policy. CORS allows you to specify which domains are permitted to access your resources, thereby enabling secure cross-domain communication.
Setting Up CORS in Express
To enable CORS in your Express application, you need to use the cors middleware. First, install the cors package using npm:
```bash
npm install cors
```
Next, integrate the cors middleware into your Express application. Here’s how you can do it:
```javascript
const express = require('express');
const cors = require('cors');
const app = express();
const port = 3000;

// Use the CORS middleware
app.use(cors());

app.listen(port, () => {
  console.log(`Server is running on port ${port}`);
});
```
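By default, cors() allows requests from any origin. To restrict access, you can pass an options object to it; the origin below is a placeholder for your own frontend's domain:

```javascript
// Hypothetical CORS options restricting the API to a single origin.
const corsOptions = {
  origin: "https://example.com", // only this origin may access the resources
  methods: ["GET", "POST"],      // HTTP methods allowed for cross-origin requests
};

// Wire it up the same way as before:
// app.use(cors(corsOptions));
console.log(corsOptions.origin); // https://example.com
```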
How to Serve Files on IntraNet
Sometimes we need to share files within the same network. A common example is when we host the front end and back end on different ports on our local system. In such cases, we can use an npm library called serve, which allows us to share our files on the local network.
Serve Installation
```bash
npm i serve
```
How to Use Serve
Open the terminal in the folder you want to share, then run the command:
```bash
npx serve
```
What is an Authentication Token
An authentication token is a piece of data that is used to verify the identity of a user or a system. It acts as a digital key that allows access to various resources and services.
Authentication tokens are commonly used in web applications to manage user sessions and ensure secure communication between the client and the server.
How Authentication Tokens Work
1. User Login: When users log in to an application, they provide their credentials, such as a username and password.
2. Token Generation: The server verifies the credentials and, if they are correct, generates an authentication token. This token is usually a long string of characters that is difficult to guess.
3. Token Storage: The token is sent back to the client, where it is stored, typically in local storage or a cookie.
4. Token Usage: For subsequent requests, the client includes the token in the request headers. The server then verifies the token to ensure the request is coming from an authenticated user.
5. Token Expiry: Tokens usually have an expiration time. After this time, the user will need to log in again to get a new token.
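From the client's side, the token-usage step above can be sketched like this (the token header name matches the convention used later in these notes; the URL is hypothetical):

```javascript
// Build the request options for an authenticated call (step 4 of the flow).
function buildAuthenticatedRequest(token) {
  return {
    method: "GET",
    headers: { token: token }, // the token travels in the request headers
  };
}

const request = buildAuthenticatedRequest("abc123");
console.log(request.headers.token); // abc123
// fetch("https://api.example.com/user", request) would then be verified server-side
```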
Types of Authentication Tokens
- JWT (JSON Web Tokens): These are widely used tokens that contain encoded JSON objects. They are self-contained and can include user information and claims.
- OAuth Tokens: These tokens are used in OAuth authentication, allowing third-party applications to access user data without exposing user credentials.
- Session Tokens: These are simpler tokens used to maintain user sessions. They are often stored on the server side.
Benefits of Using Authentication Tokens
- Security: Tokens help secure user data and ensure that only authenticated users can access certain resources.
- Scalability: Tokens can be easily managed and validated, making them suitable for scalable applications.
- Statelessness: Tokens allow for stateless authentication, meaning the server does not need to store session information, which can improve performance.
How to Create an Authentication in Express
Step 1. Initialise an npm project using the npm init command.
Step 2. Install Express using npm i express.
Step 3. Create a basic Express app with two endpoints: signin and signup, both using the POST method.
```javascript
const express = require("express")
const app = express()
const port = 3000

// Middleware to parse the JSON body
app.use(express.json())

// Global array that stores the user details
const userDetails = []

app.post("/signup", function (req, res) {
    const username = req.body.username
    const password = req.body.password
    userDetails.push({ username: username, password: password })
    res.json({ msg: "You are successfully signed up" })
    console.log(userDetails)
})

// The sign-in handler is filled in later
app.post("/signin", function (req, res) { })

app.listen(port, () => {
    console.log(`Server is running on port ${port}`)
})
```
Step 4. Add middleware app.use(express.json()) to parse the JSON body.
Step 5. Define a sign-up endpoint with middleware that takes username and password from the body.
Step 6. Next, define a global variable that will store the user details in an array.
Step 7. In the sign-up route, push the username and password into the global variable that stores the users' data in JSON format.
Generating a Random Token for the Authentication Process
Step 1. Define a function that generates a random token for the authentication process.
Step 2. Define a token variable as an empty string.
Step 3. Next, define arrays consisting of numbers and alphabets to help generate a unique token.
Step 4. Run a loop over the array multiple times to build a long token, appending a random character on each iteration, then return the token.
```javascript
function genearateToken() {
    let token = ""
    const options = [
        'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',
        'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z',
        '0', '1', '2', '3', '4', '5', '6', '7', '8', '9'
    ];
    // Build a 32-character token from random characters
    for (let index = 0; index < 32; index++) {
        token = token + options[Math.floor(Math.random() * options.length)]
    }
    return token
}
```
Generating a Token When Users Sign In to the App
Step 1. Go to the sign-in endpoint and define variables for the username and password, extracting their values from req.body.
Step 2. Check if the given username and password exist in the global variable using the find method and an if-else statement.
Step 3. If the username and password are valid, call the random token generator function, store the generated token in a variable, and push that token into the global variable.
Step 4. If the username and password are not valid, return a message stating that the username and password are not valid.
```javascript
app.post("/signin", function (req, res) {
    const username = req.body.username
    const password = req.body.password

    // Look for a matching user in the global array
    const user = userDetails.find(
        (u) => u.username === username && u.password === password
    )

    if (user) {
        const userToken = genearateToken()
        user.token = userToken
        res.json({ "username": username, "password": password, "Token": userToken })
    } else {
        res.status(403).send({ msg: "Invalid Username and Password" })
    }
    console.log(userDetails)
})
```
Defining an Authentication Endpoint
Step 1. Define an authentication endpoint with the GET method.
Step 2. In the endpoint, define a variable named userToken that takes its value from req.headers.token.
Step 3. Then check whether that token is present in the global variable: if it is, send back the actual username and password; otherwise, respond with "Invalid Token".
Code Example
```javascript
const express = require("express")
const app = express()
const port = 3000

app.use(express.json())

const userDetails = []

function genearateToken() {
    let token = ""
    const options = [
        'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',
        'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z',
        '0', '1', '2', '3', '4', '5', '6', '7', '8', '9'
    ];
    for (let index = 0; index < 32; index++) {
        token = token + options[Math.floor(Math.random() * options.length)]
    }
    return token
}

app.get("/", function (req, res) {
    res.send("Hello World")
})

app.post("/signup", function (req, res) {
    const username = req.body.username
    const password = req.body.password
    userDetails.push({ username: username, password: password })
    res.json({ msg: "You are successfully signed up" })
    console.log(userDetails)
})

app.post("/signin", function (req, res) {
    const username = req.body.username
    const password = req.body.password
    const user = userDetails.find(
        (u) => u.username === username && u.password === password
    )
    if (user) {
        const userToken = genearateToken()
        user.token = userToken
        res.json({ "username": username, "password": password, "Token": userToken })
    } else {
        res.status(403).send({ msg: "Invalid Username and Password" })
    }
    console.log(userDetails)
})

// Authentication endpoint
app.get("/user", function (req, res) {
    let userToken = req.headers.token
    let users = userDetails.find(user => user.token === userToken)
    console.log(users)
    if (users) {
        res.send({ "username": users.username, "password": users.password })
    } else {
        res.send("Invalid Token")
    }
    console.log(userDetails)
})

app.listen(port, () => {
    console.log('App is running at port', port)
})
```
How to Perform Authentication Using the JWT Library
Authentication is a critical part of any application, ensuring that users are who they claim to be. In this guide, we will walk through setting up authentication using the JWT (JSON Web Token) library in a Node.js application.
JWT is a popular method for securing APIs, as it allows for stateless authentication, meaning the server does not need to store session information.
What is JWT (JSON Web Token)
JWT stands for JSON Web Token. It is mostly used for authentication and for exchanging information between web applications.
JWT is stateless, which means we don't have to store session data on the server: all the data is stored in the token itself.
Why We Use JWT Instead of a Random Token
JWTs are stateless: we don't have to store them in a variable or database, because the token itself carries the user information (claims such as the username).
In contrast, random tokens are stateful, meaning we need to store them in a variable or database.
If the token is stateful, we have to query that store every time we verify a token, which adds a database round trip to every request.
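You can see the statelessness for yourself: a JWT is three base64url-encoded parts (header.payload.signature), and the payload is plain JSON that can be decoded without any database lookup. The sample token below is hand-built and unsigned, for illustration only; in practice the signature is what makes the token trustworthy.

```javascript
// A JWT's payload is just base64url-encoded JSON; decoding needs no lookup.
function decodePayload(jwtToken) {
  const payloadPart = jwtToken.split(".")[1];
  const json = Buffer.from(payloadPart, "base64url").toString("utf8");
  return JSON.parse(json);
}

// Hand-built sample token (header and signature parts are dummies)
const payload = Buffer.from(JSON.stringify({ username: "alice" })).toString("base64url");
const sample = `xxx.${payload}.yyy`;
console.log(decodePayload(sample)); // { username: 'alice' }
```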
Step 1: Setting Up Your Project
First, create a new Node.js project and install the necessary dependencies. You will need Express to handle HTTP requests and JSON web tokens to generate and verify tokens.
```bash
mkdir jwt-authentication
cd jwt-authentication
npm init -y
npm install express jsonwebtoken
```
Step 2: Creating the Express Server
Next, create a file named server.js and set up a basic Express server.
```javascript
const express = require('express');
const jwt = require('jsonwebtoken');
const app = express();
const port = 3000;

app.get("/", function (req, res) {
    res.send("Hello World")
})

app.listen(port, () => {
    console.log('App is running at port', port);
});
```
Step 3: Add middleware to parse JSON and Define a global variable to store user details.
In this step, we define a global middleware to parse the JSON body of incoming requests. This middleware lets us easily access the data sent in the request body.

```javascript
app.use(express.json())
```

Next, we define a global variable to store user details, since we will not use a database in this example.

```javascript
const userDetails = []
```
Step 4. Define a Sign-up Route with the POST method for user sign-up
Now, we need to create a sign-up route using the POST method so users can sign up. We define username and password variables to get the information from req.body.
After that, we store this data by pushing it in JSON format to our global array named userDetails. Then, we return a message indicating that the user signed up successfully.
```javascript
app.post("/signup", function (req, res) {
    const username = req.body.username
    const password = req.body.password
    userDetails.push({ username: username, password: password })
    res.json({ msg: "You are successfully signed up" })
    console.log(userDetails)
})
```
Step 5. Define a Sign-in Route with POST method for user sign-in
The next endpoint will be the sign-in endpoint using the POST method. Here, we will check if the user enters the correct username and password, and then assign them a JWT token.
We will define two variables, username and password, get the values from req.body. We will check if the username and password are in our global variable.
If valid, we will create the JWT: we call jwt.sign with two arguments, a payload object containing the username and the secret key JWT_SECRET. Finally, we return the JSON data with the username, password, and JWT token.
```javascript
// JWT_SECRET must be defined earlier in the file, e.g.:
// const JWT_SECRET = "my-secret-key"

app.post("/signin", function (req, res) {
    const username = req.body.username
    const password = req.body.password
    const user = userDetails.find(
        (u) => u.username === username && u.password === password
    )
    if (user) {
        const userToken = jwt.sign({ username: username }, JWT_SECRET)
        res.json({ "username": username, "password": password, "Token": userToken })
    } else {
        res.status(403).send({ msg: "Invalid Username and Password" })
    }
    console.log(userDetails)
})
```
Step 6. Define a user Route with the GET method for Authentication
Now, we will define a route for the authentication process. We will create a variable userToken to get the value from req.headers. Then, we will define a variable to verify the user token by calling jwt.verify with two arguments: the JWT token and the secret key.
Once we get the actual username, we will check if the user is in our global variable. If the user is present, we will return the username and password. Otherwise, we will return a response saying "Invalid Token".
```javascript
app.get("/user", function (req, res) {
    let userToken = req.headers.token
    // jwt.verify decodes the token and checks its signature
    // (note: it throws an error if the token is missing or tampered with)
    const decordeInformation = jwt.verify(userToken, JWT_SECRET)
    const username = decordeInformation.username
    let users = userDetails.find(user => user.username === username)
    console.log(users)
    if (users) {
        res.send({ "username": users.username, "password": users.password })
    } else {
        res.send("Invalid Token")
    }
    console.log(userDetails)
})
```
Conclusion
The authentication process using JSON Web Tokens (JWT) is now complete. We've set up a route to handle user authentication by verifying the provided token and returning the appropriate user details.
Here's a detailed breakdown of the steps involved:
1. Define the Route: We created a route using the GET method to handle authentication requests. This route is defined at the endpoint /user.
2. Extract the Token: Within the route handler, we extract the token from the request headers. This token is expected to be provided by the client in the req.headers.token field.
3. Verify the Token: We use the jwt.verify method to decode and verify the token. This method takes two arguments: the token itself and the secret key (JWT_SECRET). The result of this verification is stored in the decordeInformation variable, which contains the decoded information from the token.
4. Retrieve the Username: From the decoded information, we extract the username by accessing the username property of the decordeInformation object.
5. Find the User: We then search for the user in our global userDetails array using the find method, which checks if any user in the array has a username that matches the extracted username.
6. Return the Response:
   - If the user is found, we send a response containing the user's username and password.
   - If the user is not found, we send a response indicating that the token is invalid.

Here's the complete code for the route:
```javascript
app.get("/user", function (req, res) {
    let userToken = req.headers.token;
    const decordeInformation = jwt.verify(userToken, JWT_SECRET); // decode and verify the JWT
    const username = decordeInformation.username;
    let users = userDetails.find(user => user.username === username);
    console.log(users);
    if (users) {
        res.send({ "username": users.username, "password": users.password });
    } else {
        res.send("Invalid Token");
    }
    console.log(userDetails);
});
```
By following these steps, we ensure that only authenticated users can access their details, providing a secure way to handle user authentication in our application.
How to Connect Frontend and Backend
Week 7 Day 1
What is a Database?
A database is an organized collection of data that is stored and accessed electronically. Databases are designed to manage large amounts of information by storing, retrieving, and managing data efficiently.
They are essential for various applications, from simple data storage to complex data analysis and transaction processing.
What are NoSQL Databases?
NoSQL Databases are a broad category of databases that diverge from the traditional relational model used in SQL Databases.
They are designed to handle a variety of data models and workloads that may not fit neatly into the tabular schema of relational databases.
Main Advantage of NoSQL Databases
There are mainly two advantages of using NoSQL databases:

- Schema Flexibility: NoSQL databases like MongoDB provide schema flexibility, which means we can store data that doesn't have any fixed structure or format.
- Scalability: Many NoSQL databases are designed to scale out horizontally, making it easier to distribute data across multiple servers and handle large volumes of traffic.
What is a Schema?
A schema provides a detailed blueprint of how your database will be structured. It represents the logical organization and storage of data within the database.
Essentially, a schema outlines the way data is organized into tables, the relationships between these tables, and the constraints that govern the data.
What is the Advantage of Using NoSQL Databases?
NoSQL databases offer several advantages over traditional relational databases, making them popular for many modern applications. Here are some key benefits:
- Scalability: NoSQL databases are designed to scale by distributing data across multiple servers. This horizontal scaling allows them to handle large volumes of data and high traffic loads more efficiently than traditional relational databases, which typically scale up by adding more powerful hardware.
- Flexibility: Unlike relational databases that require a fixed schema, NoSQL databases offer flexible data models. This means you can store unstructured, semi-structured, or structured data without needing to define a rigid schema upfront. This flexibility is particularly useful for applications that deal with diverse data types or rapidly changing data structures.
- Performance: NoSQL databases are optimized for specific data models and access patterns, which can result in faster read and write operations. For example, key-value stores are highly efficient for simple lookups, while document stores excel at managing hierarchical data.
- Availability: Many NoSQL databases are designed with high availability and fault tolerance in mind. They often use replication and distributed architecture to ensure that the system remains operational even if some nodes fail. This makes NoSQL databases a reliable choice for applications that require continuous uptime.
- Cost-Effectiveness: By leveraging commodity hardware and open-source software, NoSQL databases can be more cost-effective than traditional relational databases. The ability to scale out using inexpensive servers reduces the overall cost of ownership.
- Handling Big Data: NoSQL databases are well-suited for big data applications that involve large volumes of data generated at high velocity. They can efficiently process and store massive datasets, making them ideal for use cases such as real-time analytics, IoT data storage, and social media data management.
- Support for Modern Applications: NoSQL databases are designed to meet the needs of modern applications, such as mobile apps, web apps, and cloud-based services. They provide features like automatic sharding, replication, and eventual consistency, which are essential for building scalable and resilient applications.
What is MongoDB
MongoDB is a NoSQL database that uses a document-oriented approach to data storage. Unlike traditional relational databases, MongoDB stores data in flexible, JSON-like documents.
These documents can have nested structures and varied fields, allowing for a more dynamic and adaptable data model. This flexibility makes it easier to handle complex data types and structures without the need for predefined schemas.
MongoDB's ability to store data in this way enables developers to build applications that can evolve, adding new fields and structures as needed without requiring major changes to the database schema. This makes MongoDB an excellent choice for applications that require rapid development and iteration.
What are MongoDB Clusters?
A MongoDB Cluster is a group of servers that work together to store and manage a collection of data. In a cluster, data is distributed across multiple machines, which helps to improve performance, scalability, and reliability.
Each machine in the cluster is known as a node, and these nodes can be spread across different geographic locations to ensure high availability and fault tolerance.
Within a MongoDB Cluster, data is stored in collections, which are similar to tables in relational databases. These collections contain documents, which are the basic units of data in MongoDB.
Documents are stored in a flexible, JSON-like format, allowing for nested structures and varied fields. This flexibility makes it easier to handle complex data types and structures without the need for predefined schemas.
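For instance, a single document in a hypothetical users collection might look like this (all field names and values here are made up for illustration):

```json
{
  "_id": "64f1c2a9e5b4c8d7f0a1b2c3",
  "name": "Asha",
  "email": "asha@example.com",
  "addresses": [
    { "city": "Pune", "pin": "411001" }
  ]
}
```

Note how the nested addresses array would need a separate table and a join in a relational database, but lives naturally inside one document here.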
Using Mongoose for MongoDB
To use the MongoDB database effectively, we will utilize the Mongoose library. Mongoose is an Object Data Modeling (ODM) library for MongoDB and Node.js. It provides a straightforward, schema-based solution to model your application data.
With Mongoose, you can define schemas for your database, which helps structure and validate the data you store.
By defining schemas, Mongoose allows you to enforce a specific structure on the documents within a collection. This means you can specify the types of data each field should contain, set default values, and even create custom validation rules.
This added layer of structure and validation can be incredibly useful for maintaining data integrity and consistency across your application.
Create Schema With Mongoose
Step 1. First, install the Mongoose library using the command given below.

```bash
npm i mongoose
```

Step 2. Now, create a database file with a .js extension (for example, db.js).

Step 3. Import the Mongoose library:

```javascript
const mongoose = require("mongoose")
```

Step 4. Then, pull Schema and ObjectId from Mongoose like this:

```javascript
const Schema = mongoose.Schema
const ObjectId = mongoose.ObjectId
```

Step 5. Now, define a schema by calling the Schema constructor and specifying the data type for each field:

```javascript
const user = new Schema({
    name: String,
    email: String,
    password: String
})
```

Step 6. Then, create a model using the model function from the Mongoose library:

```javascript
const usermodel = mongoose.model("users", user)
```

Step 7. Now, export this model so that we can use it in our backend:

```javascript
module.exports = { usermodel: usermodel }
```
Connecting MongoDB Database With Backend
To connect your MongoDB database with your backend application, follow these detailed steps:
To connect to your MongoDB database, use the mongoose.connect method. You typically do this in your main server file (e.g., app.js or server.js). Here is an example of how to connect to a MongoDB database:
```javascript
const mongoose = require("mongoose");
const dataBase = mongoose.connect('mongodb://localhost:27017/yourDatabaseName')
```
Replace 'mongodb://localhost:27017/yourDatabaseName' with the actual URI of your MongoDB database.
Then you also have to import the models exported from the database file:

```javascript
const { usermodel, Todomodel } = require("./db")
```
Example of Inserting Data into MongoDB Database via Request
Here is an example of inserting a record through an API request:
```javascript
app.post("/signup", async function (req, res) {
    const name = req.body.name
    const password = req.body.password
    const email = req.body.email
    try {
        // create() already saves the document, so a separate save() is not needed
        await usermodel.create({
            name: name,
            email: email,
            password: password
        })
        res.json({ msg: "Signed up successfully" })
    } catch (error) {
        res.status(403).json({ msg: "Some error has occurred" })
    }
})
```
Example of Finding Records in MongoDB
```javascript
app.post("/signin", async function (req, res) {
    const password = req.body.password
    const email = req.body.email
    const user = await usermodel.findOne({
        email: email,
        password: password
    })
    if (user) {
        const token = jwt.sign({ id: user._id }, JWT_Secret)
        res.json({ "Token": token })
    } else {
        res.status(403).json({ message: "Incorrect Credentials" })
    }
})
```
Conclusion
In conclusion, the MongoDB section of this article explores NoSQL databases, highlighting their benefits like schema flexibility and scalability. It introduces MongoDB as a document-oriented database that stores data in a flexible, JSON-like format, suitable for handling complex and evolving data structures.
The section also explains MongoDB clusters, which improve performance, scalability, and reliability by distributing data across multiple nodes. This guide is a valuable resource for developers looking to use MongoDB for their data storage and management needs.
Storing Passwords in the Database with Hashing
Week 7 Day 2
What is Hashing?
Hashing is a process used in computer science and cryptography to transform data into a fixed-size string of characters, which is typically a hash code.
This transformation is accomplished using a hash function, which takes an input (or 'message') and returns a unique hash value. The primary purpose of hashing is to enable fast data retrieval and to ensure data integrity.
Why we should use Hashing
Password hashing is a special technique used to securely store passwords, making them hard to misuse. Instead of storing passwords directly in our database, we should convert them to hashes and then store them.
What is Salt?
A salt is a randomly generated value added to the password before hashing. This prevents attackers from using precomputed tables (rainbow tables) to crack passwords.
What is Salting?
Salting is a technique used with hashing to make passwords stronger. Without a salt, if two users have the same password, their hashes will be identical, making it easier for an attacker to crack them. Salting adds randomly generated text to the password before hashing it, so each stored hash value differs even for identical passwords.
Bcrypt
Bcrypt is a popular password hashing function designed to be computationally intensive to resist brute-force attacks.
It incorporates a salt to protect against rainbow table attacks and includes a work factor, which determines how slow the hashing process will be.
This work factor can be adjusted to make the hashing process more time-consuming, thereby increasing the security of the hashed passwords.
How to use the Bcrypt Library
Convert Password with Hashing
Step 1. First, install the bcrypt library using the npm command shown below.

```bash
npm i bcrypt
```
Step 2. Next, you need to import bcrypt into your project.
Step 3. Then, convert the user's password using the hash function provided by the bcrypt library.
```javascript
bcrypt.hash(myPlaintextPassword, saltRounds, function(err, hash) {
    // Store hash in your password DB.
});

// Example using the promise-based form:
const hashPassword = await bcrypt.hash(password, 8)
```
Step 4. Now, save the hashed password in the database.
Hash Password Verification with Bcrypt
Once the user sends the email and password to the signin route, you need to retrieve them from the request body and store them in variables.
Next, you must check if the user's email is in your database. For example:
```javascript
app.post("/signin", async function (req, res) {
    const email = req.body.email;
    const password = req.body.password;
    const response = await UserModel.findOne({
        email: email,
    });
```
If the user's email is not in your database, you need to send a response indicating that the email is not found.
```javascript
    if (!response) {
        res.status(403).json({
            message: "User Not Found"
        })
        return // stop here so the password check below is not reached
    }
```
But if the user's email is present, you need to check if the password is correct. The question is, how do you check this since the password stored in the database is hashed, and the password the user sends is in plain text?
To do this, you use the compare function from the bcrypt library. It takes two arguments: the first is the password entered by the user, and the second is the hashed password stored in your database, which you can retrieve with a database call.
```javascript
const passwordMatch = await bcrypt.compare(password, response.password)
```
The compare function will return true or false. If the password is correct, you can assign a JWT token. If the password doesn't match, you can return an error message.
```javascript
if (passwordMatch) {
    const token = jwt.sign({ id: response._id.toString() }, JWT_SECRET);
    res.json({ token })
} else {
    res.status(403).json({ message: "Incorrect creds" })
}
```
Adding Schema Validation Using Zod
When building applications, ensuring that the data you receive and process is valid and structured correctly is crucial.
This is where schema validation comes into play. By using a library like Zod, we can define schemas that your data must adhere to, providing a robust way to validate incoming data.
Why Use Zod for Schema Validation?
Zod is a TypeScript-first schema declaration and validation library. It allows you to define the shape of your data and automatically validate it against this schema.
This helps catch errors early in the development process and ensures that your application handles data consistently.
Setting Up Zod
To get started with Zod, you first need to install it in your project. You can do this using npm or yarn:
```bash
npm install zod
```
Step 1. Import the z variable from the zod module.
```javascript
const { z } = require("zod")
```
Step 2. We are adding validation for the signup details, so inside the signup route we will define a variable and use Zod's z.object() function to create a schema object.
```javascript
const requiredBody = z.object({})
```
Step 3. In the next step, we will define some keys and their validation rules inside the object method like this:
```javascript
const requiredBody = z.object({
    // email should be a string
    email: z.string(),
    name: z.string(),
    // password should be a string with a minimum and maximum length
    password: z.string().min(6).max(8)
})
```
Step 4. Then we need to validate the incoming data, so we will use the .safeParse method from the Zod library on the request body.
```javascript
const parsebody = requiredBody.safeParse(req.body)
```
Step 5. safeParse returns an object with a success key. If validation passes, success is true and the data key holds the parsed values. If validation fails, success is false and the error key holds an object with details about what caused the failure.
```javascript
const requiredBody = z.object({
    email: z.string(),
    name: z.string(),
    // password should be a string with a minimum of 6 and a maximum of 8 characters
    password: z.string().min(6).max(8)
})
const parsebody = requiredBody.safeParse(req.body)
if (!parsebody.success) {
    res.json({
        error: "Invalid Format",
        details: parsebody.error
    })
    return
}
// To access the validated data
const { email, name, password } = parsebody.data
```
Week 9 React Basics
Why do we need React?
React is a powerful JavaScript library that significantly enhances the process of building front-end applications. It streamlines the development of user interfaces by simplifying the way we write and manage HTML, CSS, and JavaScript code.
With React, developers can use a special syntax called JSX, which allows them to write HTML-like code within JavaScript.
This JSX is then transformed into standard HTML, CSS, and JavaScript, making it easier to create dynamic and interactive web applications.
One of the key advantages of using React is its component-based architecture. This approach enables developers to break down complex user interfaces into smaller, reusable components.
Each component manages its own state and renders independently, which promotes modularity and code reusability. This makes it easier to maintain and update applications as they grow in size and complexity.
To convert React code into a format that browsers can understand, we use the command npm run build. This command compiles the entire application, transforming the JSX and other React-specific code into plain HTML, CSS, and JavaScript files.
These files can then be deployed to a web server, allowing users to access the application through their web browsers. This build process ensures that the application runs efficiently and is optimized for performance.
Why do we use React?
When using DOM manipulation, it's very hard to build large-scale applications. Before React and other frameworks, there were popular libraries like jQuery and Backbone.js that made DOM manipulation easier.
However, the problem wasn't fully solved because building big applications was still difficult to maintain, and developers had to write a lot of code, which is much less with frameworks like React.
What is State in React?
State in React is a JavaScript object representing the application's current status or condition at any given time. It holds information about the app's dynamic parts, which are the elements that can change over time as users interact with the application.
For example, the state can track user inputs, form data, or the results of API calls. By managing the state effectively, React components can update and render themselves automatically whenever the state changes, ensuring the user interface remains in sync with the underlying data.
What is a Component in React?
A component in React is a reusable piece of code that serves as a building block for the user interface. It is designed to take in data, known as "state" or "props," and use this data to render a specific part of the UI.
Components can be as simple as a button or as complex as an entire form. By breaking down the UI into smaller, manageable components, developers can create more organized and maintainable code.
Each component can manage its state and lifecycle, allowing for dynamic and interactive applications. Components can also be nested within each other, enabling the creation of complex interfaces by combining simpler elements.
What is Re-Rendering in React?
A re-render in React refers to the process where the Document Object Model (DOM) is updated to reflect changes in a component's state or props.
When the state of a component changes, React automatically triggers a re-render to ensure that the user interface accurately represents the current data.
This involves recalculating the component's output and updating the DOM with any differences. Re-rendering is a crucial part of React's efficient update mechanism, allowing the UI to stay in sync with the application's data without needing to reload the entire page.
What is JSX?
JSX stands for JavaScript XML. It is a syntax extension that is most commonly used with React, a popular JavaScript library for building user interfaces. JSX allows developers to write HTML-like code directly within JavaScript files.
This capability makes it easier to create and manage the user interface in React applications by allowing developers to visually structure their UI components in a way that resembles HTML. With JSX, you can seamlessly integrate HTML tags with JavaScript logic, making your code more readable and maintainable.
This integration simplifies the process of designing complex user interfaces by enabling developers to use familiar HTML syntax while leveraging the power of JavaScript to handle dynamic data and interactions.
Additionally, JSX helps in catching errors early during compilation, as it provides a more structured way to define UI components. Overall, JSX is a powerful tool that enhances the development experience by bridging the gap between HTML and JavaScript in React applications.
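As a rough sketch of what this transformation looks like, a JSX element compiles down to a plain JavaScript function call (shown here with the classic React.createElement runtime):

```jsx
// What you write in JSX:
const el = <h1 className="title">Hello</h1>;

// is roughly what the build step turns it into:
const el2 = React.createElement("h1", { className: "title" }, "Hello");
```

Both lines produce the same React element object, which is why JSX is optional syntax rather than a separate language.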
What is useState in React?
useState is a hook provided by React that allows you to add state management to your functional components. Before hooks, state management was primarily handled in class components, but useState enables you to manage the state in a more concise and functional way.
When you call useState, it returns an array with two elements: the current state value and a function to update that state. This hook is particularly useful for managing the local state within a component, such as form inputs, toggles, or any other data that might change over time.
By using useState, you can easily track and update your component's state, ensuring that your user interface reflects the latest data. This makes it a fundamental tool for building interactive and dynamic applications in React.
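As a conceptual model (NOT React's real implementation), the "array of current value plus updater, persisting across renders" behaviour can be sketched in plain JavaScript with a closure:

```javascript
// Simulate a single hook "slot": the state survives repeated calls,
// the way a component's state survives repeated renders.
function makeUseState() {
  let state;
  let initialized = false;
  return function useState(initialValue) {
    if (!initialized) { state = initialValue; initialized = true; }
    const setState = (next) => { state = next; };
    return [state, setState];
  };
}

const useCount = makeUseState();
let [count, setCount] = useCount(0); // first "render": count is 0
setCount(count + 1);                 // update the state
[count] = useCount(0);               // re-"render": sees 1, not the initial 0
console.log(count); // 1
```

Real React additionally schedules a re-render when the updater is called; this sketch only shows why the value persists while the initial value is ignored after the first render.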
What is useEffect in React ?
The useEffect hook in React is a powerful tool that allows you to perform side effects in your functional components. Side effects are operations that can affect other parts of your application or interact with external systems, such as fetching data from an API, subscribing to a data stream, or manually changing the DOM.
When you use useEffect, you can specify a function that React will run after the component renders. This function can contain any logic you need to execute as a side effect, and it can also return a cleanup function to tidy up resources when the component unmounts or before the effect runs again.
This cleanup is particularly useful for tasks like unsubscribing from a data stream or clearing timers. The useEffect hook takes two arguments: the effect function and an optional array of dependencies.
The dependencies array allows you to control when the effect should re-run. If you provide an empty array, the effect will only run once, similar to componentDidMount in class components. If you include variables in the array, the effect will run whenever any of those variables change, mimicking componentDidUpdate.
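Putting these pieces together, a minimal sketch of an effect with a cleanup function might look like this (the Clock component is a made-up example):

```jsx
import { useEffect, useState } from "react";

function Clock() {
  const [now, setNow] = useState(() => new Date());

  useEffect(() => {
    // Side effect: start a timer after the component renders.
    const id = setInterval(() => setNow(new Date()), 1000);
    // Cleanup: runs when the component unmounts (or before the effect re-runs).
    return () => clearInterval(id);
  }, []); // empty deps: run the effect once, after the first render

  return <p>{now.toLocaleTimeString()}</p>;
}
```

Without the cleanup function, the interval would keep firing after the component unmounts, causing a memory leak and state updates on a dead component.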
Week 9.4
Children in React
In React, the concept of children refers to the elements or components that are nested within another component. This is a fundamental aspect of building React applications, as it allows developers to create complex user interfaces by composing components together.
The children prop is a special property that React automatically passes to every component, enabling it to access and render any nested elements. When you define a component, you can use the children prop to render the content that is placed between the opening and closing tags of that component.
This makes it easy to create reusable components that can display different content based on what is passed to them as children. For example, you might have a Card component that can wrap various types of content, such as text, images, or other components, depending on what is passed as children.
Code Example -
```javascript
import { useState } from 'react'
import './App.css'

function App() {
  const [count, setCount] = useState(0)
  return (
    <>
      <Card>
        <div style={{ color: 'green' }}>
          What do you want to post
          <br />
          <input type="text" />
        </div>
      </Card>
    </>
  )

  // Card renders whatever is nested between its tags via the children prop.
  function Card({ children }) {
    return (
      <div style={{ background: "white", borderRadius: "10px", color: "black", padding: "10px", margin: "10px" }}>
        {children}
      </div>
    )
  }
}

export default App
```
Class-based Components vs. Functional Components
In the world of React, developers have the option to create components using either class-based components or functional components. Understanding the differences between these two approaches is crucial for making informed decisions about which to use in various scenarios.
Class-based Components
Class-based components are the traditional way of writing React components. They are defined using ES6 classes and extend from React.Component. This type of component allows you to use lifecycle methods, which are special methods that get called at different points in a component's life, such as when it is mounted, updated, or unmounted. These lifecycle methods provide a structured way to handle side effects and manage component state over time.
For example, a class-based component might look like this:
```jsx
class MyComponent extends React.Component {
  constructor(props) {
    super(props);
    this.state = { count: 0 };
  }

  componentDidMount() {
    // Code to run after the component is mounted
  }

  render() {
    return (
      <div>
        <p>Count: {this.state.count}</p>
        <button onClick={() => this.setState({ count: this.state.count + 1 })}>
          Increment
        </button>
      </div>
    );
  }
}
```
Functional Components
Functional components, on the other hand, are a more modern approach and have gained popularity with the introduction of React Hooks. These components are simply JavaScript functions that return JSX. They are often easier to read and write, especially for components that do not require complex logic or state management.
With Hooks like useState and useEffect, functional components can now manage state and side effects, which were previously possible only in class-based components.
Here is an example of a functional component using Hooks:
```jsx
function MyComponent() {
  const [count, setCount] = useState(0);

  useEffect(() => {
    // Code to run after the component is mounted
  }, []);

  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={() => setCount(count + 1)}>
        Increment
      </button>
    </div>
  );
}
```
Key Differences
- Syntax: Class-based components use class syntax and require the render() method, while functional components are plain functions that return JSX.
- State Management: Initially, only class-based components could manage state, but with Hooks, functional components can also handle state.
- Lifecycle Methods: Class-based components have built-in lifecycle methods, whereas functional components use Hooks like useEffect to achieve similar functionality.
- Readability and Simplicity: Functional components are generally considered more concise and easier to read, making them a preferred choice for many developers.
Life Cycle Events in React
Lifecycle events in React refer to specific points in a component's life when the component changes. These events help us manage tasks like data fetching, subscriptions, and cleaning up resources.
By leveraging these lifecycle events, developers can ensure that their components behave predictably and efficiently throughout their existence. For instance, when a component is first added to the DOM, developers might use these events to initiate data loading or establish connections.
Similarly, when a component is about to be removed, these events can be used to tidy up any ongoing processes or connections, ensuring that the application remains performant and free of unnecessary resource consumption.
Error Boundaries in React
Error Boundaries in React are an essential feature designed to enhance the stability and reliability of applications. They act as a safety net, preventing the entire application from crashing if an error occurs within a component.
This is particularly useful in large applications where a single component failure could otherwise lead to a poor user experience. Error Boundaries are implemented using class-based components.
They work by catching JavaScript errors anywhere in their child component tree, logging those errors, and displaying a fallback UI instead of the component tree that crashed. This ensures that users can continue interacting with other parts of the application without disruption.
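A minimal sketch of an Error Boundary (the fallback message and logging here are illustrative choices, not required by React):

```jsx
class ErrorBoundary extends React.Component {
  constructor(props) {
    super(props);
    this.state = { hasError: false };
  }

  // Called when a child throws during rendering: switch to the fallback UI.
  static getDerivedStateFromError() {
    return { hasError: true };
  }

  // Called with the error details, e.g. for logging to a reporting service.
  componentDidCatch(error, info) {
    console.error(error, info);
  }

  render() {
    if (this.state.hasError) {
      return <h2>Something went wrong.</h2>;
    }
    return this.props.children;
  }
}

// Usage: wrap any subtree that might throw.
// <ErrorBoundary><MyWidget /></ErrorBoundary>
```

Note that Error Boundaries catch errors during rendering and in lifecycle methods of children, but not errors inside event handlers or asynchronous code.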
Fragments in React
In React, each component is required to return a single parent element, which can contain multiple child elements within it. This requirement can sometimes lead to the need for unnecessary wrapper elements, which can clutter the DOM and make the code less clean.
To address this issue, React provides a feature called Fragments. Fragments allow developers to group multiple elements without adding extra nodes to the DOM. By using Fragments, we can return various child elements from a component without the need for an additional parent element, thereby keeping the DOM structure clean and efficient.
This is particularly useful when you want to maintain a simple and organized codebase while ensuring that your component structure remains logical and easy to manage.
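As a small sketch, Fragments are useful where an extra wrapper element would produce invalid HTML, such as table cells (the component name here is made up):

```jsx
function NameCells() {
  // <>...</> groups the two <td> elements without adding a wrapper node.
  // A plain <div> wrapper here would be invalid inside a <tr>.
  return (
    <>
      <td>Ada</td>
      <td>Lovelace</td>
    </>
  );
}

// Usage: <tr><NameCells /></tr>
```

The shorthand <>...</> is equivalent to <React.Fragment>...</React.Fragment>; the longer form is needed only when the fragment takes a key prop.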
Week 10 Day 1
Single Page Application in React
A Single Page Application (SPA) in React is a type of web application that loads a single HTML page and dynamically updates the content as the user interacts with the app.
Unlike traditional multi-page applications that require a full page reload for each new piece of content, SPAs offer a smoother and more seamless user experience by only loading the necessary resources and updating the page dynamically.
Routing in React
Routing in React is a crucial concept that allows developers to create a seamless navigation experience within a Single Page Application (SPA). By using routing, you can define multiple routes in your application, each corresponding to a different component or view. This enables users to navigate through various sections of the app without triggering a full page reload, thereby maintaining the fluidity and speed that SPAs are known for.
Steps to Setup Routes in React
Step 1. First, install the react-router-dom package in your project (npm i react-router-dom).
Step 2. Then, import the BrowserRouter, Routes, Route from the react-router-dom package
```javascript
import { BrowserRouter, Routes, Route } from 'react-router-dom'
```
Step 3. Next, define your Route, place it inside the Routes, and then wrap the Routes with BrowserRouter.
The Route will take 2 props: the first is the path where you want to set it up, and the second is the element you want to set up.
Example -
```javascript
import { BrowserRouter, Routes, Route } from 'react-router-dom'
import './App.css'

function App() {
  return (
    <>
      <BrowserRouter>
        <Routes>
          <Route path='/' element={<Allen />} />
          <Route path='/Class11' element={<Class11 />} />
          <Route path='/Class12' element={<Class12 />} />
        </Routes>
      </BrowserRouter>
    </>
  )

  function Allen() {
    return <div>Welcome to the allen home page</div>
  }

  function Class11() {
    return <div>This is class 11th Program</div>
  }

  function Class12() {
    return <div>This is class 12th Program</div>
  }
}

export default App
```
Link Tag and useNavigate Hook in React
When we need to redirect a user to a new page within a React application, we have two primary methods at our disposal. The first method involves using the Link tag, which is a component provided by the react-router-dom package.
By using the Link tag, we can create navigation links that allow users to move between different routes defined in our application without causing a full page reload. This approach is efficient and maintains the single-page application experience.
The second method for redirecting users is by utilizing the useNavigate hook, also part of the react-router-dom package. This hook provides a programmatic way to navigate between routes. It is particularly useful when you need to redirect users in response to certain actions, such as form submissions or button clicks, where a Link tag might not be appropriate.
The useNavigate hook returns a function that can be called with the desired path, allowing for dynamic and conditional navigation based on the application's state or user interactions.
Routing with the help of link tag
```javascript
import { BrowserRouter, Routes, Route, Link } from 'react-router-dom'
import './App.css'

function App() {
  return (
    <>
      <BrowserRouter>
        <div>
          <Link to="/">Allen |</Link>
          <Link to="/Class11">Class 11 |</Link>
          <Link to="/Class12">Class 12 |</Link>
        </div>
        <Routes>
          <Route path='/' element={<Allen />} />
          <Route path='/Class11' element={<Class11 />} />
          <Route path='/Class12' element={<Class12 />} />
        </Routes>
      </BrowserRouter>
    </>
  )
}
```
Routing with useNavigate Hook
In React, another effective way to handle routing is by using the useNavigate hook, which is part of the react-router-dom package. This hook is particularly valuable when you want to implement routing that responds to specific user actions, such as clicking a button or submitting a form. Unlike the Link component, which is used for static navigation, useNavigate allows for more dynamic and flexible routing.
The useNavigate hook returns a function that you can call with the path you want to navigate to. This makes it possible to redirect users based on various conditions or application states.
For instance, you might want to redirect a user to a different page after they successfully submit a form or complete a certain task. By using useNavigate, you can achieve this programmatically, ensuring a smooth and responsive user experience.
Here's a basic example of how you might use the useNavigate hook in a React component:
```jsx
import { useNavigate } from 'react-router-dom';

function MyComponent() {
  const navigate = useNavigate();

  const handleButtonClick = () => {
    // Perform some logic here
    navigate('/target-path');
  };

  return (
    <button onClick={handleButtonClick}>
      Go to Target Page
    </button>
  );
}
```
In this example, when the button is clicked, the handleButtonClick function is triggered, which then calls the navigate function with the desired path. This approach provides a seamless way to manage navigation in response to user interactions, enhancing the overall functionality of your application.
Setup Error Page Route in React
In React, setting up error pages for routes that are not available in our application is a crucial step in improving the overall user experience. When users navigate to a URL that doesn't match any of the defined routes, they can be greeted with a custom error page, often referred to as a "404 page."
This page informs users that the page they are looking for cannot be found, and it can also provide helpful navigation options to guide them back to the main sections of the app.
To implement this, you can define a route that matches all paths not covered by other routes. This is typically done using a wildcard route that catches any unmatched paths. Here’s a detailed example of how you can set up an error page route in a React application using react-router-dom:
```jsx
import React from 'react';
import { BrowserRouter, Routes, Route } from 'react-router-dom';

function HomePage() {
  return <h1>Welcome to the Home Page</h1>;
}

function AboutPage() {
  return <h1>About Us</h1>;
}

function ErrorPage() {
  return (
    <div>
      <h1>404 - Page Not Found</h1>
      <p>Sorry, the page you are looking for does not exist.</p>
      <a href="/">Go back to Home</a>
    </div>
  );
}

function App() {
  return (
    <BrowserRouter>
      <Routes>
        <Route path="/" element={<HomePage />} />
        <Route path="/about" element={<AboutPage />} />
        <Route path="*" element={<ErrorPage />} />
      </Routes>
    </BrowserRouter>
  );
}

export default App;
```
In this example, we have a simple React application with three components: HomePage, AboutPage, and ErrorPage. The ErrorPage component is designed to display a message indicating that the requested page could not be found. It also includes a link to redirect users back to the homepage.
The App component uses react-router-dom to define routes for the home and about pages. The wildcard route (path="*") is used to catch all unmatched URLs, directing users to the ErrorPage component.
This setup ensures that users have a clear understanding of what went wrong and provides them with an easy way to navigate back to familiar territory, thereby enhancing the overall usability and professionalism of your application.
Layout in React with Outlet
In React JS, the Outlet component plays a crucial role when working with nested routes, particularly in applications using react-router-dom. It acts as a placeholder that renders child routes within a parent route.
This is especially useful when you want to create a layout that remains consistent across different sections of your application while allowing specific content to change based on the current route.
For instance, consider a scenario where you have a main dashboard page with several sub-sections like "Profile," "Settings," and "Notifications." You can define a parent route for the dashboard and use the Outlet component to render the appropriate child component based on the user's navigation. This approach not only simplifies the routing logic but also ensures that the overall structure and layout of the dashboard remain intact, providing a seamless user experience.
```jsx
import { BrowserRouter, Routes, Route, Outlet } from 'react-router-dom';

function App() {
  return (
    <BrowserRouter>
      <Routes>
        <Route path='/' element={<Layout />}>
          <Route index element={<Allen />} />
          <Route path='/Class11' element={<Class11 />} />
          <Route path='/Class12' element={<Class12 />} />
        </Route>
      </Routes>
    </BrowserRouter>
  )
}

function Layout() {
  return (
    <>
      <div>
        <Header />
      </div>
      <div>
        <Outlet />
      </div>
      <div>
        <Footer />
      </div>
    </>
  )
}
```
useRef Hook in React
In React, useRef is a hook that provides a way to create a reference to a value or a DOM element that persists across renders but does not trigger a re-render when the value changes.
Key Characteristics of useRef:

- Persistent Across Renders: The value stored in useRef stays the same between component re-renders. This means the value of a ref doesn't reset when the component re-renders, unlike regular variables.
- No Re-Renders on Change: Changing the value of a ref (ref.current) does not make the component re-render. This is different from state (useState), which causes a re-render when it changes.
Example
```jsx
import { useRef } from 'react';

// Wrapped in a component here so the snippet is complete
function SignupForm() {
  const btnRef = useRef()
  return (
    <>
      Signup Form
      <input ref={btnRef} type="text" />
      <input type="text" />
      <button onClick={() => {
        btnRef.current.focus()
      }}>Submit</button>
    </>
  )
}
```
Week 10 Day 2
Prop Drilling

Prop drilling is the process of passing data from a parent component down through multiple layers of intermediate components via props, even though those intermediate components don't use the data themselves, just so it can reach a deeply nested component that does.
Rolling up State in React
Rolling up state refers to combining several related state variables into one cohesive state object. Instead of having multiple useState hooks for each individual piece of state, you use a single useState hook with an object that holds all the related state values. This can make your component's state easier to manage and understand.
Benefits of Rolling Up State
- Simplified State Management: By consolidating state variables, you reduce the number of useState hooks, making your component's logic more straightforward and easier to follow.
- Easier Updates: When you need to update multiple related state values, you can do so in one operation, reducing the risk of errors and ensuring consistency.
- Improved Readability: With a single state object, it becomes clearer how different pieces of state relate to each other, improving the readability of your code.
Example
```javascript
import { useState } from 'react'
import './App.css'

function App() {
  return (
    <>
      <LightBulb />
    </>
  )
}

function LightBulb() {
  const [bulb, setbulb] = useState(false)
  return (
    <>
      <BulbState bulb={bulb} />
      <ToggleButton setbulb={setbulb} bulb={bulb} />
    </>
  )
}

function BulbState({ bulb }) {
  return <>{bulb ? "Bulb is on" : "Bulb is off"}</>
}

function ToggleButton({ setbulb }) {
  return (
    <>
      <br />
      <button onClick={() => setbulb(currentState => !currentState)}>Toggle Button</button>
    </>
  )
}

export default App
```
Context API
Context API is a powerful tool in React that helps us manage state more effectively across our app, especially with deeply nested components.
It allows us to share values (like state or functions) directly with a component without manually passing props down through each level. By using the Context API, we can create a context object that holds the data or functions we want to share.
This context object can then be accessed by any component within the application that subscribes to it, allowing for a more efficient and cleaner way to manage the state.
Setting up Context API in React
Step 1. Import the useContext and createContext from react.
```javascript
import { useContext, createContext } from 'react'
```
Step 2. Define a variable and call the createContext() function.
```javascript
const BulbContext = createContext()
```
Step 3. Next, you have to wrap the component under the context provider to specify the value of its context.
```javascript
import { useState, useContext, createContext } from 'react'
import './App.css'

const BulbContext = createContext()

function App() {
  const [bulb, setbulb] = useState(false)
  return (
    <BulbContext.Provider>
      <LightBulb />
    </BulbContext.Provider>
  )
}
```
Step 4. Now, you must provide the value you want to pass on in the provider wrapper.
```javascript
import { useState, useContext, createContext } from 'react'
import './App.css'

const BulbContext = createContext()

function App() {
  const [bulb, setbulb] = useState(false)
  return (
    <BulbContext.Provider value={{ bulb: bulb, setbulb: setbulb }}>
      <LightBulb />
    </BulbContext.Provider>
  )
}
```
Step 5. To utilize the variable you have passed through the context provider, you need to destructure it and define a hook using useContext.
This hook requires the context you created as its argument. By doing so, you can easily access the context values within your component.
```javascript
function BulbState() {
  const { bulb } = useContext(BulbContext)
  return <>{bulb ? "Bulb is on" : "Bulb is off"}</>
}
```
Custom Hook
A custom hook in React is a special function that internally leverages one or more built-in React hooks. This allows you to encapsulate and reuse stateful logic across different components cleanly and efficiently.
By creating custom hooks, you can extract component logic into reusable functions, making your code more modular and easier to maintain.
Example
```javascript
import { useState } from "react";

function useCounter() {
  const [count, setCount] = useState(0)

  function increaseCount() {
    setCount(c => c + 1)
  }

  return { count: count, increaseCount: increaseCount }
}

function App() {
  const { count, increaseCount } = useCounter()
  return (
    <>
      <button onClick={increaseCount}>Counter {count}</button>
    </>
  )
}

export default App
```
useFetch Hook
The useFetch hook in React is a custom hook designed to simplify the process of making server requests and retrieving data. This hook is particularly useful when you need to perform asynchronous operations, such as fetching data from an API endpoint.
useFetch Hook Example
```javascript
import { useEffect, useState } from "react";

export function useFetch(url) {
  const [loading, setLoading] = useState(false);
  const [data, setData] = useState({})

  async function fetchData() {
    setLoading(true);
    const requestURL = await fetch(url)
    const response = await requestURL.json()
    setData(response)
    setLoading(false);
  }

  useEffect(() => {
    fetchData()
  }, [url])

  return { data, loading }
}
```
Using the useFetch Hook
```javascript
import React, { useState } from "react";
import { useFetch } from "./useFetch";

function App() {
  const [post, setPost] = useState(1)
  const { data, loading } = useFetch("https://jsonplaceholder.typicode.com/posts/" + post)
  return (
    <>
      <button onClick={() => setPost(1)}>Post 1</button>
      <button onClick={() => setPost(2)}>Post 2</button>
      <button onClick={() => setPost(3)}>Post 3</button>
      {loading ? <div>Loading...</div> : <pre>{JSON.stringify(data.title)}</pre>}
    </>
  )
}

export default App;
```
A variant of useFetch that also re-fetches the data every retryTimeout seconds:

```javascript
import { useEffect, useState } from "react"

export function useFetch(url, retryTimeout) {
  const [post, setPost] = useState({})
  const [loading, setLoading] = useState(false)

  async function getPost() {
    setLoading(true)
    let data = await fetch(url)
    let response = await data.json()
    setPost(response)
    setLoading(false)
  }

  useEffect(() => {
    getPost()
  }, [url])

  useEffect(() => {
    // re-fetch on an interval; clear it when the component unmounts
    let autoPostFetch = setInterval(() => {
      getPost()
    }, retryTimeout * 1000);
    return () => {
      clearInterval(autoPostFetch) // an interval must be cleared with clearInterval, not clearTimeout
    }
  }, [])

  return { post, loading }
}
```
usePrev Hook
The usePrev hook is a custom hook that uses the useRef hook to keep track of the previous value of a state. For example, if a state changes from 0 to 1 and then from 1 to 2, it stores the previous state value, which is 1, when the state updates from 1 to 2.
usePrev hook Example -
```javascript
import { useEffect, useRef } from "react";

export function usePrev(value) {
  const ref = useRef()

  // The effect runs after render, so ref.current is updated
  // only after the current value has already been returned.
  useEffect(() => {
    ref.current = value
  }, [value])

  // Returns the value from the previous render.
  return ref.current
}
```
Use Case of usePrev hook
```javascript
import React, { useState } from "react";
import { usePrev } from "./hooks/UsePrev";

function App() {
  const [count, setCount] = useState(0)
  const prev = usePrev(count)
  return (
    <>
      <p>Counter value {count}</p>
      <button onClick={() => { setCount((c) => c + 1) }}>Increase Value</button>
      <p>Previous value {prev}</p>
    </>
  )
}

export default App
```
useDebounce Hook
The useDebounce hook is a custom hook that delays the execution of a function until a certain amount of time has passed since it was last called. This is useful for optimizing performance by limiting the number of times a function runs, especially in response to user input or other frequent events.
Here's a simple example of how you might use the useDebounce hook:
```javascript
import { useRef } from 'react';

function useDebounce(originalfn) {
  const currentClock = useRef()

  const fn = () => {
    clearTimeout(currentClock.current)
    currentClock.current = setTimeout(originalfn, 200)
  }

  return fn
}

function DebounceHook() {
  function callFunction() {
    fetch("https://jsonplaceholder.typicode.com/posts/1/")
  }

  const debounceHook = useDebounce(callFunction)

  return (
    <>
      <input type="text" onChange={debounceHook} />
    </>
  );
}

export default DebounceHook;
```
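Outside React, the same idea can be expressed as a plain JavaScript helper. This is a minimal sketch; the `debounce` name and the 50 ms delay below are illustrative, not from the original notes:

```javascript
// Returns a wrapped version of fn that only runs after `delay` ms
// have passed without another call.
function debounce(fn, delay = 200) {
  let timer;
  return (...args) => {
    clearTimeout(timer);                          // cancel the pending call
    timer = setTimeout(() => fn(...args), delay); // schedule a new one
  };
}

// Example: three rapid calls collapse into a single execution.
let calls = 0;
const increment = debounce(() => { calls += 1; }, 50);
increment();
increment();
increment();
setTimeout(() => console.log(calls), 150); // logs 1 — only the last call ran
```

This is exactly what the hook above does, except the hook stores the timer in a ref so it survives re-renders.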
Recoil (State Management Library)
Recoil in React is a state management library that provides a way to manage global state with fine-grained control. It minimises unnecessary re-renders by re-rendering only the components that depend on the changed atom.
Atom in Recoil
Atoms in Recoil are units of state that can be read and written from any component. Components that subscribe to an atom will re-render whenever that atom's state changes.
How to use Recoil in our Project
Step 1. First, install the recoil npm package for your project.
```bash
npm i recoil
```
Step 2. Wrap the main component with <RecoilRoot>
```javascript
function App() {
  return (
    <RecoilRoot>
      <Counter />
    </RecoilRoot>
  )
}
```
Step 3. Create a counter atom. An atom takes two values: the first is a key that uniquely identifies the atom, and the second is its default value. You can define these atoms in a separate file and import them into your app or component.
```javascript
import { atom } from 'recoil'

export const counterAtom = atom({ key: "counter", default: 0 })
```
Step 4. Now, use the useRecoilValue(atomName) hook and pass the name of the atom you want to display in the app or component.
```javascript
import React from 'react'
import './App.css'
import { RecoilRoot, useRecoilValue } from 'recoil'
import { counterAtom } from './store/Counter'

function App() {
  return (
    <RecoilRoot>
      <Counter />
    </RecoilRoot>
  )
}

function Counter() {
  return (
    <>
      <CurrentCount />
      <Increase />
      <Decrease />
    </>
  )
}

function CurrentCount() {
  const count = useRecoilValue(counterAtom)
  return <div>{count}</div>
}
```
Step 5. Next, you need to use the useSetRecoilState(atomName) hook, which is a setter function to update the atom's value. This is how you can change the atom's value.
```javascript
import React from 'react'
import './App.css'
import { RecoilRoot, useRecoilValue, useSetRecoilState } from 'recoil'
import { counterAtom } from './store/Counter'

function App() {
  return (
    <RecoilRoot>
      <Counter />
    </RecoilRoot>
  )
}

function Counter() {
  return (
    <>
      <CurrentCount />
      <Increase />
      <Decrease />
    </>
  )
}

function CurrentCount() {
  const count = useRecoilValue(counterAtom)
  return <div>{count}</div>
}

function Increase() {
  const setCount = useSetRecoilState(counterAtom)

  function increase() {
    setCount(c => c + 1)
  }

  return <button onClick={increase}>Increase</button>
}

function Decrease() {
  const setCount = useSetRecoilState(counterAtom)

  function decrease() {
    setCount(c => c - 1)
  }

  return <button onClick={decrease}>Decrease</button>
}

export default App
```
Memo API in React
The Memo API in React is a powerful tool designed to optimize the performance of functional components. It allows developers to prevent unnecessary re-renders by memoizing the output of a component.
This means that React will only re-render the component if its props have changed, thus saving computational resources and improving the overall efficiency of the application.
When you use the React.memo function, you wrap your component with it, and React will remember the result of the last render. If the component receives the same props as before, React will skip rendering the component and reuse the last rendered output.
This is particularly useful in scenarios where components rely on expensive calculations or when dealing with large data lists.
To implement the Memo API, you simply import React and use React.memo to wrap your component. For example:
```jsx
import React from 'react';

const MyComponent = React.memo((props) => {
  // Component logic here
  return <div>{props.value}</div>;
});
```
In this example, MyComponent will only re-render if the value prop changes. This can significantly enhance the performance of your application, especially when dealing with complex UI structures or when components are nested deeply within the component tree.
Additionally, React.memo can accept a second argument, a custom comparison function, which allows you to define more complex logic for determining when a component should re-render. This function receives the previous and next props as arguments and should return true if the props are equal and false if they are not.
By leveraging the Memo API, developers can create more efficient, responsive, and performant React applications that provide a better user experience.
UI/UX for Developers
Typography
A typeface is the design of the letters. A font is a specific weight and style within that typeface, like Poppins Bold 16px. The typeface is the family, and a font is an individual member of that family.
TailwindCSS: A Utility-First CSS Framework
TailwindCSS is a popular utility-first CSS framework that provides developers with a set of predefined classes to build custom designs directly in their markup.
Unlike traditional CSS frameworks that offer a set of pre-designed components, TailwindCSS focuses on providing low-level utility classes that enable developers to create unique designs without having to write custom CSS.
Key Features of TailwindCSS
- Utility-First Approach: TailwindCSS is built around the concept of utility classes, which are single-purpose classes that apply specific styles. This approach allows developers to quickly style elements by combining these classes directly in the HTML.
- Highly Customizable: TailwindCSS offers extensive customization options. Developers can easily configure the framework to fit their design requirements by modifying the default theme, extending it with custom styles, or even replacing it entirely.
- Responsive Design: TailwindCSS includes responsive utility variants, making it straightforward to create responsive designs. Developers can apply different styles at various breakpoints by using responsive prefixes, ensuring that their designs look great on all devices.
- Built-in Dark Mode Support: TailwindCSS provides built-in support for dark mode, allowing developers to switch between light and dark themes easily. This feature is increasingly important as more users prefer dark mode for its aesthetics and potential eye-strain reduction.
- Community and Ecosystem: TailwindCSS has a vibrant community and a growing ecosystem of plugins and tools. Developers can leverage these resources to extend the framework's functionality and integrate it seamlessly into their projects.
Benefits of Using TailwindCSS
- Rapid Prototyping: With its utility-first approach, TailwindCSS allows developers to quickly prototype designs without writing custom CSS. This speed is invaluable during the early stages of development when iterating on design ideas.
- Consistent Design: By using predefined utility classes, developers can maintain a consistent design language across their projects. This consistency helps in creating cohesive and professional-looking interfaces.
- Reduced CSS Bloat: TailwindCSS encourages the use of utility classes over custom styles, which can lead to a significant reduction in CSS bloat. The JIT compiler further minimises the final CSS size by including only the styles used in the project.
- Improved Collaboration: TailwindCSS's class-based approach makes it easier for teams to collaborate on design and development. Designers and developers can work together more effectively, as the design language is clearly defined and easy to understand.
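As a quick illustration of the utility-first approach (an illustrative snippet, not from the original notes), a styled button can be built entirely from standard Tailwind utility classes, with no custom CSS:

```html
<!-- padding, colors, rounding, and a hover state, all via utility classes -->
<button class="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded">
  Sign up
</button>
```

Each class does one thing (bg-blue-500 sets the background, py-2 sets vertical padding, and so on), which is what "utility-first" means in practice.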
Setting Up TailwindCSS v4 in a React Project
Integrating TailwindCSS into a React project can significantly enhance your development workflow by providing a streamlined and efficient way to style your application. Here’s a step-by-step guide to setting up TailwindCSS in a React project:
Step 1. Create a New React Project: If you haven't already, start by creating a new React project. You can do this with Vite, which sets up a modern web development environment with minimal configuration. Run the following command in your terminal:

```bash
npm create vite@latest
```

Step 2. Install TailwindCSS: Once your React project is ready, install TailwindCSS and its Vite plugin using npm or yarn:

```bash
npm install tailwindcss @tailwindcss/vite
```

Step 3. Configure the Vite Plugin: After installing TailwindCSS, register the plugin in the vite.config.js file:

```javascript
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
import tailwindcss from '@tailwindcss/vite' // import TailwindCSS and add it to the configuration

export default defineConfig({
  plugins: [react(), tailwindcss()],
})
```

Step 4. Import Tailwind in your CSS: Add @import "tailwindcss"; to your main CSS file, such as App.css or index.css:

```css
@import "tailwindcss";
```

Step 5. Start Your Development Server: Now that TailwindCSS is set up, start the development server to see TailwindCSS in action:

```bash
npm run dev
```
Following these steps will help you effectively set up TailwindCSS in your React project, enabling you to take advantage of its powerful utility-first approach to styling. This setup will not only speed up the development process but also ensure a consistent and maintainable design system across your application.
Customisation in TailwindCSS
Manual Dark Mode and Light Mode in TailwindCSS
Creating a Transition in TailwindCSS
Creating a Transition Sidebar with TailwindCSS
Mobile First Approach
Week-14 TypeScript
What is Typescript?
TypeScript is a language developed and maintained by Microsoft. It is built on top of JavaScript and adds type-safety checks that help catch errors at compile time, similar to compiled languages like Java and C++.
TypeScript code can't run directly in your browser or even in Node.js. First, it must be compiled into JavaScript, which can then be used for deployment. When TypeScript compiles the code into JavaScript, it performs type checks similar to other languages like Java and C++. The compilation will stop if there is an error.
This process allows you to catch compilation errors and safeguard your code, preventing runtime errors when users are using the application.
TypeScript Compiler
Types of Languages
- Strongly typed languages: Languages with strict type-checking, such as C++, Java, and Rust. For example, if we initialise a variable to hold a number and then try to store a string in it, the language throws an error.
- Loosely typed languages: Languages without strict type-checking, such as JavaScript and Python. In the same scenario, these languages don't throw an error.
- Compiled languages
- Interpreted languages
- High-level languages: Languages that are human-readable and understandable, such as Java, C++, Python, and JavaScript.
- Low-level languages: Languages that are not human-readable, such as assembly language.
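A minimal TypeScript sketch of the strong-versus-loose distinction above — the commented-out line is rejected at compile time, whereas the equivalent JavaScript would only misbehave at runtime:

```typescript
let count: number = 1;

// count = "one"; // ✗ compile-time error: Type 'string' is not assignable to type 'number'

count = count + 1; // ✓ numbers only
console.log(count); // 2
```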
Compilation
Compilation is the process of checking whether the code is correct. If the code has a mistake, compilation fails and produces a compilation error. The compiler performs this process.
Interpreted Language
An interpreted language compiles and runs the code in one step, executing it line by line. If the code has a mistake, it throws a runtime error when that line is reached. Interpreted languages are faster to iterate with while writing code because compilation and execution happen together.
Initialise the typescript project
Step 1. Install TypeScript globally on your system:

```bash
npm i -g typescript
```
Step 2. Initialise the TypeScript project:

```bash
npx tsc --init
```
Types in TypeScript
- Number
- String
- Any
Callback function type in TypeScript
```typescript
function delayedCall(anotherFunction: () => void) {
  // code
}
```
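A sketch of how such a function might be implemented and used — the setTimeout body and the delay parameter are illustrative assumptions, not from the notes:

```typescript
// Accepts any function that takes no arguments and returns nothing,
// and invokes it after a delay.
function delayedCall(anotherFunction: () => void, delayMs: number = 1000): void {
  setTimeout(anotherFunction, delayMs);
}

// Usage: the arrow function satisfies the () => void type.
delayedCall(() => console.log("ran after the delay"), 100);
```

The type annotation `() => void` means "a function taking no arguments whose return value is ignored", which is exactly the shape a callback needs here.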
tsConfig File
The tsconfig.json file contains options you can set to configure the tsc compiler.
Some of the most common configuration options are
- target: configures which ECMAScript version the TypeScript code is compiled down to. This is important if you want your code to work in older browsers.
- rootDir: sets the root directory of the TypeScript source code, giving you a better, more manageable folder structure.
- outDir: sets the output directory where the compiled JavaScript code is emitted.
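A minimal tsconfig.json illustrating the three options above (the specific target and folder names are illustrative — tsconfig.json permits comments):

```json
{
  "compilerOptions": {
    "target": "es2016",   // ECMAScript version of the emitted JavaScript
    "rootDir": "./src",   // where the TypeScript sources live
    "outDir": "./dist"    // where the compiled .js files are written
  }
}
```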
Interface in TypeScript
In TypeScript, interfaces are a powerful way to define the structure of an object. They allow you to specify the types of properties that an object should have, making your code more robust and easier to understand. When you pass an interface as a type, you are essentially using it to enforce a specific shape for the objects you work with. This ensures that the objects conform to the expected structure, reducing errors and improving code quality.
Optional Parameter in TypeScript
```typescript
interface User {
  name: string,
  age: number,
  address: {
    // adding ? after city makes it optional
    city?: string,
    country: string
  }
}

let user: User = {
  name: "Aditya",
  age: 21,
  address: {
    // city: "Haryana",
    country: "India"
  }
}
```
Passing Interface as a Type
When you pass an interface as a type, you are essentially using it to enforce a specific shape for the objects you work with. This ensures that the objects conform to the expected structure, reducing errors and improving code quality.
To pass an interface as a type, you first define the interface with the desired properties and their types. For example, consider an interface named User:
```typescript
interface User {
  name: string;
  age: number;
  address: {
    city?: string; // The question mark indicates that this property is optional
    country: string;
  };
}
```
In this example, the User interface specifies that a user object must have a name of type string, an age of type number, and an address object. The address object must include a country of type string, and may optionally include a city of type string.
Once the interface is defined, you can use it to type-check objects. For instance, you can create a variable user and assign it an object that adheres to the User interface:
```typescript
let user: User = {
  name: "Aditya",
  age: 21,
  address: {
    country: "India"
  }
};
```
In this case, the user object meets the requirements of the User interface. It includes the mandatory name and age properties, as well as the address object with the required country property. The optional city property is omitted, which is perfectly acceptable due to its optional nature.
By using interfaces as types, you can ensure that your objects are consistently structured, making your TypeScript code more predictable and easier to maintain. This practice is especially useful in larger projects where the complexity of data structures can increase significantly.
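To show the type-checking in action, here is a small sketch that passes the User interface as a parameter type (the greet function is an illustrative addition, not from the notes):

```typescript
interface User {
  name: string;
  age: number;
  address: {
    city?: string;
    country: string;
  };
}

// The parameter is type-checked against the interface shape:
// passing an object missing `name` or `address.country` fails to compile.
function greet(user: User): string {
  return `Hello, ${user.name} from ${user.address.country}`;
}

const aditya: User = { name: "Aditya", age: 21, address: { country: "India" } };
console.log(greet(aditya)); // Hello, Aditya from India
```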
Difference Between Interfaces and Types
Interfaces can be implemented by classes; types cannot.
| Aspect | Interface | Type |
|---|---|---|
| Definition | Used to define a contract for classes, specifying what methods and properties should be implemented. | Used to define a type for variables, functions, or objects, specifying the shape of data. |
| Extensibility | Can be extended by other interfaces or classes using the extends keyword. | Can be extended using intersection types with &. |
| Declaration Merging | Supports declaration merging, allowing multiple declarations to be combined. | Does not support declaration merging. |
| Use Cases | Best for defining the structure of classes and ensuring they adhere to a specific contract. | Best for defining complex types, unions, and intersections. |
| Implementation | Can be implemented by classes to ensure they follow the defined structure. | Cannot be implemented by classes. |
| Union & Intersection | Cannot express union types. | Can express unions (with \|) and intersections (with &). |
| Syntax | Uses the interface keyword. | Uses the type keyword. |
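A short sketch contrasting the two sides of the table (all names here are illustrative):

```typescript
// Interfaces: extension via `extends`, implementable by classes.
interface Animal {
  name: string;
}
interface Dog extends Animal {
  bark(): string;
}

class Labrador implements Dog {
  constructor(public name: string) {}
  bark(): string { return `${this.name} says woof`; }
}

// Types: unions and intersections, not implementable by classes.
type ID = string | number;              // union — not possible with an interface
type Cat = Animal & { meow(): string }; // intersection via &

const id: ID = 42;
const rex = new Labrador("Rex");
console.log(rex.bark()); // Rex says woof
```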
Difference Between Abstract Class and Interfaces
Week 16
Websockets
WebSockets provide a persistent, full-duplex communication channel over a single TCP connection between a client (typically a web browser) and a backend server, allowing continuous two-way communication.
Unlike traditional HTTP requests, which require a new connection for each interaction, websockets maintain a single, persistent connection.
This allows for real-time data exchange, making it particularly useful for applications that require instant updates, such as chat applications, live sports scores, or financial trading platforms.
The WebSocket protocol begins its life as an HTTP handshake, which is then upgraded to the WebSocket protocol, enabling this efficient, ongoing data transfer.
This setup not only reduces the latency associated with starting new connections but also decreases the overhead of HTTP headers, resulting in a more efficient and responsive user experience.
Persistent Connection
A persistent connection, also known as a long-lived connection, is a communication link that stays open for a long time, enabling ongoing data exchange between a client and a server. This connection is important when real-time data transfer is needed, like in online gaming, live video streaming, or instant messaging apps.
3-Way Handshake
The 3-way handshake is a fundamental process used to establish a reliable connection between a client and a server over a network. This process is crucial for ensuring that both parties are ready to communicate and that the data can be transferred reliably. The handshake involves three distinct steps:
- SYN (Synchronise): The client initiates the connection by sending a SYN packet to the server. This packet contains an initial sequence number, which is used to synchronise the sequence numbers between the client and the server. The SYN packet essentially acts as a request to establish a connection.
- SYN-ACK (Synchronise-Acknowledge): Upon receiving the SYN packet, the server responds with a SYN-ACK packet. This packet serves two purposes: it acknowledges the receipt of the client's SYN packet by including an acknowledgement number, and it also sends its own SYN packet with a sequence number to the client. This step ensures that the server is ready to establish a connection and synchronise sequence numbers.
- ACK (Acknowledge): Finally, the client sends an ACK packet back to the server. This packet acknowledges the receipt of the server's SYN-ACK packet. Once the server receives this ACK packet, the connection is fully established, and data can begin to flow between the client and the server.
Week-17
PostgreSQL and SQL
What are NoSQL databases
NoSQL databases don't have a fixed structure. This means you can store any type of data without limits. MongoDB is a well-known example of a NoSQL database.
Note: When we define a schema with Mongoose, it's done at the Node.js level, and having a schema is helpful.
Advantages of NoSQL Databases
- NoSQL databases are great for handling large volumes of data and complex structures.
- They scale horizontally, allowing you to add more servers as data grows, which is easier than vertical scaling in traditional databases.
- They are schema-less, so you don't need a predefined structure, making it easy to change the data model without complex migrations or downtime.
- NoSQL databases handle complex data types well, such as unstructured or semi-structured data like JSON or XML.
- They are ideal for applications with diverse data formats or those that need rapid development cycles.
- Their adaptability and ease of use make them a popular choice for modern applications that require high performance and flexibility.
Why Not NoSQL Databases
- Data Inconsistency: One of the main challenges with NoSQL databases is the potential for data inconsistency. Since they do not enforce a strict schema, there is a risk that different parts of the application may interpret the data differently. This can lead to situations where the data is not uniformly updated or maintained, causing discrepancies.
- Runtime Errors: The flexibility of NoSQL databases can sometimes lead to runtime errors. Without a predefined schema to validate the data before it is stored, errors may only surface during application execution. This can make debugging more difficult, as issues may not be detected until the application is running in a production environment.
- Excessive Flexibility: While flexibility is often seen as an advantage, it can be a drawback for applications that require strict data integrity and consistency. In scenarios where data relationships and constraints are crucial, the lack of enforced structure in NoSQL databases can lead to challenges in maintaining data accuracy and reliability. For applications that demand a high level of strictness, a more structured database approach may be necessary to ensure data quality.
When to Use?
NoSQL databases are a good fit when we want to do rapid development, such as at hackathons or for early prototypes, where a flexible schema saves upfront design time.
Types of Databases
SQL Databases
SQL databases are structured databases that use a fixed schema to define data organisation. They are ideal for applications requiring complex queries and transactions. SQL databases ensure data integrity and consistency, making them suitable for applications where data accuracy is crucial.
Vector Databases
Vector databases, such as Pinecone, are designed to handle high-dimensional data, which is often used in machine learning and AI applications. They efficiently store and retrieve vector embeddings, enabling fast similarity searches and recommendations. These databases are optimised for performance and scalability in handling large datasets.
NoSQL Databases
NoSQL databases are flexible databases that do not rely on a fixed schema, allowing for the storage of unstructured and semi-structured data. They are highly scalable and can easily adapt to changes in data models. NoSQL databases are well-suited for applications with large volumes of diverse data, such as social media platforms and real-time analytics.
-
Store data without a fixed schema for speed and efficiency.
-
Examples - MongoDB
Graph Databases
Graph databases, like Neo4j, are designed to represent and store data in a graph format, with nodes and edges. They excel at managing relationships between data points, making them ideal for applications like social networks, recommendation engines, and fraud detection. Graph databases provide efficient querying of complex relationships and patterns within data.
-
Data is stored as a graph, which is especially useful for storing relationships (like in social networks).
-
Examples - Neo4j
SQL Databases
SQL databases use structured query language (SQL) to manage data. They organise data in rows and columns. Many full-stack applications use SQL databases. Examples include MySQL and PostgreSQL.
-
Stores data in rows
-
Most full-stack applications use this
-
Examples - MySQL, Postgres
Why Not NoSQL
You might have used MongoDB in your projects. Its schemaless properties make it an excellent choice for quickly bootstrapping a project. This flexibility allows developers to start building applications without the need to define a rigid schema upfront, which can significantly speed up initial development.
However, this lack of structure can lead to challenges as your application grows and becomes more complex. Without a fixed schema, it becomes very easy for data to become corrupted or inconsistent, as different parts of the application might write data in varying formats or with unexpected fields.
What Does Schemaless Mean?
In a schemaless database, different rows (or documents) can have different schemas (keys/types). This means that each entry in the database can have its own unique structure, with different fields and data types.
While this offers great flexibility, it also requires careful management and validation to ensure data integrity as the application scales. Without proper oversight, data structure diversity can lead to data consistency and reliability issues, making it harder to maintain and query the database effectively over time.
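As a small illustration of what "different schemas per document" means in practice (the documents and field names here are hypothetical), reading code has to validate each record itself, because the database will not do it for you:

```typescript
// Two documents from the same hypothetical "users" collection.
// Nothing stops them from disagreeing on field names or types.
const docA: Record<string, unknown> = { name: "Asha", age: 25 };
const docB: Record<string, unknown> = { fullName: "Ravi", age: "31" }; // different key, age is a string

// Reading code must defend itself: check the type before trusting the value.
function readAge(doc: Record<string, unknown>): number | null {
  return typeof doc.age === "number" ? doc.age : null;
}
```

Here `readAge(docA)` returns a usable number, while `readAge(docB)` returns `null` because `age` was stored as a string — exactly the kind of inconsistency a schema would have prevented.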

Problems?
-
Can make the database inconsistent
-
Can cause errors during runtime
-
Is too flexible for apps that need strict rules
Upsides?
-
Can develop quickly
-
Can easily modify the schema
Creating a PostgreSQL Database
You can start a PostgreSQL database in several ways:
-
Using neondb
-
Using Docker locally
-
Using Docker on Windows
The connection string is similar to the one used in Mongoose.
Connection String
Using a Library to Connect and Store Data
- psql
psql is a powerful terminal tool for working with PostgreSQL (or TimescaleDB) databases. It allows you to execute SQL queries, create tables, add data, and run complex queries. It also supports scripting and automation, making it useful for database administrators and developers. psql offers various options to customize your database interaction.
Connecting to Your Database
psql is included with PostgreSQL and provides a robust command-line interface for database management. However, for this tutorial, we'll focus on connecting to the database directly from Node.js, allowing us to manage database tasks using JavaScript within our app. This approach keeps everything in the same programming environment, simplifying app maintenance and growth.
Example command to connect using psql:
```bash
psql -h p-broken-frost-69135494.us-east-2.aws.neon.tech -d database1 -U 100xdevs
```
- pg
pg is a Node.js library that allows you to interact with a PostgreSQL database in your backend application, similar to mongoose. We will install this library in our app later.
Connecting PostgreSQL and Storing Data in an SQL Table
Step 1: Visit neon.tech and set up your database.
Step 2: Create a SQL table and copy the connection string.
Creating a Table and Defining Its Schema
In SQL, a single database can have multiple tables, similar to collections in a MongoDB database. With Postgres, the next step is to define the schema for your tables.
SQL (Structured Query Language) is used to specify how data is stored in the database. To create a table, use the following command:
```sql
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(50) UNIQUE NOT NULL,
    email VARCHAR(255) UNIQUE NOT NULL,
    password VARCHAR(255) NOT NULL,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);
```
Decoding the SQL Statement:
-
CREATE TABLE users: Initiates the creation of a new table named users.
-
id SERIAL PRIMARY KEY:
-
id: The first column, uniquely identifying each row (user), similar to _id in MongoDB.
-
SERIAL: A PostgreSQL data type that auto-increments for each new row, ensuring unique ids.
-
PRIMARY KEY: Ensures the id column is the main identifier, with unique, non-empty values.
-
-
email VARCHAR(255) UNIQUE NOT NULL:
-
email: The second column, storing the user's email address.
-
VARCHAR(255): A text data type allowing up to 255 characters.
-
UNIQUE: Ensures all emails are distinct.
-
NOT NULL: Prevents empty values; every user must have an email.
-
-
password VARCHAR(255) NOT NULL: Similar to the email column, but can be non-unique.
-
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP:
-
created_at: Records when the user was created.
-
TIMESTAMP WITH TIME ZONE: Stores date, time, and time zone for accurate tracking.
-
DEFAULT CURRENT_TIMESTAMP: Automatically sets the current date and time when a row is added.
-
Step 3: Install pg using the command:
```bash
npm i pg
```
Step 4: Create an Express server and import the Client method from the pg package.
Step 5: Connect the database using the connection string.
```javascript
import { Client } from "pg";
import express from "express";

const app = express();
const port = 3000;

const pgClient = new Client("yourConnectionStringURL");
pgClient.connect();

app.use(express.json());

app.post("/signup", async (req, res) => {
  const { username, email, password } = req.body;
  try {
    // Parameterised SQL query to insert data into the table, preventing SQL injection
    const insertQuery = `INSERT INTO users (username, email, password) VALUES ($1, $2, $3)`;
    const queryResponse = await pgClient.query(insertQuery, [username, email, password]);
    console.log(queryResponse);
    res.json({ message: "Signed up" });
  } catch (error) {
    console.log(error);
    res.status(500).json({ message: "Error while signing up" });
  }
});

app.listen(port);
```
This setup allows you to manage your PostgreSQL database using Node.js, ensuring efficient data handling and integration with your application logic.
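The `$1, $2, $3` placeholders in the insert query above are what prevent SQL injection. A rough sketch of why (the attack string below is invented for illustration): with naive string interpolation, user input becomes part of the SQL text itself, while a parameterised query keeps the SQL fixed and sends the values separately.

```typescript
// Naive approach: user input is spliced directly into the SQL text.
function unsafeQuery(username: string): string {
  return `INSERT INTO users (username) VALUES ('${username}')`;
}

// A malicious value closes the string literal and smuggles in extra SQL.
const attack = "x'); DROP TABLE users; --";
const injected = unsafeQuery(attack);
// `injected` now contains a second, destructive statement.

// Parameterised approach: the SQL text never changes; the driver (pg, here)
// transmits the values separately, so they are treated purely as data.
const text = "INSERT INTO users (username) VALUES ($1)";
const values = [attack];
```

Whatever the user types ends up in `values`, never in `text`, so the shape of the query cannot be altered.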
Primary Key and Foreign Key in SQL
In SQL databases, primary keys and foreign keys are essential for establishing relationships between tables and ensuring data integrity.
Primary Key: A primary key is a unique identifier for each record in a table. It ensures that no two rows have the same value in the primary key column(s). A primary key must contain unique values and cannot contain NULLs.
Syntax for Primary Key:
```sql
CREATE TABLE table_name (
    column1 datatype PRIMARY KEY,
    column2 datatype,
    ...
);
```
Foreign Key: A foreign key is a column or a set of columns in one table that refers to the primary key in another table. It establishes a relationship between the two tables, ensuring that the value in the foreign key column matches a value in the referenced primary key column.
Syntax for Foreign Key:
```sql
CREATE TABLE table_name (
    column1 datatype,
    column2 datatype,
    ...
    FOREIGN KEY (column_name) REFERENCES other_table_name(primary_key_column)
);
```
Connecting Primary Key and Foreign Key: To connect a primary key and a foreign key, you define the foreign key in the child table, which references the primary key in the parent table. This relationship enforces referential integrity, ensuring that the foreign key value always corresponds to an existing primary key value in the parent table.
Example:
```sql
CREATE TABLE departments (
    department_id INT PRIMARY KEY,
    department_name VARCHAR(100)
);

CREATE TABLE employees (
    employee_id INT PRIMARY KEY,
    employee_name VARCHAR(100),
    department_id INT,
    FOREIGN KEY (department_id) REFERENCES departments(department_id)
);
```
In this example, department_id is the primary key in the departments table and a foreign key in the employees table, linking the two tables.
Relationships in SQL
In SQL, relationships between tables are established using keys. The most common types of relationships are:
-
One-to-One: Each row in Table A is linked to one and only one row in Table B. This is often implemented using a foreign key in one table that references the primary key of another table.
```sql
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(255)
);

CREATE TABLE profiles (
    id SERIAL PRIMARY KEY,
    user_id INT REFERENCES users(id),
    bio TEXT
);
```
-
One-to-Many: A row in Table A can be linked to multiple rows in Table B. This is implemented by having a foreign key in Table B that references the primary key of Table A.
```sql
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(255)
);

CREATE TABLE posts (
    id SERIAL PRIMARY KEY,
    user_id INT REFERENCES users(id),
    content TEXT
);
```
-
Many-to-Many: Rows in Table A can be linked to multiple rows in Table B and vice versa. This is typically implemented using a junction table.
```sql
CREATE TABLE students (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255)
);

CREATE TABLE courses (
    id SERIAL PRIMARY KEY,
    title VARCHAR(255)
);

CREATE TABLE enrollments (
    student_id INT REFERENCES students(id),
    course_id INT REFERENCES courses(id),
    PRIMARY KEY (student_id, course_id)
);
```
Transactions in SQL
A transaction in SQL is a sequence of operations performed as a single logical unit of work. Transactions ensure data integrity and consistency. They follow the ACID properties:
-
Atomicity: Ensures that all operations within a transaction are completed; if not, the transaction is aborted.
-
Consistency: Ensures that a transaction brings the database from one valid state to another.
-
Isolation: Ensures that transactions are executed in isolation from one another.
-
Durability: Ensures that once a transaction is committed, it remains so, even in the event of a system failure.
Example of a transaction:
```sql
BEGIN;
INSERT INTO accounts (user_id, balance) VALUES (1, 1000);
UPDATE accounts SET balance = balance - 100 WHERE user_id = 1;
COMMIT;
```
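The BEGIN … COMMIT pair makes the insert and the update an all-or-nothing unit. A toy in-memory sketch of that atomicity idea (not a real database — the "rollback" here is just a snapshot copy):

```typescript
type Accounts = Map<number, number>; // user_id -> balance

// Apply every operation, or leave the state untouched if any of them throws.
function runTransaction(accounts: Accounts, ops: ((a: Accounts) => void)[]): boolean {
  // Naive rollback mechanism: copy the state before we start.
  const snapshot = new Map<number, number>();
  accounts.forEach((balance, id) => snapshot.set(id, balance));
  try {
    for (const op of ops) op(accounts);
    return true; // COMMIT: all operations succeeded
  } catch {
    // ROLLBACK: restore the pre-transaction state.
    accounts.clear();
    snapshot.forEach((balance, id) => accounts.set(id, balance));
    return false;
  }
}
```

If the debit succeeds but the credit fails, the debit is undone too — the balance is never left in a half-updated state.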
Types of Joins
-
INNER JOIN: Returns records that have matching values in both tables.
```sql
SELECT users.username, posts.content
FROM users
INNER JOIN posts ON users.id = posts.user_id;
```
-
LEFT JOIN (or LEFT OUTER JOIN): Returns all records from the left table and the matched records from the right table. If no match, NULL values are returned for columns from the right table.
```sql
SELECT users.username, posts.content
FROM users
LEFT JOIN posts ON users.id = posts.user_id;
```
-
RIGHT JOIN (or RIGHT OUTER JOIN): Returns all records from the right table and the matched records from the left table. If no match, NULL values are returned for columns from the left table.
```sql
SELECT users.username, posts.content
FROM users
RIGHT JOIN posts ON users.id = posts.user_id;
```
-
FULL JOIN (or FULL OUTER JOIN): Returns all records when there is a match in either left or right table records. If there is no match, NULL values are returned for columns from the table without a match.
```sql
SELECT users.username, posts.content
FROM users
FULL JOIN posts ON users.id = posts.user_id;
```
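To make the join semantics concrete, here is a toy sketch of INNER JOIN and LEFT JOIN implemented over plain arrays (the sample data is invented; a real database does this far more efficiently):

```typescript
type User = { id: number; username: string };
type Post = { user_id: number; content: string };

const users: User[] = [{ id: 1, username: "asha" }, { id: 2, username: "ravi" }];
const posts: Post[] = [{ user_id: 1, content: "hello" }];

// INNER JOIN: only rows where a user has at least one matching post.
function innerJoin(us: User[], ps: Post[]) {
  const rows: { username: string; content: string }[] = [];
  for (const u of us) {
    for (const p of ps) {
      if (p.user_id === u.id) rows.push({ username: u.username, content: p.content });
    }
  }
  return rows;
}

// LEFT JOIN: every user appears; content is null when there is no matching post.
function leftJoin(us: User[], ps: Post[]) {
  const rows: { username: string; content: string | null }[] = [];
  for (const u of us) {
    const matches = ps.filter((p) => p.user_id === u.id);
    if (matches.length === 0) rows.push({ username: u.username, content: null });
    else for (const p of matches) rows.push({ username: u.username, content: p.content });
  }
  return rows;
}
```

With this data, the inner join returns only asha's row, while the left join also returns ravi with a `null` content — mirroring the NULLs SQL produces for unmatched rows.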
These concepts are fundamental to understanding how data is organised and manipulated in relational databases using SQL.
Week 18
Introduction to ORMs
What are ORMs?
Official Definition: ORM stands for Object-Relational Mapping, a programming technique used in software development to convert data between incompatible type systems in object-oriented programming languages. This technique creates a "virtual object database" that can be used from within the programming language.
Simplified Definition: ORMs let you easily interact with your database without worrying too much about the underlying syntax, such as SQL.
Why Use ORMs?
-
Simpler Syntax: ORMs convert objects to SQL queries under the hood, making database interactions more intuitive.
-
Database Abstraction: They provide a unified API, allowing you to switch databases without changing your code.
-
Type Safety and Auto-Completion: ORMs offer type safety and auto-completion features, enhancing development efficiency.
-
Automatic Migrations: ORMs handle database migrations automatically, simplifying schema evolution.
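To illustrate the "simpler syntax" point: under the hood, an ORM compiles an object-style call into a parameterised SQL statement. A very simplified sketch of that translation (this is not Prisma's actual output, just the general idea):

```typescript
// Turn an ORM-style data object into a parameterised INSERT statement.
function buildInsert(table: string, data: Record<string, string | number>) {
  const columns = Object.keys(data);
  const placeholders = columns.map((_, i) => `$${i + 1}`);
  return {
    // Fixed SQL text with $1, $2, ... placeholders.
    text: `INSERT INTO ${table} (${columns.join(", ")}) VALUES (${placeholders.join(", ")})`,
    // The actual values, passed separately to the driver.
    values: columns.map((c) => data[c]),
  };
}

const query = buildInsert("users", { username: "asha", email: "asha@example.com" });
// query.text  -> INSERT INTO users (username, email) VALUES ($1, $2)
// query.values -> ["asha", "asha@example.com"]
```

You write the object on the left; the ORM derives the columns, placeholders, and value list for you.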
What is Prisma?
Prisma is a modern ORM that offers several advantages:
-
Data Model: Define your schema in a single file, specifying tables, fields, and relationships.
-
Automated Migrations: Prisma generates and runs database migrations based on changes to the schema.
-
Type Safety: It generates a type-safe database client.
-
Auto-Completion: Provides auto-completion features for efficient coding.
Setting Up Prisma in a New Application
Step 1: Initialise a Node.js Project
-
Create a new Node.js project:
```bash
npm init -y
```
-
Add necessary dependencies:
```bash
npm install prisma typescript ts-node @types/node --save-dev
```
-
Initialise TypeScript:
```bash
npx tsc --init
```
-
Change rootDir to src
-
Change outDir to dist
-
Step 2: Initialise a Prisma Project
-
Initialise Prisma:
```bash
npx prisma init
```
Step 3: Select Your Database
Prisma supports multiple databases like MySQL, Postgres, and MongoDB. Update prisma/schema.prisma to set up your chosen database and replace the database URL with your test URL.
Step 4: Define Your Data Model
In the schema.prisma file, define the shape of your data. For example, a User table might look like this:
@id is used to define the primary key, and @default(autoincrement()) automatically increments the value for each new row.
```prisma
model User {
  id        Int    @id @default(autoincrement())
  username  String @unique
  password  String
  firstName String
  lastName  String
}
```
Step 5: Generate the Prisma Client
Generate the client to use in your Node.js app:
```bash
npx prisma generate
```
Step 6: Create Your First Application
Insert Data
Write a function to insert data into the Users table:
```typescript
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function insertUser(email: string, password: string, city: string, age: number) {
  await prisma.users.create({
    data: {
      email,
      age,
      city,
      password,
    },
  });
}
```
Update Data
Write a function to update the data in the Users table:
```typescript
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function updateUser() {
  await prisma.users.update({
    where: { email: "adityaguptapro@gmail.com" },
    data: { email: "adityakumargupta@gmail.com" },
  });
}
```
Fetch User Details
Write a function to fetch user details by email:
```typescript
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function getUser(email: string) {
  // email is marked @unique in the schema, so findUnique can look it up directly
  const user = await prisma.users.findUnique({
    where: { email },
  });
  console.log(user);
}
```
Step 7: Define Relationships
Prisma allows you to define relationships between tables, such as One-to-One, One-to-Many, and Many-to-Many. For example, in a TODO app, you might have a One-to-Many relationship.
```prisma
generator client {
  provider = "prisma-client-js"
  output   = "../src/generated/prisma"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model Users {
  id       Int    @id @default(autoincrement())
  email    String @unique
  password String
  age      Int
  city     String
  todo     Todo[]
}

model Todo {
  id          Int      @id @default(autoincrement())
  title       String
  description String
  done        Boolean
  userId      Int
  time        DateTime
  user        Users    @relation(fields: [userId], references: [id])
}
```
Update the Schema
Update your schema to reflect these relationships and run migrations:
```bash
npx prisma migrate dev --name relationship
npx prisma generate
```
Week 21
What is a Monorepo?
-
Monorepo stands for "mono-repository."
-
It's a single repository that holds the code for multiple projects.
-
Imagine having all your apps, libraries, and tools in one big folder!
Example:
-
A company has a web app, a mobile app, and shared libraries.
-
Instead of separate repositories, they keep everything in one monorepo.
Advantages and Disadvantages of Monorepo
| Advantages | Disadvantages |
|---|---|
| Easier code sharing and reuse | Can become large and complex |
| Simplified dependency management | Longer build times |
| Consistent tooling and configuration | Requires more robust CI/CD pipelines |
| Easier refactoring across projects | Potential for merge conflicts |
Common Interview Questions and Answers
-
What is a monorepo?
- A single repository containing multiple projects.
-
Why use a monorepo?
- For easier code sharing and consistent tooling.
-
What is Turborepo?
- A tool to manage monorepos efficiently.
-
How does Turborepo improve monorepos?
- By optimising build times and simplifying workflows.
What is Turborepo and Why It's Useful
-
Turborepo is a high-performance build system for JavaScript and TypeScript monorepos.
-
It speeds up builds by caching and running tasks in parallel.
-
Useful for large teams and projects to maintain efficiency.
Run the command to set up your turbo repo project
-
Install Turborepo:
```bash
npx create-turbo@latest
```
The Turborepo folder structure: the apps folder contains multiple apps that you can add or remove.
-

All the UI components of the app are created inside the ui directory, which is inside the packages folder.
-

To import UI components from the ui directory into all your apps:
-
We have to use the import ComponentName from "@repo/ui/componentname" syntax to import any UI component from the ui package.
```tsx
"use client"
import TextInput from "@repo/ui/TextInput"
import Button from "@repo/ui/button"
import { useRouter } from "next/navigation"
import { useState } from "react"

export default function Home() {
  const [roomId, setRoomId] = useState("")
  const router = useRouter()

  function handleChange(e: any) {
    setRoomId(e.target.value)
  }

  function handleSubmit() {
    if (roomId.trim() === "") {
      alert("Please Enter Room id")
      return
    }
    router.push(`/chat/room/${roomId}`)
  }

  return (
    <div style={{ display: "flex", justifyContent: "center", alignItems: "center", width: "99vw", height: "100vh", background: "black", color: "white" }}>
      <div style={{ display: "flex", flexDirection: "column", gap: "10px" }}>
        <TextInput onChange={handleChange} placeholder="Enter Room Id" />
        <Button onClick={handleSubmit}>Join Meeting</Button>
      </div>
    </div>
  )
}
```
Adding the Common ts-Config file for all the backends
Step 1. Create a common JSON file inside the typescript-config package in the packages folder.
Step 2. Copy your typescript configuration.

Step 3. Now, whenever you use the backend, you don't need to add the full TypeScript configuration. You can simply extend it and provide the path to your config file that you created in the typescript-config file.
```json
{
  "extends": "@repo/typescript-config/backend.json"
}
```

Step 4. Also specify the rootDir and outDir compiler options in each backend app's tsconfig.json file.

When working with Turborepo, remember that changes in environment files for Node.js applications might not trigger a rebuild by default. To ensure updates are reflected, configure your tasks to include these files as inputs.
About the Turbo.json File
```json
{
  "$schema": "https://turborepo.com/schema.json",
  "ui": "tui",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "inputs": ["$TURBO_DEFAULT$", ".env*"],
      "outputs": [".next/**", "!.next/cache/**"]
    },
    "lint": {
      "dependsOn": ["^lint"]
    },
    "check-types": {
      "dependsOn": ["^check-types"]
    },
    "dev": {
      "cache": false,
      "persistent": true
    }
  }
}
```
The provided JSON snippet is a configuration file for Turborepo, which is used to manage tasks and dependencies in a monorepo setup. Here's a breakdown of each line and its meaning:
-
"$schema": "https://turborepo.com/schema.json": This line specifies the schema for the JSON file, which helps validate the configuration's structure and content.
-
"ui": "tui": This line sets the user interface for Turborepo to "tui" (terminal user interface).
-
"tasks": This section defines various tasks that Turborepo can execute, such as build, lint, check-types, and dev.
-
"build":
-
"dependsOn": ["^build"]: This indicates that the build task depends on the build tasks of all dependencies. The caret (^) symbol is used to denote dependency tasks.
-
"inputs": ["$TURBO_DEFAULT$", ".env*"]: Specifies the inputs for the build task. $TURBO_DEFAULT$ is a placeholder for default inputs, and .env* includes any environment files.
-
"outputs": [".next/**", "!.next/cache/**"]: Defines the outputs of the build task. It includes everything in the .next directory except for the cache.
-
-
"lint": { "dependsOn": ["^lint"] }: Similar to the build task, this indicates that the lint task depends on the lint tasks of all dependencies.
-
"check-types": { "dependsOn": ["^check-types"] }: This indicates that the check-types task depends on the check-types tasks of all dependencies.
-
"dev":
-
"cache": false: This setting disables caching for the dev task, meaning it will run fresh each time.
-
"persistent": true: This indicates that the dev task should run persistently, likely for development purposes where continuous watching is needed.
-
-
To configure Turborepo so that changes in the server folder or independent files trigger a rebuild of the dist directory, you can modify the inputs section of the relevant task (e.g., build) to include the server folder or specific files. For example:
```json
"inputs": ["$TURBO_DEFAULT$", ".env*", "server/**", "independent-file.js"]
```
This configuration will ensure that any changes in the server directory or independent-file.js will trigger the build process, updating the dist directory accordingly.
How to Configure Backend Rebuilds to Respond to Changes and Manage Caching
When we run the npm run build command, all the packages are built and cached. If we make changes to the Node.js application code and run the build command again, it uses the cached version instead of building a fresh app. To force a fresh build of the backend, we need to modify the configuration.
Step 1. Create a turbo.json file inside the backend application.

Step 2. Then extend the code and specify the output to also include the dist folder.
```json
{
  "extends": ["//"],
  "tasks": {
    "build": {
      "outputs": ["dist/**"]
    }
  }
}
```
In this case, when we run the npm run build command globally, Turborepo is forced to build a fresh backend app and not use the cached version if the backend files change.
If we want to include any custom files in the fresh build, we can specify them globally.
```json
"build": {
  "dependsOn": ["^build"],
  "inputs": ["$TURBO_DEFAULT$", ".env*", "server/**", "independent-file.js"],
  "outputs": ["dist/**"]
}
```
Week 21 Offline
Client Side Rendering
Client-side rendering (CSR) is a modern technique used in web development where the rendering of a webpage is performed in the browser using JavaScript. Instead of the server sending a fully rendered HTML page to the client, the server sends a minimal HTML page with a JavaScript file that takes over the rendering process on the client side.
A good example of CSR is React, a popular JavaScript library for building user interfaces. In a React project, the initial HTML file sent from the server is often empty or contains minimal content. The JavaScript code, once executed in the browser, dynamically generates and populates the content on the page.
Here's a simple demonstration of setting up a React project using Vite, a build tool that provides a fast development environment:
-
Initialize a React Project:
```bash
npm create vite@latest
```
-
Add Dependencies:
```bash
npm i
```
-
Build the Project:
```bash
npm run build
```
-
Serve the Project:
```bash
cd dist/
serve
```
When you open the network tab in your browser's developer tools, you'll notice that the initial HTML file doesn't have any content. This is because the JavaScript runs and actually populates/renders the contents on the page.
React (or CSR) simplifies the development process by allowing developers to write components, which JavaScript then renders to the DOM. However, CSR has some downsides:
-
Not SEO Optimised: Since the content is rendered on the client side, search engines may have difficulty indexing the page content.
-
Flash of Unstyled Content (FOUC): Users may see a flash before the page fully renders, as the JavaScript needs to load and execute.
-
Waterfalling Problem: The loading of resources can be sequential, leading to delays in rendering.
In comparison to other rendering techniques like Server-Side Rendering (SSR) and Static Site Generation (SSG), CSR offers a unique set of advantages and challenges. Understanding these differences is crucial for choosing the right approach for your web application.
Server Side Rendering
Server-side rendering (SSR) occurs when the server converts JavaScript components into HTML before sending them to the client.
Why Use SSR?
-
SEO Benefits: Improves search engine optimisation by providing fully rendered HTML to search engines.
-
Eliminates Waterfalling: Avoids the sequential loading of resources, leading to faster page loads.
-
No Flash of Unstyled Content: Ensures content is visible immediately without a white flash.
Try It Out with Next.js:
-
Create a Next.js app:
```bash
npx create-next-app
```
-
Build the project:
```bash
npm run build
```
-
Start the server:
```bash
npm run start
```
-
Observe that the initial HTML page is already populated with content.
Downsides of SSR:
-
Costly: Each request requires server-side rendering, increasing server load.
-
Scalability Challenges: Difficult to scale as caching with CDNs is not possible.
Static Site Generation
Static Site Generation (SSG) is a powerful feature in Next.js that allows you to generate HTML pages at build time. This means that the HTML is pre-rendered and can be served quickly to users, often cached by a Content Delivery Network (CDN) for even faster delivery.
Why Use Static Site Generation?
Using SSG defers the expensive operation of rendering a page to the build time, so it only happens once. This approach is beneficial for pages that do not change often and can be served as static content. It improves performance and reduces server load, making it ideal for pages with content that is the same for all users.
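The "render once at build time" idea can be sketched as pre-computing pages into a cache, so that serving a request never re-renders. This is a toy model (Next.js's real pipeline also handles routing, revalidation, and CDN caching):

```typescript
let renderCount = 0;

// Stand-in for an expensive page render.
function renderPage(path: string): string {
  renderCount++;
  return `<html><body>Page: ${path}</body></html>`;
}

const cache = new Map<string, string>();

// "Build step": render each static page exactly once, ahead of any request.
function buildStatic(paths: string[]) {
  for (const p of paths) cache.set(p, renderPage(p));
}

// Serving a request is now just a cache lookup, no re-render per request.
function serve(path: string): string | undefined {
  return cache.get(path);
}
```

After `buildStatic(["/todos"])`, every call to `serve("/todos")` returns the same pre-rendered HTML while `renderCount` stays at 1 — the expensive work happened once, at build time.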
How to Implement Static Site Generation
Let's walk through an example of implementing SSG in a Next.js application using a global todos endpoint.
Step 1: Create a New Next.js Project
First, create a new Next.js project if you haven't already:
```bash
npx create-next-app@latest my-next-app
cd my-next-app
```
Step 2: Create a Static Page
Create a new file app/todos/page.tsx in your project:
```tsx
export default async function Todos() {
  const res = await fetch('https://sum-server.100xdevs.com/todos');
  const data = await res.json();
  const todos = data.todos;

  return (
    <div>
      {todos.map((todo: any) => (
        <div key={todo.id}>
          <h3>{todo.title}</h3>
          <p>{todo.description}</p>
        </div>
      ))}
    </div>
  );
}
```
Step 3: Update Fetch Requests for Revalidation
To ensure the data is fresh, you can set the cache to clear every 10 seconds:
```typescript
const res = await fetch('https://sum-server.100xdevs.com/todos', {
  next: { revalidate: 10 }
});
```
Step 4: Clear Cache with Next.js Actions
You can also clear the cache using Next.js actions:
Tag the fetch so its cache entry can be targeted:
```typescript
const res = await fetch('https://sum-server.100xdevs.com/todos', {
  next: { tags: ['todos'] }
});
```
Then clear that tag from a server action:
```typescript
'use server';
import { revalidateTag } from 'next/cache';

export default async function revalidate() {
  revalidateTag('todos');
}
```