
Day 53: Axios vs. React Query, Which One Should You Use in Your React App?

When building a React app, fetching data is a must. But how do you do it efficiently? Two popular options are Axios and React Query. While both can get the job done, they serve different purposes. Let’s break it down for beginners. :rocket:

Axios: The Classic Choice :satellite:
Axios is a promise-based HTTP client that simplifies making requests to APIs. It’s widely used for sending GET, POST, PUT, DELETE requests and handling responses.

:white_check_mark: Why use Axios?
Simple API for making HTTP requests (like axios.get(url))
Supports request/response interception
Allows setting global headers (like authentication tokens)
Works with Node.js as well

:x: What it lacks:
No built-in caching or data synchronization
You need to manually manage loading, error states, and retries
Doesn’t handle automatic background refetching
Example usage:
import axios from 'axios';

const fetchData = async () => {
  try {
    const response = await axios.get('url');
    console.log(response.data);
  } catch (error) {
    console.error(error);
  }
};

React Query: The Smart Choice? :robot:
React Query is a data-fetching and state management library that abstracts away the complexity of handling API requests. It makes working with server-side data more powerful and efficient.
:white_check_mark: Why use React Query?
Built-in caching :convenience_store: (reduces unnecessary API calls)
Auto-refetching when data becomes stale
Background updates (users always get fresh data)
Error handling & retries out of the box
Infinite scrolling & pagination support

:x: What it lacks:
Slightly steeper learning curve for beginners
Adds extra dependencies (though lightweight)
Might be overkill for simple projects
Example usage:
import { useQuery } from 'react-query';
import axios from 'axios';

const fetchData = async () => {
  const { data } = await axios.get('LinkedIn');
  return data;
};

const MyComponent = () => {
  const { data, isLoading, error } = useQuery('myData', fetchData);

  if (isLoading) return <p>Loading…</p>;
  if (error) return <p>Error: {error.message}</p>;

  return <pre>{JSON.stringify(data)}</pre>;
};

Which One Should You Use? :person_shrugging:
It depends on your needs! Here’s a quick comparison:
| Feature | Axios | React Query |
|---|---|---|
| Simple API Requests | :white_check_mark: | :white_check_mark: |
| Global Headers | :white_check_mark: | :x: (but can be set via Axios) |
| Caching | :x: | :white_check_mark: |
| Auto-Refetching | :x: | :white_check_mark: |
| Error Handling | Manual | :white_check_mark: Built-in |
| Background Sync | :x: | :white_check_mark: |
| Pagination Support | :x: | :white_check_mark: |

:rocket: Use Axios if:
You just need to fetch data without advanced state management.
Your app is small, and you don’t need caching or auto-refetching.
You want full control over API requests and responses.

:zap: Use React Query if:
You want automatic caching, retries, and background refetching.
Your app relies on real-time or frequently updated data.
You need built-in pagination or infinite scrolling.

And as always, happy coding!

100daysofcode lebanon-mug

Day 54 | Understanding CORS: Why Your API Might Be Rejecting Requests

Ever tried to fetch data from an API, only to see a frustrating error in the console—something about “CORS policy” blocking your request? If you’re developing web applications, you’ve likely encountered this issue. Let’s break it down.

What is CORS?
Cross-Origin Resource Sharing (CORS) is a security mechanism implemented by web browsers that controls which domains can access resources on a server. By default, browsers enforce the Same-Origin Policy (SOP), which blocks requests between different origins to prevent malicious attacks.

:small_blue_diamond: Origin? In the web context, an “origin” is defined by three components:
Protocol (HTTP or HTTPS)
Domain (e.g., example.com)
Port (e.g., :3000 for development)

If any of these differ between the frontend and backend, the request is considered cross-origin and may be blocked unless explicitly allowed.

How CORS Works
When a browser sends a cross-origin request, it checks whether the server allows it by inspecting the special CORS headers the server includes in its response.

:white_check_mark: If the response contains Access-Control-Allow-Origin: * (or the specific requesting domain), the browser permits the request.
:x: Otherwise, the request is blocked, leading to the infamous CORS error.

Types of CORS Requests
:one: Simple Requests – Directly sent without a preflight check (e.g., GET requests without custom headers).
:two: Preflight Requests – When using POST, PUT, DELETE, or custom headers, the browser first sends an OPTIONS request to check if the actual request is allowed.
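
To make this concrete, here is a minimal sketch (the endpoint URL and header names are placeholders): adding a custom header or a JSON content type to a request is enough to make the browser issue an OPTIONS preflight before the actual call.

javascript
// Hypothetical endpoint – the headers below are "non-simple", so the browser
// sends an OPTIONS preflight first and only proceeds if the server allows it.
fetch('https://api.example.com/data', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json', // non-simple content type → preflight
    'X-Custom-Header': 'demo',          // custom header → preflight
  },
  body: JSON.stringify({ hello: 'world' }),
})
  .then((res) => res.json())
  .then(console.log)
  .catch(console.error); // a blocked request surfaces here as a CORS/network error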

Fixing CORS Issues
If you’re dealing with CORS errors, here are some common solutions:
:small_blue_diamond: Server-Side Configuration: Modify the backend to allow cross-origin requests using appropriate headers, e.g.,
Access-Control-Allow-Origin: [your frontend url]
Access-Control-Allow-Methods: GET, POST, PUT
Access-Control-Allow-Headers: Content-Type

:small_blue_diamond: Use a Proxy: If modifying the backend isn’t an option, configure a proxy on your frontend (e.g., in webpack.config.js or API Gateway).
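
As a rough sketch (the backend address is an assumption), a webpack-dev-server proxy in webpack.config.js forwards /api requests server-to-server, so the browser never makes a cross-origin call during development:

javascript
// webpack.config.js (sketch)
module.exports = {
  // ...the rest of your config
  devServer: {
    proxy: {
      '/api': {
        target: 'http://localhost:5000', // assumed backend address
        changeOrigin: true,
      },
    },
  },
};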

:small_blue_diamond: CORS Middleware: If using Express.js, add:
javascript
const cors = require('cors');
app.use(cors({ origin: '[your frontend url]' }));

Final Thoughts
CORS isn’t a bug—it’s a security feature. Understanding how it works helps in designing secure, scalable web applications without unnecessary headaches.
Have you run into CORS issues before? Let’s discuss your workarounds in the comments! :rocket:

100daysofcode lebanon-mug

Day 55: RTK Query & Store.js: Simplifying API State Management in Redux Toolkit

Modern React applications need efficient data fetching and caching. Instead of manually managing API calls with useEffect and useState, RTK Query simplifies this by integrating API state directly into Redux Toolkit.

:small_blue_diamond: What is RTK Query?
RTK Query is a data-fetching and caching tool within Redux Toolkit. It automates caching, refetching, and state updates, making API interactions more efficient.

:small_blue_diamond: Configuring Redux Store (store.js)
To integrate RTK Query, the Redux store is configured with configureStore, where the API service is added to the reducers and middleware.
Example: The store includes authApi.reducerPath in the reducers and appends authApi.middleware to handle caching and async requests efficiently.
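
As a rough sketch (file paths are assumptions; the authApi service itself is covered in the next section), the store wiring might look like this:

javascript
// store.js (sketch)
import { configureStore } from '@reduxjs/toolkit';
import { authApi } from './services/authApi'; // assumed location of the API service

export const store = configureStore({
  reducer: {
    // RTK Query keeps its cache under the API's reducerPath
    [authApi.reducerPath]: authApi.reducer,
  },
  // appending the API middleware enables caching, invalidation, and polling
  middleware: (getDefaultMiddleware) =>
    getDefaultMiddleware().concat(authApi.middleware),
});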

:small_blue_diamond: Defining an API Service
A typical RTK Query API service uses createApi, specifying a base URL and defining API endpoints. These endpoints can be queries (for fetching data) or mutations (for sending data).
For example, an authentication API service might have:
A mutation for logging in (loginUser) that sends a POST request.
A query for fetching user details (getUserProfile).
RTK Query automatically generates React hooks for these, such as useLoginUserMutation and useGetUserProfileQuery, making it easy to interact with APIs in components.
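
A hedged sketch of such a service (the base URL and endpoint paths are placeholders) could look like this:

javascript
// services/authApi.js (sketch)
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';

export const authApi = createApi({
  reducerPath: 'authApi',
  baseQuery: fetchBaseQuery({ baseUrl: 'https://api.example.com' }), // placeholder URL
  endpoints: (builder) => ({
    // mutation: sends a POST request with the login credentials
    loginUser: builder.mutation({
      query: (credentials) => ({
        url: '/auth/login', // placeholder path
        method: 'POST',
        body: credentials,
      }),
    }),
    // query: fetches the current user's details
    getUserProfile: builder.query({
      query: () => '/auth/me', // placeholder path
    }),
  }),
});

// Hooks generated automatically from the endpoint names
export const { useLoginUserMutation, useGetUserProfileQuery } = authApi;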

:small_blue_diamond: Using RTK Query in Components
Once the API service is set up, these hooks can be used in React components. Calling useLoginUserMutation triggers a login request, while useGetUserProfileQuery automatically fetches user data and handles loading and error states.
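
In a component, that might look roughly like this (the component name and rendering are illustrative):

jsx
// Profile.jsx (sketch)
import { useGetUserProfileQuery } from './services/authApi';

const Profile = () => {
  const { data, isLoading, error } = useGetUserProfileQuery();

  if (isLoading) return <p>Loading…</p>;
  if (error) return <p>Something went wrong.</p>;

  return <pre>{JSON.stringify(data, null, 2)}</pre>;
};

export default Profile;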

:small_blue_diamond: Why Use RTK Query?
Reduces Boilerplate: No need for managing state, useEffect, or additional API handling.

Automatic Caching & Re-fetching: Avoids redundant API calls and keeps data fresh.

Built-in Error Handling: Simplifies network request management.
By structuring the Redux store with RTK Query, applications become more scalable, maintainable, and performant, ensuring a better developer and user experience. :rocket:

100daysofcode lebanon-mug

:rocket: Day 56 | Mastering React Router: Navigating the Web Like a Pro

Building a seamless single-page application (SPA)? Then React Router should be your best friend. It’s the go-to library for managing navigation in React apps, ensuring users move between views effortlessly without full-page reloads.

But here’s the catch—many developers underutilize or misconfigure React Router, leading to sluggish performance, broken navigation, or confusing user experiences. Let’s break down some key insights:
:small_blue_diamond: React Router: The Core Features You Need to Know
:white_check_mark: Dynamic Routing: Unlike traditional static routing, React Router uses component-based routing, meaning routes are determined at runtime, adapting dynamically.
:white_check_mark: Nested Routes: Components can have child routes, keeping your UI structured and your code organized. Example:

jsx
<Route path="/dashboard" element={<Dashboard />}>
  <Route path="settings" element={<Settings />} />
</Route>

:white_check_mark: Protected Routes: Need authentication before accessing certain pages? Wrap your routes in a higher-order component (HOC) to check for user permissions.
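
One minimal way to do that with React Router v6 (the isAuthenticated check is an assumption — swap in your real auth logic):

jsx
import { Navigate, Outlet } from 'react-router-dom';

// Assumed helper – replace with your own check (context, Redux, cookie, etc.)
const isAuthenticated = () => Boolean(localStorage.getItem('token'));

const ProtectedRoute = () =>
  isAuthenticated() ? <Outlet /> : <Navigate to="/login" replace />;

// Usage:
// <Route element={<ProtectedRoute />}>
//   <Route path="/dashboard" element={<Dashboard />} />
// </Route>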
:white_check_mark: URL Parameters & Query Strings: Fetch dynamic data based on URL parameters.
jsx
<Route path="/profile/:userId" element={<Profile />} />
Then access userId using useParams().
:white_check_mark: Lazy Loading with Suspense: Speed matters. Load components only when needed using React.lazy().

jsx
const Dashboard = React.lazy(() => import("./Dashboard"));
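
To actually render the lazy component, wrap it in Suspense with a fallback (sketch):

jsx
import { Suspense } from 'react';

const App = () => (
  <Suspense fallback={<p>Loading…</p>}>
    <Dashboard />
  </Suspense>
);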

:zap: Common Mistakes That Slow You Down
:x: Using Hash Routing When You Don’t Need It
:point_right: Unless you’re building for legacy browsers, stick with BrowserRouter for clean URLs.
:x: Still Using Switch Instead of Routes in v6+
:point_right: If you recently upgraded, remember Switch is deprecated—use Routes instead.
:x: Not Handling 404s Properly
:point_right: Always include a wildcard route for unmatched paths:

jsx
<Route path="*" element={<NotFound />} />

:pushpin: The Bottom Line
React Router is more than just “links and paths”—it’s a powerful tool that can make or break user experience. Optimize routing logic, leverage performance boosters, and avoid common pitfalls to build apps that feel fluid and intuitive.

What are your biggest challenges with React Router? Drop your thoughts below! :point_down:

100daysofcode lebanon-mug

Day 57 | The Problem with Vibe Coding: Why Students Should Still Learn to Code :rocket:

There’s a growing narrative that students shouldn’t bother learning to code because AI can do it for them. This idea, often disguised as “vibe coding” (where people rely on AI-generated code without understanding how it works), is dangerous. The reasoning goes: calculators can do math, so kids don’t need to learn arithmetic, right? Wrong.

Learning to Code is Not About Typing Code

People misunderstand the purpose of coding education. It’s not about memorizing syntax—it’s about problem-solving, logic, and breaking down complex tasks. Great developers aren’t just code generators; they are problem solvers who understand system design, efficiency, and optimization.

Just because an AI can generate code snippets doesn’t mean it can build maintainable, scalable, and secure software on its own. If students rely on AI without understanding the underlying logic, they become copy-paste engineers, not actual software engineers.

The Future of Software Engineering is Changing—But Not in the Way You Think

Yes, AI is evolving. Yes, AI-assisted coding (GitHub Copilot, ChatGPT, etc.) is making development faster. But rather than replacing programmers, AI is augmenting them.

AI is great at: :white_check_mark: Generating boilerplate code :white_check_mark: Suggesting fixes for common errors :white_check_mark: Speeding up development workflows

But it’s terrible at: :x: Understanding the business logic behind software :x: Debugging complex, system-wide issues :x: Writing code that is reliable and secure without human oversight

Andrew Ng, one of the most respected AI researchers and professors, shares this viewpoint: AI isn’t replacing programmers—it’s making them 10x more effective. But to take advantage of this, developers need strong fundamentals.

Telling Students “Don’t Learn to Code” is Bad Advice

Imagine telling an aspiring writer not to learn grammar because spellcheck exists. Or telling a surgeon they don’t need anatomy because robotic assistants exist. It’s the same logic when people argue students shouldn’t learn to code.

Yes, AI-generated code is impressive. But without foundational knowledge, how do you know if that code is efficient, secure, and actually works?

The best engineers of the future won’t just know how to code. They’ll know how to think in code—leveraging AI as a tool, not a crutch.

The bottom line? Students should absolutely keep learning to code. But they should also learn how to code with AI, not instead of AI.

:bulb: What’s your take on AI-assisted coding? Is it making people better engineers or just making them dependent on AI? Let’s discuss! :point_down:

100daysofcode lebanon-mug

Day 58: :rocket: Mastering Backend Testing with Postman

One of the most powerful tools in a developer’s arsenal is Postman. It helps track down the source of issues by simulating API requests and verifying responses before integrating with the frontend. Let’s break down how Postman makes backend testing efficient and how to seamlessly transition from backend to frontend testing.

:hammer_and_wrench: How Postman Simplifies Backend Testing
Postman is more than just an API testing tool—it’s a comprehensive platform that can be used to test endpoints, validate responses, and debug issues. Here’s how it helps identify the root cause of bugs:

  1. Isolate Backend Issues Early
    By directly hitting backend APIs, you can determine whether an issue is rooted in the backend or arises from frontend integration. For instance, if an API call fails in Postman but works on the frontend, the issue likely lies in how the frontend handles the response.

  2. Test with Precision and Efficiency
    Instead of blindly navigating the entire system, use Postman to make precise API calls with various inputs and observe how the backend responds. This saves time and gives clarity on where the problem originates.

  3. End-to-End API Testing
    Postman supports testing complete user flows by chaining multiple API requests. This is particularly useful when simulating multi-step processes such as user registration and login.
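
For example, here is a hedged sketch of a Tests script on a login request that stores the returned token for the next request in the chain (the token field name is an assumption):

javascript
// "Tests" tab of the login request (sketch)
pm.test("Login succeeded", function () {
    pm.response.to.have.status(200);
});

// Save the token so later requests in the collection can reference it as {{auth_token}}
const body = pm.response.json();
pm.environment.set("auth_token", body.token); // assumes the response body has a token field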

:books: Best Practices for Effective Testing
To make your testing organized and effective, here are some essential practices:

  1. Organize Endpoints with Collections
    Group your endpoints into collections based on functionality:
    User Management Collection: Test all user-related endpoints like registration, login, and profile updates.

Product Management Collection: Group product CRUD operations.
Order Processing Collection: Include endpoints related to placing and tracking orders.

By structuring your collections this way, you’ll maintain clarity and make it easier to test related endpoints in bulk.

  2. Follow the Happy Path First :green_circle:
    Start by testing the happy path—the optimal scenario where everything works as intended. Once validated, move on to edge cases and negative testing to see how the system handles unexpected inputs.

  3. Use Environment Variables
    Instead of hardcoding URLs or credentials, use environment variables. This makes it simple to switch between development, staging, and production environments without manually editing every request.
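
For example, a request URL can be written with placeholders such as {{base_url}}/api/users (the variable names here are illustrative); switching from development to staging then only means changing the value of base_url in the selected environment, not editing every request.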

  4. Automate with Test Scripts
    Postman’s scripting feature allows you to run assertions after every request. For instance:
    javascript
    pm.test("Status code is 200", function () {
        pm.response.to.have.status(200);
    });

This simple script checks that the response status is 200 OK and alerts you if it’s not, making error tracking more manageable.

  5. Document Your Tests
    Good documentation not only helps you but also your team. Add descriptions to your requests and collections to explain their purpose and how to use them.

100daysofcode lebanon-mug

:rocket: Day 59 | Understanding Different SWE Career Paths

Breaking into software engineering can be overwhelming, especially with roles like backend, frontend, full-stack, DevOps, mobile, and data engineering. Understanding the differences early on can set you up for success in interviews and unlock better opportunities.

:mag: Why Does It Matter?
Imagine preparing for a backend engineering role by mastering frontend frameworks—sounds counterproductive, right? Knowing the differences helps you focus your learning and prep strategically. Each role demands unique skills and coding practices, and understanding them can save you time and effort.

:brain: Backend Engineers: Leetcode Kings
Backend engineers handle logic and data management. They master algorithms and data structures (Leetcode is essential) and work with languages like Python, Java, or Go. System design and scalability are crucial skills. Focus on complex problem-solving and competitive programming.

:art: Frontend Engineers: UX Artists
Frontend engineers bring visual elements to life and ensure smooth user interaction. They excel in HTML, CSS, JavaScript frameworks (React, Angular), and UI/UX design. Interviews often test dynamic interfaces and DOM manipulation.

:link: Full-Stack Engineers: Versatile Builders
Full-stack engineers handle both frontend and backend tasks. They know stacks like MERN or LAMP, API design, and integration. Interviews cover building full applications and bridging client-server logic.

:gear: DevOps Engineers: Deployment Experts
DevOps engineers keep software deployment and CI/CD processes running smoothly. They master tools like Docker, Jenkins, and cloud platforms (AWS, GCP). Be ready to discuss automated testing and server reliability.

:iphone: Mobile Engineers: App Builders
Mobile engineers develop apps for iOS and Android using Swift, Kotlin, or cross-platform frameworks (Flutter, React Native). Key topics include performance optimization and hybrid vs. native approaches.

:dart: Final Thoughts
Understanding software engineering roles early on gives you a major edge. Target your learning, build relevant projects, and practice role-specific interview questions. Master the right skills to become the software engineer you aspire to be! :muscle:

100daysofcode lebanon-mug

:rocket: Day 60 | Boost Performance Without Breaking the Bank: Server Optimization Tips for SWE

When your application starts slowing down, the knee-jerk reaction is to pay for more server space. But why not first squeeze every bit of performance out of your existing setup? Here’s a technical deep dive into some crucial optimizations that can make a world of difference before you open your wallet.

  1. :mag_right: Indexing Matters
    Poor indexing is one of the primary culprits behind sluggish database queries. Before investing in bigger servers, take a close look at your database schema:
    Create Indexes on Frequently Queried Fields: Identify columns that are frequently involved in WHERE, JOIN, and ORDER BY clauses.
    Composite Indexes: Combine multiple columns when they are commonly used together.
    Clustered vs. Non-Clustered: Choose wisely based on your data retrieval patterns.
    Regularly Update Statistics: Keep the query optimizer informed about data distribution.

  2. :vertical_traffic_light: Optimize Your Queries
    Even well-indexed databases can struggle if your queries are inefficient.
    Use Query Profiling Tools: PostgreSQL’s EXPLAIN, MySQL’s EXPLAIN ANALYZE, or SQL Server’s Query Analyzer are invaluable.
    Minimize Select Statements: Instead of SELECT *, specify only the columns you need.
    Avoid Subqueries When Possible: Use joins or common table expressions (CTEs) instead.
    Batch Your Updates: Instead of executing multiple small updates, combine them into one.
    Prepared Statements: Leverage them to improve performance and security.

  3. :bar_chart: Caching Is Your Best Friend
    If your application constantly retrieves the same data, you’re wasting server resources. Implement caching at multiple levels:
    In-Memory Caching: Use Redis or Memcached for rapid data retrieval.
    HTTP Caching: Leverage response caching and cache headers for static resources.
    Application-Level Caching: Cache expensive computations and frequently accessed data.
    Database Query Caching: Store the results of common queries.
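
As a tiny illustration of application-level caching (an in-memory sketch — the fetchUserFromDb function and the TTL are assumptions), even a plain Map with a time-to-live can remove repeated lookups:

javascript
// Minimal in-memory cache with a TTL (sketch) – suitable for a single Node.js process
const cache = new Map();
const TTL_MS = 60_000; // assumed: entries live for one minute

async function getUser(id, fetchUserFromDb) {
  const hit = cache.get(id);
  if (hit && Date.now() - hit.storedAt < TTL_MS) {
    return hit.value; // served from cache – no database round trip
  }
  const value = await fetchUserFromDb(id); // assumed async DB lookup you provide
  cache.set(id, { value, storedAt: Date.now() });
  return value;
}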

  4. :gear: Efficient Data Storage
    Bloated databases can drastically slow performance. Keep your data lean and mean:
    Archive Old Data: Move less frequently accessed data to cheaper storage solutions.
    Partitioning: Split large tables to reduce the scanning effort.
    Data Compression: Compress old logs and rarely accessed datasets.
    Garbage Collection: Periodically clean up temporary tables and expired data.

  5. :computer: Load Balancing and Traffic Distribution
    If your application is highly trafficked, distributing the load can significantly improve performance:
    Reverse Proxying: Use NGINX or HAProxy to balance incoming requests.
    Horizontal Scaling: Add more instances rather than increasing individual server specs.
    Content Delivery Networks (CDN): Serve static assets from edge locations for quicker delivery.

Stay tuned for a part 2!

100daysofcode lebanon-mug

Day 61 | The Common Mistake of Newbies

When building web applications, many developers implement role-based access control (RBAC) to ensure that only authorized users can access certain pages or features. However, a common mistake among newbies is hiding protected routes only on the frontend without properly securing them on the backend. This creates a serious security vulnerability, allowing unauthorized users to bypass client-side restrictions and access sensitive data or functionality. Let’s explore why this happens, the risks involved, and how to fix it properly. :closed_lock_with_key:

The Mistake: Frontend-Only Protection :construction:
Many developers rely on frontend frameworks to handle routing and conditionally display content based on user roles. While this may seem like an effective way to restrict access, it only prevents unauthorized users from seeing certain UI elements. It does not actually prevent them from accessing restricted resources or performing actions directly on the backend.
The frontend is fully accessible to users, meaning they can inspect and modify it using developer tools. If the backend does not have proper role-based authorization, attackers can bypass frontend restrictions and make direct requests to restricted endpoints.

The Risks: Why This Is Dangerous :warning:
Direct API Access: Without backend authorization, an attacker can send requests to restricted endpoints using external tools, even if they cannot access them through the UI.

Client-Side Tampering: Since frontend logic runs in the browser, users can modify it to remove restrictions and gain unauthorized access.
Security Through Obscurity Is Not Security: Just because something is hidden on the frontend doesn’t mean it is protected. Sensitive operations remain exposed if the backend does not enforce security.
Unauthorized Data Exposure: If the backend does not validate user roles, attackers may access confidential data intended only for specific roles.

The Fix: Proper Backend Authorization :white_check_mark:
The correct way to secure protected routes is to enforce RBAC on the backend as well. This means ensuring that every request to sensitive resources is verified against user roles before granting access.

  1. Enforce Role-Based Authorization on Every Request :shield:
    The backend should check the user’s role before processing any request to protected resources. Simply preventing access through the UI is not enough.

  2. Use Centralized Role-Based Access Control :arrows_counterclockwise:
    Instead of implementing role validation separately for each action, a centralized mechanism should handle access control across the application. This reduces redundancy and ensures consistency in security policies.

  3. Secure API Calls with Authentication and Permissions :key:
    Authentication mechanisms such as token-based authentication should be used to verify user identity and role. This ensures that only authorized users can access certain functionalities, even if they attempt to bypass frontend restrictions.
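
Putting the three points together, here is a minimal Express-style sketch (middleware and handler names are assumptions; in practice req.user would be set by verifying a token):

javascript
// Centralized role check (sketch) – assumes an earlier auth middleware sets req.user
const requireRole = (...allowedRoles) => (req, res, next) => {
  if (!req.user) {
    return res.status(401).json({ message: 'Not authenticated' });
  }
  if (!allowedRoles.includes(req.user.role)) {
    return res.status(403).json({ message: 'Forbidden' });
  }
  next();
};

// Usage: the backend enforces the role on every request, regardless of what the UI shows
// app.get('/admin/users', authenticate, requireRole('admin'), listUsersHandler);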

100daysofcode lebanon-mug

Day 62 | :rocket: Kickstarting My SQL Series: The Foundation of Data Modeling with the ER Model!

Hey #LinkedInCommunity! :wave: Excited to launch my SQL series with a deep dive into one of the most fundamental concepts in database design—the Entity-Relationship (ER) Model! :bulb:

Where It All Began

The ER model was introduced in 1976 by the brilliant Professor Peter Chen (Chen Pin-Shan) :mortar_board:. His groundbreaking paper laid the foundation for how we visualize and structure data relationships today. Before ER, database design was more abstract—Chen gave us a clear, graphical way to map entities, attributes, and relationships.

Why Does the ER Model Matter? :thinking:

Visual Clarity :pencil2:: It turns complex data structures into easy-to-understand diagrams.

Blueprint for Databases :building_construction:: Serves as the first step before writing SQL schemas.

Improves Efficiency :zap:: Helps spot design flaws early, saving time and headaches later.

Key Components

:heavy_check_mark: Entities → Real-world objects (e.g., Customer, Product)

:heavy_check_mark: Attributes → Properties of entities (e.g., CustomerID, ProductName)

:heavy_check_mark: Relationships → How entities interact (e.g., Customer buys Product)

Stay tuned for Part 2, where we’ll translate ER diagrams into SQL tables! :card_index_dividers:

:speech_balloon: Discussion Time!

How has the ER model helped you in your projects?

Any database design challenges you’ve faced?

#sql 100daysofcode lebanon-mug

Day 63 | :mag: Mastering Weak Entity Sets, Cardinalities, Specialization & Generalization in Databases! :bulb:

Data modeling is the backbone of efficient databases, and understanding key Entity-Relationship (ER) model concepts is crucial. Let’s break down four essential concepts:

:small_blue_diamond: Weak Entity Sets
Not all entities are independent! Some lack a primary key and must rely on a strong entity through a relationship set. These are called weak entity sets and must have:
:white_check_mark: A discriminator (partial key) to differentiate among instances.
:white_check_mark: A total participation constraint, meaning they must be linked to a strong entity.
:white_check_mark: A supporting relationship, called an identifying relationship, that connects them to a strong entity.
:hammer_and_wrench: Example: Consider an Employee-Dependent relationship. A Dependent (child/spouse) doesn’t have a unique identifier but can be identified using Employee_ID + Dependent_Name. Without linking to an Employee, the Dependent entity has no existence in the database.

:small_blue_diamond: Cardinalities & Relationship Constraints
Cardinality defines how many instances of an entity can be related to another. Understanding this ensures data integrity and efficient queries.
:pushpin: Types of Cardinality:
:small_blue_diamond: One-to-One (1:1): Each entity in A maps to at most one entity in B.
:point_right: Example: A person has one passport, and each passport belongs to one person.
:small_blue_diamond: One-to-Many (1:M): An entity in A maps to multiple entities in B.
:point_right: Example: A manager supervises multiple employees, but each employee has only one manager.
:small_blue_diamond: Many-to-Many (M:N): Each entity in A relates to multiple entities in B, and vice versa.
:point_right: Example: Students enroll in multiple courses, and each course has multiple students.

:small_blue_diamond: Specialization vs. Generalization
As databases grow, organizing data becomes vital. This is where specialization and generalization help refine entity structures.
:small_blue_diamond: Specialization (Top-Down Approach):
We start with a generic entity and divide it into sub-entities based on distinguishing characteristics.
:point_right: Example: A Vehicle entity specializes into Car :red_car: and Bike :motorcycle:. Cars have a fuel type, while bikes may not.
:small_blue_diamond: Generalization (Bottom-Up Approach):
Here, we merge multiple similar entities into a higher-level super-entity to avoid redundancy.
:point_right: Example: A Doctor :man_health_worker: and a Nurse :woman_health_worker: share common attributes (e.g., Name, Salary), so they generalize into MedicalStaff.
:small_blue_diamond: Why Does This Matter?
:bulb: Weak entity sets prevent orphaned data in relationships.
:bulb: Cardinalities enforce integrity, ensuring logical data mapping.
:bulb: Specialization adds precision, while generalization reduces redundancy.

Mastering these concepts leads to better data normalization, query performance, and scalability! :rocket:

Which of these have you used in your projects? Drop your thoughts in the comments! :speech_balloon:

100daysofcode lebanon-mug

:rocket: Day 64: Understanding the Relational Model and Ensuring Consistency

Databases are everywhere, powering applications that range from social media platforms to financial systems. But how do we make sure that the data inside them is reliable, consistent, and useful? Today, we’re diving into the relational model, one of the most robust ways to manage structured data. We’ll also explore how to maintain database consistency—a critical aspect that ensures your data remains trustworthy.

:memo: Relational Model Basics

The relational model, introduced by E. F. Codd in 1970, revolutionized data storage by structuring data into tables (relations). It offers a logical way to represent data independently from the physical storage. Let’s break down the essential terms in both informal and mathematical contexts:

| Informal Term | Mathematical Term | Example |
|---|---|---|
| Table | Relation | Student information |
| Row (record) | Tuple | (123, 'John', 'CS') |
| Column | Attribute | Student ID, Name, Major |
| Data Type | Domain | Integer, String |

The relational model is fundamentally about relations (or tables), where each relation consists of tuples (or rows), each described by a set of attributes (or columns). The domain of an attribute defines its possible values, like numbers or strings.

:white_check_mark: Consistent vs. Inconsistent Databases

Let’s say you manage a database of student records. A consistent database might have a student with ID 123 listed as “John Smith” in both the Enrollment and Grades tables. An inconsistent database could show the same ID linked to two different names across these tables. Such inconsistencies are more than just nuisances—they can undermine trust and break system functionality.

:key: Maintaining Database Consistency

Maintaining consistency is crucial to ensure that data across tables and applications remains accurate and reliable. Here are some key techniques:

1. Normalization

Breaking data into smaller, non-redundant tables reduces the chances of inconsistency. Normalization ensures that one piece of information is stored only once.

2. Constraints

Apply integrity constraints like primary keys, foreign keys, and unique constraints to enforce data consistency. This means that records are uniquely identified and linked properly between tables.

3. ACID Properties

Database transactions should follow ACID (Atomicity, Consistency, Isolation, Durability) principles to ensure that operations either complete successfully or leave the database unchanged. This is especially important when performing multiple updates or batch operations.

4. Data Validation

Validate data both at the application level and database level. Use triggers or check constraints to automatically verify data before insertion or update.

5. Backup and Recovery

Maintaining consistency also involves periodic backups and implementing recovery mechanisms. This helps revert to a previous consistent state in case of system failures.

:construction: Common Pitfalls

  1. Circular References: Linking tables in a circular way can result in inconsistencies and difficulties in maintaining data integrity.
  2. Improper Use of NULL: Overuse or misuse of NULL values can lead to ambiguity in data interpretation.
  3. Schema Drift: Unplanned changes to the database schema can introduce inconsistencies.

:bulb: Key Takeaways

Maintaining a consistent database is not just about technical correctness but also about maintaining data integrity and trustworthiness. By understanding the relational model and implementing consistency practices, you can build reliable and scalable systems.

Got thoughts on relational models? Share your experiences or challenges in the comments below! :speech_balloon:

100daysofcode lebanon-mug

Day 65 | SQL Basics: Schema, Tables, Primary & Foreign Keys Explained! :bulb:

In the world of SQL, structuring your data efficiently is the key to building scalable and organized databases. Whether you’re just starting out or brushing up on your skills, understanding schemas, tables, primary keys, and foreign keys is crucial! Let’s dive into the essentials! :ocean:

:card_index_dividers: Schema

A schema is like a container for your database objects - think of it as an organizational unit! It holds tables, views, and other objects, allowing for better structure and access control.

Imagine you’re managing a company database. You might have a schema named CompanyDB to hold all related tables, such as employees, departments, and projects.

:key: Syntax:

sql

CREATE SCHEMA CompanyDB;

:bar_chart: Tables

Tables are where your data lives! They consist of rows and columns, each column having a specific data type (like INTEGER, VARCHAR, or DATE).

For example, in our company database, we might have:

Employee table for personal details

Dependent table for employee dependents

Department table for different departments

Project table for ongoing projects

WorksOn table to track which employee works on which project

:key: Syntax:

sql

CREATE TABLE Employee (
    emp_id INT PRIMARY KEY,
    first_name VARCHAR(50),
    last_name VARCHAR(50),
    dept_id INT
);

:key: Primary Key

A primary key uniquely identifies each row in a table. No two rows can have the same primary key, ensuring data integrity.

In the Employee table, the emp_id column is the primary key, making each employee uniquely identifiable.

:key: Syntax:

sql

PRIMARY KEY (emp_id)

:link: Foreign Key

A foreign key creates a link between two tables. It enforces relationships and maintains referential integrity between data. For instance, the WorksOn table needs to link employees to projects.

:key: Syntax:

sql

CREATE TABLE WorksOn (
    emp_id INT,
    proj_id INT,
    hours_worked DECIMAL(5,2),
    FOREIGN KEY (emp_id) REFERENCES Employee(emp_id),
    FOREIGN KEY (proj_id) REFERENCES Project(proj_id)
);

:memo: Example: Database Design

Here’s how the design fits together:

Employee table holds employee data.

Dependent table stores employee-dependent details.

Department table keeps department info.

DepLocation table tracks where departments are located.

Project table holds project details.

WorksOn table records which employees are assigned to which projects.

These tables are linked through primary and foreign keys, creating a robust relational database that minimizes redundancy! :link::sparkles:

:bulb: Why It Matters

Efficient database design helps maintain data consistency and integrity. By leveraging schemas and relational keys, you’re building a scalable structure that can grow with your data needs. :rocket:

Whether you’re a beginner or a pro, mastering these SQL fundamentals will set you up for success. Happy coding! :sunglasses:

100daysofcode lebanon-mug

Day 66 | :rocket: Overcoming Imposter Syndrome as a Junior Software Engineer :woman_technologist::man_technologist:

Let’s talk about something many of us in tech rarely say out loud: Imposter Syndrome. That nagging feeling that you’re not good enough, that you don’t belong, or worse — that someone’s going to “find out” you’re faking it. :performing_arts:

As a junior software engineer, I’ve felt it. You’ve probably felt it.

:mag: Why It’s So Common in Tech

Software engineering moves fast. Every day there’s a new framework, a new best practice, or a GitHub repo that makes you question your existence. :sweat_smile:

We work in environments that often reward output over learning, and it’s easy to feel like you’re falling behind — especially when:

:computer: You’re struggling to understand legacy code

:gear: You can’t fix a bug after 3 hours of trying

:brain: Everyone seems to “get it” except you

Spoiler: They don’t. They’re just googling faster.

:jigsaw: Imposter Syndrome ≠ Incompetence

In fact, feeling like an imposter is often a sign of growth. You’re challenging yourself. You’re out of your comfort zone — and that’s where learning happens.

But it can be paralyzing if left unchecked.

:hammer_and_wrench: What Actually Helps

Talk About It

Open up to teammates, mentors, or LinkedIn folks. You’ll be surprised how many people feel the same.

Join a CS Club or Tech Community :handshake:

Seriously, don’t go through this journey alone. Whether it’s a university CS club, a local dev group, or an online community — surrounding yourself with others who are learning, building, and sharing is game-changing. You’ll learn faster, feel less isolated, and stay inspired.

Track Your Progress :chart_with_upwards_trend:

Keep a “brag doc” — a list of bugs you fixed, features you shipped, or even concepts you understood better. Watch how far you’ve come!

Ask Questions — Fearlessly :question:

Asking doesn’t make you look dumb. It shows you’re engaged and willing to grow.

Stay Curious & Excited :star2:

Let your passion lead you. Tinker with side projects, explore new tech, attend hackathons — curiosity is your superpower. The more excited you stay, the faster the fear fades.

Don’t Compare Your Chapter 1 to Someone Else’s Chapter 20 :books:

Your senior might seem like a wizard now, but they were once stuck on their first null pointer exception too.

Practice Self-Compassion :person_in_lotus_position:

Tech is hard. You’re not supposed to know everything. You’re supposed to learn.

:brain: Final Thought

If you’re a junior SWE battling self-doubt, just remember:

You’re not an imposter. You’re a beginner. And beginners grow.

Stay curious. Stay excited. Keep learning. You’ve got this. :rocket::heart_on_fire:

100daysofcode lebanon-mug

:brain: Day 67 | Why Mastering DSA Still Matters in 2025

Whether you’re building scalable systems, optimizing a product, or cracking the next coding interview — Data Structures & Algorithms (DSA) remain foundational to success in tech. Yet many overlook it, dismissing it as “just interview prep” or “academic theory.” That mindset is costing developers long-term growth.
Here’s why DSA still matters — and why I’m starting this new series:
:small_blue_diamond: Problem Solving Muscle
At its core, DSA trains your brain to approach challenges methodically. It’s like going to the gym for your logic — improving how you think, not just what you code.
:small_blue_diamond: Real-World Optimization
Understanding time and space complexity isn’t just for whiteboard interviews. It translates directly to efficient code in real-life applications:
• A faster search algorithm in an e-commerce backend
• Memory-optimized data handling for edge devices
• Reduced latency in real-time systems
:small_blue_diamond: Systems Design Foundation
Before you scale, you simplify. Before you scale, you structure. Knowing when to use a hash map over a tree or why a heap can outperform a queue — that’s DSA in action. It gives you the building blocks that enable high-level design.
:small_blue_diamond: Interview-Ready, Yes — But Career-Ready Too
Mastering DSA isn’t just about passing tech interviews. It’s about having confidence when faced with ambiguous problems and being able to engineer clean, efficient, maintainable solutions.

:rocket: What’s Next?
In the coming weeks, I’ll be diving into practical DSA topics:
:white_check_mark: Hidden performance traps in common algorithms
:white_check_mark: Real-world DSA use cases from backend, frontend, and data
:white_check_mark: Simplifying complex topics like recursion, graphs, and dynamic programming

100daysofcode lebanon-mug

Day 68 | :mag: Real-World Use Cases of DSA You’re Already Relying On

If you’ve ever asked yourself “When am I really going to use Data Structures and Algorithms?” — you’re not alone. For many professionals outside of competitive programming or academia, DSA can seem abstract. But the truth is, behind nearly every system we interact with, there’s a carefully crafted blend of DSA at work.

Here are 5 practical, real-world cases where DSA isn’t just useful — it’s essential:

  1. Navigation Apps: Graphs in Action

Whether it’s Google Maps calculating the fastest route :red_car: or Uber matching you with the nearest driver, graphs and shortest path algorithms (like Dijkstra’s) are the stars here. Roads become nodes and distances become weights. Efficient route planning = happy users.

  2. Autocorrect & Search Suggestions: Tries & Heuristics

Typing “algor…” and getting “algorithm” as a suggestion? That’s not magic — it’s a Trie (prefix tree) behind the scenes. Combined with frequency data and edit distance algorithms (like Levenshtein), it helps your phone guess what you meant — even with typos.

  3. Databases & Indexing: Trees and Hashing

Ever wondered how SQL queries return results in milliseconds? Thank B-trees and hash maps. Indexing structures speed up search and retrieval, especially in large datasets. Without them, your queries would take ages.

  4. Social Media Feeds: Heaps, Queues & Graphs

Your Instagram or LinkedIn feed? Not random. It’s sorted based on relevance, engagement, or recency using priority queues, heaps, and sometimes even graph-based algorithms to suggest content from your extended network.

  5. Cybersecurity & Network Routing: Queues, Trees, Graphs

Firewalls and routing protocols often rely on tries and prefix trees for IP matching. Queues and graphs are essential in packet routing and network traffic optimization, ensuring data gets from point A to point B without chaos.

:bulb: TL;DR: DSA isn’t just academic — it’s operational. From your morning scroll to your late-night food delivery, algorithms are quietly at work. Mastering them isn’t about passing a coding test — it’s about understanding the systems that run the digital world.

100daysofcode lebanon-mug

:repeat: Day 69 | Understanding Recursion in DSA (Beginner Friendly)

When you’re just starting your journey into Data Structures and Algorithms (DSA), one of the first powerful tools you’ll encounter is recursion. At first, it might seem confusing — functions calling themselves? But once you understand it, recursion becomes an elegant way to solve many complex problems with concise code.

:seedling: What is Recursion?

Recursion is a method where the solution to a problem depends on solving smaller instances of the same problem.

In simple terms, a recursive function is one that calls itself.

To prevent infinite loops, every recursive function must have:

Base Case: The condition that stops the recursion.

Recursive Case: The part where the function calls itself with a smaller input.

:brain: Example: Factorial

python

def factorial(n):
    if n == 0:          # Base case
        return 1
    return n * factorial(n - 1)  # Recursive call

Calling factorial(5) will result in:

5 * 4 * 3 * 2 * 1 = 120

:gear: How Recursion Works (Call Stack)

Each recursive call is pushed onto the call stack, and the function resumes after each inner call returns. This is why understanding stack memory is important — too many calls can lead to a stack overflow.

:cyclone: Visualization

factorial(3)
=> 3 * factorial(2)
=> 2 * factorial(1)
=> 1 * factorial(0)
=> 1

As the calls resolve, they “unwind” from the stack.

:jigsaw: When to Use Recursion

Recursion is ideal when:

A problem can be broken down into similar subproblems.

The problem naturally fits a divide-and-conquer approach.

The problem involves tree/graph traversal, permutations, or backtracking.

But keep in mind: recursion isn’t always efficient. Sometimes an iterative solution is better due to space concerns.

:bulb: Tips to Master Recursion

Write down the base case before the recursive logic.

Use print statements to trace recursive calls.

Practice dry-running your function on paper.

Learn to convert recursive logic to iterative, and vice versa.

Final Thoughts

Recursion is a pillar of problem-solving in computer science. It helps you think recursively, which is essential for topics like divide and conquer, trees, and backtracking. Start with easy problems and gradually build up to more complex challenges. And most importantly, don’t be afraid of recursion — embrace it!

100daysofcode lebanon-mug

Day 70: Mastering Recursion – From Basics to Brilliance :rocket:

Recursion can be intimidating at first, but once it clicks, it becomes one of the most elegant and powerful tools in your coding arsenal. Whether you’re prepping for FAANG interviews or sharpening your CS fundamentals, mastering recursion is a must.

I’m sharing a curated list of Top Recursion Problems on LeetCode – sorted from beginner to advanced – that helped me build confidence and intuition. Let’s dive in! :brain:

:beginner: Beginner Level – Build the Foundation

Start here to grasp the core concept: a function calling itself with a smaller input, and a base case to stop recursion.

Factorial of N – Implement the classic definition

:jigsaw: LeetCode Recursion Basics

(Not a specific problem, but use this page to practice basics like factorial, power, and sum of array recursively)

Reverse a String

:jigsaw: 344. Reverse String

Fibonacci Number (Top-down vs. Bottom-up)

:jigsaw: 509. Fibonacci Number

:gear: Intermediate Level – Brute Force to Backtracking

Here’s where recursion gets interesting. You’ll encounter decision trees and multiple recursive calls.

Permutations

:jigsaw: 46. Permutations

Subsets (Power Set)

:jigsaw: 78. Subsets

Generate Parentheses – Backtracking meets recursion

:jigsaw: 22. Generate Parentheses

Letter Combinations of a Phone Number

:jigsaw: 17. Letter Combinations of a Phone Number

:fire: Advanced Level – Recursion with Pruning & State Tracking

Here we combine recursion with optimization tricks (memoization, state flags, etc.)

Word Search

:jigsaw: 79. Word Search

N-Queens – Classic CS problem

:jigsaw: 51. N-Queens

Palindrome Partitioning

:jigsaw: 131. Palindrome Partitioning

Restore IP Addresses

:jigsaw: 93. Restore IP Addresses

Expression Add Operators – Deep backtracking with arithmetic

:jigsaw: 282. Expression Add Operators

:brain: Bonus: Visual Tools to Learn Recursion

Use a debugger or draw recursion trees on paper.

Try Python Tutor: http://pythontutor.com/

Memoize and compare brute force vs. optimized versions.

:speech_balloon: Final Thoughts

Recursion isn’t just a topic — it’s a mindset. Tackling it daily rewires your brain to think declaratively. And as always, happy coding!

100daysofcode lebanon-mug

:computer: Day 71 of #100DaysOfCode – Arrays in Java with a Real-World Twist :rocket:

Today, I will discuss the classic yet powerful data structure: arrays in Java. But instead of just printing numbers, let’s do something a bit more interesting.

:brain: Quick Recap: What’s an Array?
An array is a fixed-size, indexed data structure that holds elements of the same type. It’s fast, predictable, and memory-efficient — making it a go-to tool in low-level operations and algorithm-heavy scenarios.

:sparkles: Real-World Use Case: Detecting Duplicate Votes in a Poll
Say we’re building a small voting system for a student council election. Each student can vote once, but what if someone tries to vote twice?
We’ll simulate a system that flags duplicate voter IDs using a simple array approach.

:pushpin: Step 1: Simulate Voter IDs
java
int[] voterIds = {102, 205, 102, 310, 412, 205}; // Duplicate IDs: 102 and 205

:mag: Step 2: Detect Duplicates Using Nested Loops
java
for (int i = 0; i < voterIds.length; i++) {
    for (int j = i + 1; j < voterIds.length; j++) {
        if (voterIds[i] == voterIds[j]) {
            System.out.println("Duplicate vote detected from ID: " + voterIds[i]);
        }
    }
}
:jigsaw: Output:
Duplicate vote detected from ID: 102
Duplicate vote detected from ID: 205
This is a brute-force approach (O(n²) time), but it’s a great demonstration of how arrays can power logic in real-world-like systems — even before you move into fancy data structures.

:bulb: Why Arrays Still Matter
Despite being “basic,” arrays are the foundation for:
Efficient memory handling
Fixed-size buffers
Algorithm challenges (sorting, searching, etc.)
Under-the-hood operations in Java collections like ArrayList

Day 72: Mastering the Two Pointer Technique in Arrays: A Must-Know DSA Pattern

If you’ve just moved past recursion in your DSA journey, it’s time to unlock one of the most elegant and efficient techniques used in array and string problems: the Two Pointer Technique.

It’s simple in concept, yet incredibly powerful when applied right. Let’s break it down with a classic problem that’s often asked in interviews.

:mag: The Problem: Two Sum in a Sorted Array

Given a sorted array of integers nums and a target integer target, return the 1-based indices of the two numbers that add up to the target.

You can assume exactly one solution exists.

:x: Brute Force (O(n²))

A straightforward approach is to use a nested loop and check every pair. It works, but it’s inefficient—especially when input size grows. Interviewers usually look for more optimized thinking.

:white_check_mark: Optimized with Two Pointers (O(n))

Here’s where the two pointer technique shines. Since the array is sorted, we use two indices:

left starting at the beginning

right starting at the end

At each step:

If nums[left] + nums[right] == target → return the pair

If the sum is too small → move left rightward

If the sum is too big → move right leftward

python

def twoSum(nums, target):
    left, right = 0, len(nums) - 1
    while left < right:
        curr_sum = nums[left] + nums[right]
        if curr_sum == target:
            return [left + 1, right + 1]
        elif curr_sum < target:
            left += 1
        else:
            right -= 1

:bulb: Why this works: We’re leveraging the sorted nature of the array to discard irrelevant combinations in constant time. No extra space needed. Clean and efficient.

:repeat: Where Else Can You Use Two Pointers?

Reversing an array in place

Checking for palindromes

Removing duplicates from a sorted array

Merging two sorted arrays

Trapping rain water (advanced)

:brain: When to Reach for This Technique

The data is sorted

You need to find pairs, windows, or symmetrical properties

You want to optimize space and time complexity

Final Thoughts

Two pointers teach you to reason from both ends—a skill that’s algorithmic and deeply strategic. It’s a go-to tool for coding interviews and competitive programming alike.

If you’ve already mastered recursion, two pointers is your next must-conquer concept. It’s intuitive, versatile, and essential in real-world code.

:pushpin: Next step? Try applying this to problems like “Container With Most Water” or “3Sum”. You’ll quickly see just how far this pattern can take you.

Let me know what your favorite two-pointer problem is—or if you’ve seen an interview question where this approach saved the day!

#DSA #CodingInterviews #Arrays #TwoPointers #TechBlog #Python #ProgrammingTips
