OSI Model Capsulized
The Open Systems Interconnection (OSI) model is regarded as essential knowledge for anyone diving into cloud systems, network troubleshooting, or cybersecurity. Let's dive in!
First developed in 1978 by French software engineer and pioneer Hubert Zimmermann, the OSI model has been widely adopted by major computer and telecommunication companies since it was published as a standard in 1984. It is maintained by the International Organization for Standardization (ISO) and is identified as ISO/IEC 7498-1.
Backbone of Wired Ethernet Networking
RJ45, NIC, and LAN Cable are all components commonly used in computer networking:
RJ45: RJ45 stands for Registered Jack 45. It is a type of connector commonly used for Ethernet networking. RJ45 connectors resemble telephone connectors (RJ11), but they are larger and have eight pins. These connectors are used to connect network cables to network devices such as computers, routers, switches, and Ethernet ports on walls. RJ45 connectors are standard in Ethernet networking and are used with twisted pair cables.
NIC: NIC stands for Network Interface Card. It's also commonly referred to as a network adapter or Ethernet adapter. A NIC is a hardware component that allows a computer to connect to a network. It is usually installed internally in a computer and provides the necessary interface between the computer and the network cable. NICs come in various forms, including PCI cards for desktop computers and integrated NICs built into laptops and motherboards. They typically have an RJ45 port for connecting to an Ethernet network.
LAN Cable: LAN stands for Local Area Network. A LAN cable, also known as an Ethernet cable or network cable, is the physical cable used to connect devices within a local area network. LAN cables typically use twisted pair copper wires to transmit data and come in various categories, such as Cat5e, Cat6, and Cat6a, each with different specifications for data transmission speed and quality. These cables are terminated with RJ45 connectors at each end to connect devices to a network.
Why Was the OSI Model Introduced?
The OSI (Open Systems Interconnection) model was introduced to standardize and conceptualize the various functions involved in computer networking. Here are several reasons why the OSI model was introduced:
Standardization: Prior to the OSI model, there was no universally accepted framework for understanding and discussing networking concepts. The OSI model provided a standardized way to conceptualize network architecture, making it easier for different vendors and organizations to communicate and collaborate on networking protocols and technologies.
Interoperability: With the increasing complexity of networking technologies and the proliferation of different networking protocols, there was a growing need for interoperability between different systems and devices from various vendors. The OSI model provided a common reference framework that allowed vendors to develop compatible networking equipment and protocols.
Education and Training: The OSI model serves as a valuable educational tool for teaching networking concepts. By dividing network functionality into seven distinct layers, each with its own set of responsibilities, the OSI model simplifies the understanding of complex networking principles and protocols.
Troubleshooting and Debugging: The OSI model provides a structured approach to troubleshooting and debugging network issues. By breaking down network communication into discrete layers, network administrators can isolate problems more effectively and determine which layer is experiencing issues.
Protocol Development and Evolution: The OSI model has facilitated the development and evolution of networking protocols by providing a framework for organizing and categorizing them. This has helped guide the development of new protocols and the refinement of existing ones to better meet the needs of evolving networking environments.
The OSI model was introduced to bring order and standardization to the field of computer networking, providing a common language and framework for discussing, designing, implementing, and troubleshooting network architectures and protocols.
Layers of the OSI Model
The OSI (Open Systems Interconnection) model is a conceptual framework that divides network communication into seven distinct layers. Each layer has specific functions and responsibilities, and they work together to facilitate communication between devices on a network. Here's a detailed explanation of each layer:
Physical Layer (Layer 1):
The Physical Layer deals with the physical aspects of transmitting data over a network medium. It defines the electrical, mechanical, and procedural specifications for transmitting raw data signals between devices.
Functions include transmitting and receiving raw bit streams over a physical medium, such as copper wires, fiber optic cables, or wireless transmissions.
It establishes the physical connection between devices and manages characteristics such as voltage levels, cable types, connectors, and signaling.
Data Link Layer (Layer 2):
The Data Link Layer is responsible for node-to-node communication and provides the means to detect (and in some protocols correct) errors that occur in the Physical Layer.
It organizes data into frames and adds necessary addressing and error-checking information.
Functions include framing, addressing, error detection, flow control, and access control (e.g., Ethernet, Wi-Fi MAC addressing).
Bridges and switches operate at this layer, forwarding data based on MAC addresses.
Network Layer (Layer 3):
The Network Layer is responsible for routing packets across multiple networks and providing logical addressing.
It determines the best path for data to travel from the source to the destination based on network conditions, addressing, and routing protocols.
Functions include logical addressing (e.g., IP addresses), routing, traffic management, and packet forwarding.
Routers operate at this layer, making forwarding decisions based on IP addresses.
Transport Layer (Layer 4):
The Transport Layer ensures reliable end-to-end communication between hosts and provides error recovery and flow control mechanisms.
It segments, reassembles, and manages data flows between source and destination hosts.
Functions include segmentation, reassembly, connection establishment, flow control, error recovery (e.g., TCP), and multiplexing (identifying different application data streams).
TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) operate at this layer.
Session Layer (Layer 5):
The Session Layer establishes, manages, and terminates communication sessions between applications running on different devices.
It provides synchronization, checkpointing, and session recovery mechanisms.
Functions include session establishment, maintenance, and termination, as well as dialog control and synchronization.
Examples include API (Application Programming Interface) functions for establishing and managing sessions.
Presentation Layer (Layer 6):
The Presentation Layer ensures that data exchanged between applications is formatted, presented, and interpreted correctly.
It handles data encryption, compression, and conversion between different data formats.
Functions include data encryption, data compression, data conversion (e.g., ASCII to EBCDIC), and syntax translation.
Examples include encryption algorithms, image file formats, and character encoding standards.
Application Layer (Layer 7):
The Application Layer provides network services directly to end-users and applications.
It enables communication between different applications and supports various network protocols.
Functions include providing network services to end-users, supporting application protocols (e.g., HTTP, SMTP, FTP), and handling user authentication and access control.
Examples include web browsers, email clients, file transfer programs, and remote desktop applications.
Each layer of the OSI model performs specific functions, and together they form a hierarchical framework for understanding and designing network architectures and protocols.
FTP, HTTP/S, SMTP & Telnet’s place in the Application Layer
Each of these protocols serves specific purposes in computer networking:
FTP (File Transfer Protocol):
Application: FTP is primarily used for transferring files between a client and a server over a TCP/IP network.
Application Scenarios:
Uploading and downloading files to and from a remote server, such as a web server.
Sharing files between computers on a local network or over the internet.
Managing files on a remote server, such as updating website content or transferring large datasets.
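As a quick illustration of these scenarios, here is a minimal sketch using Python's standard ftplib module; the host name, credentials, directory, and file name are hypothetical placeholders, not real endpoints:

```python
from ftplib import FTP

# Hypothetical server and credentials -- substitute real values.
HOST = "ftp.example.com"

with FTP(HOST) as ftp:                    # control connection over TCP port 21
    ftp.login("user", "password")         # anonymous login is also common
    ftp.cwd("/public")                    # change the remote working directory
    print(ftp.nlst())                     # list the files on the server

    # Download a remote file to the local disk.
    with open("report.csv", "wb") as fh:
        ftp.retrbinary("RETR report.csv", fh.write)
```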
HTTP/S (Hypertext Transfer Protocol/Secure):
Application: HTTP and its secure counterpart HTTPS are protocols used for transmitting hypertext documents, such as web pages, over the internet.
Application Scenarios:
Accessing websites: HTTP/S is used by web browsers to request and retrieve web pages, images, videos, and other resources from web servers.
Web-based applications: Many online services and applications, such as email clients, social media platforms, and online shopping sites, use HTTP/S for communication between clients and servers.
Secure transactions: HTTPS encrypts data transmission, providing security for sensitive information such as login credentials, personal data, and financial transactions.
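To see HTTP/S from a client's point of view, here is a minimal sketch using Python's standard urllib.request module; the URL is a placeholder, and any reachable HTTPS endpoint would behave similarly:

```python
import urllib.request

# Hypothetical URL -- any HTTPS endpoint would do.
url = "https://www.example.com/"

with urllib.request.urlopen(url, timeout=10) as response:
    print(response.status)                      # e.g., 200
    print(response.headers["Content-Type"])     # e.g., text/html; charset=UTF-8
    body = response.read().decode("utf-8", errors="replace")
    print(body[:200])                           # first 200 characters of the page
```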
SMTP (Simple Mail Transfer Protocol):
Application: SMTP is a protocol used for sending email messages between email servers.
Application Scenarios:
Sending emails: SMTP is used by email clients to send messages to an email server for delivery to the recipient's mailbox.
Mail routing: SMTP servers use SMTP to exchange email messages with other servers, forwarding messages to their destination based on recipient addresses.
Email administration: SMTP is used for managing email accounts, configuring email servers, and diagnosing email delivery issues.
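The sending scenario looks roughly like this with Python's standard smtplib and email modules; the mail server, port, addresses, and password are hypothetical placeholders:

```python
import smtplib
from email.message import EmailMessage

# Hypothetical sender, recipient, and mail server -- substitute your own.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello over SMTP"
msg.set_content("This message is handed to an SMTP server for delivery.")

with smtplib.SMTP("mail.example.com", 587) as server:
    server.starttls()                                 # upgrade to an encrypted channel
    server.login("alice@example.com", "app-password")
    server.send_message(msg)                          # MAIL FROM / RCPT TO / DATA
```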
Telnet:
Application: Telnet is a protocol used for remote terminal access and management of network devices.
Application Scenarios:
Remote administration: Telnet allows users to remotely access and manage network devices, such as routers, switches, and servers, using a command-line interface.
Troubleshooting: Network administrators use Telnet to diagnose and troubleshoot connectivity issues, configure device settings, and perform maintenance tasks.
Legacy applications: Telnet is used to access legacy systems and equipment that lack modern management interfaces, providing a way to interact with them remotely.
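Because Telnet is essentially a plaintext conversation over TCP port 23, a bare-socket sketch conveys the idea; the device address and username below are hypothetical, and in practice SSH is preferred because Telnet transmits everything, including passwords, unencrypted:

```python
import socket

# Hypothetical network device -- Telnet listens on TCP port 23 by default.
HOST, PORT = "192.0.2.10", 23

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    banner = sock.recv(1024)                   # read the login banner / prompt
    print(banner.decode("ascii", errors="replace"))
    sock.sendall(b"admin\r\n")                 # send a username, one line at a time
```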
Translation, Data Compression & Encryption Methodologies of the Presentation Layer
Here's an overview of the methodologies employed in the Presentation Layer:
Translation:
Data translation involves converting data from one format to another to ensure compatibility between different systems or applications. This can include:
Character encoding conversion: Converting text data from one character encoding scheme to another (e.g., ASCII to Unicode).
Data format conversion: Transforming data from one format to another (e.g., converting image files between JPEG and PNG formats).
Protocol conversion: Adapting data to be compatible with different communication protocols (e.g., converting between HTTP and FTP protocols).
Translation ensures that data exchanged between applications can be properly understood and processed, regardless of the systems or formats involved.
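A small Python sketch of character-encoding translation makes the idea concrete (the sample text is arbitrary):

```python
# Character-encoding translation -- a typical Presentation Layer concern.
text = "Café résumé"

utf8_bytes = text.encode("utf-8")        # Unicode text -> UTF-8 bytes
latin1_bytes = text.encode("latin-1")    # the same text under a different encoding

print(utf8_bytes)      # b'Caf\xc3\xa9 r\xc3\xa9sum\xc3\xa9'
print(latin1_bytes)    # b'Caf\xe9 r\xe9sum\xe9'

# Decoding with the wrong table garbles the text -- exactly the mismatch
# that translation is meant to prevent.
print(utf8_bytes.decode("latin-1"))      # 'CafÃ© rÃ©sumÃ©'
```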
Data Compression:
Data compression reduces the size of data for efficient storage and transmission over a network. It is particularly useful for optimizing bandwidth usage and reducing storage requirements. Compression techniques include:
Lossless compression: Algorithms that reduce data size without losing any information. Examples include Huffman coding, Lempel-Ziv-Welch (LZW) compression, and Deflate compression (used in ZIP files).
Lossy compression: Techniques that sacrifice some data quality to achieve higher compression ratios. Commonly used in multimedia compression, such as JPEG for images and MP3 for audio.
Data compression helps minimize network congestion, decrease transmission times, and conserve storage space.
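Here is a minimal lossless-compression sketch using Python's standard zlib module (which implements DEFLATE, the algorithm behind ZIP and gzip); the sample data is deliberately repetitive so it compresses well:

```python
import zlib

# Repetitive data compresses well with a lossless algorithm such as DEFLATE.
original = b"the quick brown fox jumps over the lazy dog " * 100

compressed = zlib.compress(original, level=9)
restored = zlib.decompress(compressed)

print(len(original), "->", len(compressed), "bytes")   # a large reduction
assert restored == original                            # lossless: nothing is lost
```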
Encryption:
Encryption involves encoding data to prevent unauthorized access or interception. It ensures data confidentiality and privacy by scrambling plaintext into ciphertext using cryptographic algorithms and keys. Encryption methodologies include:
Symmetric encryption: Uses a single shared key for both encryption and decryption. Examples include AES (Advanced Encryption Standard) and DES (Data Encryption Standard).
Asymmetric encryption: Utilizes a pair of keys, a public key for encryption and a private key for decryption. RSA (Rivest-Shamir-Adleman) and Elliptic Curve Cryptography (ECC) are common asymmetric encryption algorithms.
Hashing: Converts data into a fixed-size hash value, often used for data integrity verification. Algorithms like SHA-256 generate practically unique hash values for input data. (Strictly speaking, hashing is one-way and is not encryption, and older algorithms such as MD5 are no longer considered secure against deliberate tampering.)
Encryption safeguards sensitive information during transmission and storage, ensuring that only authorized parties can access and decipher the data.
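The hashing part is easy to demonstrate with Python's standard hashlib module (symmetric and asymmetric encryption would normally rely on a dedicated third-party library, so only integrity checking is sketched here; the messages are arbitrary examples):

```python
import hashlib

message = b"transfer $100 to account 42"

# A SHA-256 digest acts as a fingerprint for integrity checking.
digest = hashlib.sha256(message).hexdigest()
print(digest)

# Any change to the data produces a completely different digest.
tampered = hashlib.sha256(b"transfer $900 to account 42").hexdigest()
print(digest == tampered)    # False -- the data was altered
```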
These methodologies employed in the Presentation Layer play a crucial role in ensuring the integrity, security, and interoperability of data exchanged between applications in a networked environment.
APIs' Role in the Session Layer
APIs (Application Programming Interfaces) play a crucial role in facilitating communication at the Session Layer level, although they are not strictly confined to this layer. Here's how APIs contribute to the Session Layer:
Session Establishment:
APIs provide a set of functions and protocols that allow applications to establish communication sessions with other applications or services. These APIs typically handle tasks such as initiating connections, negotiating session parameters, and establishing session identifiers.
For example, in a client-server architecture, an application may use APIs provided by the operating system or network libraries to establish a connection with a remote server and initiate a session for data exchange.
Session Management:
APIs assist in managing communication sessions by providing functions for session maintenance, synchronization, and control. These APIs enable applications to monitor session status, handle timeouts and interruptions, and synchronize data exchange between communicating entities.
For instance, APIs may include functions for setting session timeouts, resuming interrupted sessions, or synchronizing data transmission between client and server applications.
Session Termination:
APIs facilitate the orderly termination of communication sessions by providing functions for closing connections, releasing resources, and performing cleanup tasks. These APIs ensure that sessions are terminated gracefully, allowing applications to free up resources and release network resources.
When an application has finished exchanging data with another application, it can use session termination APIs to close the connection and release any allocated session-related resources.
Error Handling and Recovery:
APIs may also include error handling and recovery mechanisms to manage session-related errors and failures. These APIs enable applications to detect and handle session errors, recover from communication failures, and maintain data integrity during session operations.
For example, APIs may provide functions for detecting network errors, retransmitting lost data packets, or implementing error correction protocols to ensure reliable session communication.
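As a rough sketch of this life cycle, the snippet below uses Python's standard socket API to establish a session with a timeout, exchange data, handle failures, and terminate cleanly; the peer address and the "HELLO" exchange are hypothetical:

```python
import socket

# Hypothetical peer -- the point is the session life cycle, not the endpoint.
HOST, PORT = "192.0.2.25", 7000

try:
    # Session establishment: open a connection with a negotiated timeout.
    with socket.create_connection((HOST, PORT), timeout=5) as conn:
        conn.sendall(b"HELLO\n")          # exchange application data
        reply = conn.recv(1024)
        print("received:", reply)
    # Leaving the 'with' block is the orderly termination: the connection
    # is closed and its resources are released.
except socket.timeout:
    # Error handling: the session attempt did not complete in time.
    print("session timed out; the caller can retry or report the failure")
except OSError as exc:
    print(f"session failed: {exc}")
```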
Transport Layer Functions
Segmentation:
Segmentation involves breaking down data from the Application Layer into smaller units called segments before transmission over the network. This is necessary because data from higher layers may be too large to fit into the maximum transmission unit (MTU) of the underlying network technology.
The Transport Layer segments the data and adds a header to each segment containing information such as sequence numbers, source and destination port numbers, and checksums for error detection.
Segmentation allows large amounts of data to be transmitted more efficiently across the network and ensures that data can be reassembled correctly at the receiving end.
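A toy segmenter in Python shows the essence of the idea; the MTU value and the simple sequence numbers are illustrative assumptions, not a real TCP implementation:

```python
MTU = 1000          # assumed maximum payload per segment, in bytes

def segment(data: bytes, mtu: int = MTU):
    # Tag each slice with a sequence number so the receiver can reassemble it.
    return [
        (seq, data[offset:offset + mtu])
        for seq, offset in enumerate(range(0, len(data), mtu))
    ]

def reassemble(segments):
    # Segments may arrive out of order; sort by sequence number first.
    return b"".join(payload for _, payload in sorted(segments))

message = b"x" * 4500
segments = segment(message)
print(len(segments), "segments")                      # 5 segments
assert reassemble(reversed(segments)) == message      # order restored on arrival
```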
Flow Control:
Flow control is a mechanism used to manage the rate of data transmission between sender and receiver to prevent data loss and network congestion. It ensures that the sender does not overwhelm the receiver with data faster than it can process.
The Transport Layer uses flow control techniques to regulate the flow of data by controlling the amount of data sent and acknowledging received data. This is typically achieved through techniques such as sliding window protocols.
Flow control mechanisms prevent buffer overflow, reduce the risk of packet loss, and maintain optimal network performance by matching the sender's transmission rate to the receiver's processing capabilities.
Error Control:
Error control ensures the integrity and reliability of data transmission by detecting and correcting errors that may occur during transmission over unreliable network channels. It involves methods for detecting errors, requesting retransmissions of corrupted or lost data, and ensuring data integrity.
The Transport Layer employs error control mechanisms such as checksums, acknowledgments, and retransmissions to detect and recover from transmission errors.
When a segment is received, the receiver calculates a checksum based on the segment's contents and compares it to the checksum value included in the segment's header. If the checksums do not match, it indicates that the segment has been corrupted during transmission, and the receiver requests a retransmission from the sender.
Error control mechanisms help ensure that data is delivered accurately and reliably across the network, even in the presence of transmission errors or network congestion.
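To illustrate the checksum idea, here is a simplified sketch of the 16-bit ones'-complement checksum used (in spirit) by TCP and UDP; real implementations fold the checksum into the segment header rather than comparing it separately:

```python
def internet_checksum(data: bytes) -> int:
    # Ones'-complement sum of 16-bit words, folding any carry back in.
    if len(data) % 2:
        data += b"\x00"                              # pad to whole 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += int.from_bytes(data[i:i + 2], "big")
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

segment = b"example transport payload"
checksum = internet_checksum(segment)            # sender computes and attaches this

# The receiver recomputes the checksum; a mismatch means the segment was
# corrupted in transit and should be retransmitted.
print(internet_checksum(segment) == checksum)    # True while the data is intact
```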
These mechanisms help optimize network performance, prevent data loss, and maintain data integrity during transmission.
TCP & UDP
How does TCP work?
TCP works by using a "three-way handshake", a three-step process that forms a connection between a device and a server. Completing the three steps establishes the connection, starts the transfer of data packets across the internet, delivers them intact, and acknowledges delivery.
Here’s how TCP works:
The client device initiating the data transfer sends a synchronization (SYN) segment to the server. It carries an initial sequence number, telling the server where the data packet transfer should begin.
The server acknowledges the client SYN and sends its own SYN number. This step is often referred to as SYN-ACK (SYN acknowledgement).
The client then acknowledges (ACK) the server’s SYN-ACK, which forms a direct connection and begins the data transfer.
The connection between the sender and receiver is maintained until the transfer is successful. Every time a data packet is sent, it requires an acknowledgment from the receiver. So, if no acknowledgment is received, the data is resent.
If an error is acknowledged, the faulty packet is discarded and the sender delivers a new one. Heavy traffic or other issues may also prevent data from being sent; in that case, the transmission is delayed (without breaking the connection). Thanks to these controls, successful data delivery is guaranteed with TCP.
TCP uses a three-step process that forms (and keeps) a connection between a device and a server.
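In everyday code the handshake is triggered by a single connect call; the sketch below uses Python's standard socket module against a generic web server (the host is a placeholder, and any reachable TCP service would behave the same way):

```python
import socket

# connect() performs the SYN / SYN-ACK / ACK exchange under the hood;
# by the time it returns, the three-way handshake has completed.
HOST, PORT = "example.com", 80

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(sock.recv(200).decode("ascii", errors="replace"))
# Leaving the block closes the socket, tearing the connection down gracefully.
```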
How does UDP work?
The UDP protocol works by immediately firing data at the receiver who made a data transmission request, until the transmission is complete or terminated. Sometimes called a “fire-and-forget” protocol, UDP fires data at a recipient in no particular sequence, without confirming delivery or checking if packets arrived as intended.
While TCP establishes a formal connection via its "handshake" agreement before sending data, UDP doesn't have time for that. It speeds up data transfer by sending packets without making any agreement with a receiver; it's then up to the recipient to make sense of the data.
UDP works by rapid-firing data from sender to receiver until the transfer is completed or terminated.
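The equivalent "fire-and-forget" behaviour looks like this with a UDP socket; the receiver address and port are hypothetical, and note that nothing confirms whether the datagrams ever arrive:

```python
import socket

# Hypothetical receiver -- no handshake, no delivery confirmation.
HOST, PORT = "192.0.2.50", 5005

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"frame 1", (HOST, PORT))   # each sendto() is an independent datagram
sock.sendto(b"frame 2", (HOST, PORT))   # packets may arrive reordered, or not at all
sock.close()
```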
Here’s an analogy to help you understand how TCP and UDP work:
Imagine you’re having lunch at the office and a friend in a different cubicle asks you for half of your sandwich. You have two options: You can walk through the maze of office desks and hand it to her, guaranteeing a secure delivery. Or, you can throw the sandwich into her cubicle from across the room, leaving the quality of the delivery up to her speed and reflexes.
The first method (TCP) is reliable, but slow. The second method (UDP) is fast, but the sandwich might not arrive in its original state — or at all.
Note: connection-oriented transmission is done via TCP, while connectionless transmission is done via UDP.
What are three differences between TCP and UDP?
TCP requires a reliable connection between server and recipient, which can slow down data transfer. UDP is a connectionless protocol and is therefore much quicker.
TCP guarantees flawless data delivery, even if lost or damaged packets are retransmitted. UDP is a “fire-and-forget” protocol that won’t check for errors or resend lost data packets.
UDP is better for broadcasting and live streaming. TCP is better for direct communication, like email, web browsing, or transferring files.
Adapted from "TCP vs UDP: What's the Difference and Which Protocol Is Better?" by Ben Gorman, published February 23, 2023.
Logical Addressing, Routing, and Path Determination
In the OSI model, the Network Layer (Layer 3) is responsible for routing data packets across multiple networks, regardless of the underlying physical infrastructure. This layer facilitates communication between devices on different networks by providing logical addressing, routing, and path determination. Here's an overview of each of these functions:
Logical Addressing:
Logical addressing involves assigning unique addresses to devices on a network to identify them at the network layer. Unlike physical addresses (e.g., MAC addresses) used at the Data Link Layer, logical addresses are hierarchical and independent of the underlying physical network topology.
The most common form of logical addressing is IP (Internet Protocol) addressing, where each device on a network is assigned a unique IP address. IPv4 addresses are 32-bit numbers typically written in dotted-decimal notation (e.g., 192.168.1.1), while IPv6 addresses are 128-bit numbers written in colon-separated hexadecimal (e.g., 2001:0db8:85a3:0000:0000:8a2e:0370:7334).
Logical addressing allows routers to identify the source and destination of data packets and make routing decisions based on network addresses, enabling communication between devices on different networks.
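Python's standard ipaddress module is a handy way to explore logical addresses; the addresses below are the same examples used above:

```python
import ipaddress

# Inspect a logical (IP) address and the network it belongs to.
host = ipaddress.ip_address("192.168.1.1")
network = ipaddress.ip_network("192.168.1.0/24")

print(host.version, host.is_private)    # 4 True
print(host in network)                  # True -- the host sits in this subnet

v6 = ipaddress.ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
print(v6.compressed)                    # 2001:db8:85a3::8a2e:370:7334
```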
Routing:
Routing is the process of determining the best path for data packets to travel from the source to the destination across interconnected networks. It involves making forwarding decisions based on routing tables, which contain information about network topology, available paths, and destination addresses.
Routers are network devices that operate at the Network Layer and are responsible for routing data packets between networks. They examine the destination IP address of incoming packets, consult their routing tables, and forward packets to the next hop along the optimal path towards the destination.
Routing protocols such as RIP (Routing Information Protocol), OSPF (Open Shortest Path First), BGP (Border Gateway Protocol), and EIGRP (Enhanced Interior Gateway Routing Protocol) are used to exchange routing information between routers, update routing tables dynamically, and determine the most efficient paths for data transmission.
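A router's forwarding decision can be sketched as a longest-prefix match over a routing table; the prefixes and next-hop addresses below are invented for illustration, whereas a real table would be populated by protocols such as OSPF or BGP:

```python
import ipaddress

# Toy routing table: (destination prefix, next hop).
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"),  "192.0.2.1"),
    (ipaddress.ip_network("10.1.0.0/16"), "192.0.2.2"),
    (ipaddress.ip_network("0.0.0.0/0"),   "192.0.2.254"),   # default route
]

def next_hop(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    # Longest-prefix match: the most specific matching route wins.
    matches = [(net, hop) for net, hop in routing_table if dest in net]
    return max(matches, key=lambda entry: entry[0].prefixlen)[1]

print(next_hop("10.1.2.3"))    # 192.0.2.2   (the /16 beats the /8)
print(next_hop("8.8.8.8"))     # 192.0.2.254 (falls through to the default route)
```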
Path Determination:
Path determination involves selecting the best route for data packets to reach their destination based on various factors such as network topology, link quality, congestion levels, and administrative policies.
Routers use routing algorithms and metrics to calculate the optimal path for data transmission. These algorithms consider factors such as hop count, bandwidth, delay, reliability, and cost to determine the most suitable path.
Path determination ensures efficient and reliable data transmission by selecting paths that minimize latency, maximize throughput, and avoid network congestion and failures.
These functions enable data packets to be routed across interconnected networks based on destination addresses, ensuring end-to-end connectivity and efficient data transmission in complex network environments.
“Logical addressing” is handled by the Network Layer, whereas “physical addressing” is handled by the Data Link Layer. IP (Internet Protocol) addresses are the most common form of logical addressing used in networking: IPv4 addresses (e.g., 192.168.1.1) and IPv6 addresses (e.g., 2001:0db8:85a3:0000:0000:8a2e:0370:7334) uniquely identify devices on an IP network. MAC (Media Access Control) addresses are the most common form of physical addressing; they are unique identifiers assigned to network interface controllers (NICs) by manufacturers.
Framing, Media Access Control and Error Detection in Layer 2
In the OSI model, the Data Link Layer (Layer 2) is responsible for transmitting data frames between adjacent network nodes over a physical medium. This layer provides mechanisms for framing, media access control (MAC), and error detection to ensure reliable data transmission. Here's an explanation of each function:
Framing:
Framing involves dividing the stream of data bits received from the Physical Layer into discrete frames for transmission. Frames are structured units of data that include a header, data payload, and trailer.
Header: Contains control information such as source and destination MAC addresses, frame type, and frame length.
Data Payload: Carries the actual data to be transmitted.
Trailer: Includes error detection codes (e.g., CRC - Cyclic Redundancy Check) to detect transmission errors.
Framing allows network devices to identify the beginning and end of each frame, facilitating accurate data transmission and reception.
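The sketch below builds such a frame in Python; it is loosely modelled on Ethernet but deliberately simplified (no preamble, EtherType, or minimum-length padding), and the MAC addresses are made up:

```python
import struct
import zlib

def build_frame(src_mac: bytes, dst_mac: bytes, payload: bytes) -> bytes:
    # Header (addresses + payload length), then the data, then a CRC-32 trailer.
    header = dst_mac + src_mac + struct.pack("!H", len(payload))
    trailer = struct.pack("!I", zlib.crc32(header + payload))
    return header + payload + trailer

src = bytes.fromhex("aabbccddeeff")
dst = bytes.fromhex("112233445566")
frame = build_frame(src, dst, b"hello, adjacent node")
print(len(frame), "bytes on the wire")
```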
Media Access Control (MAC):
Media Access Control (MAC) is a sublayer of the Data Link Layer responsible for controlling access to the physical transmission medium. It manages the transmission of data frames between devices connected to the same local network segment.
MAC protocols determine how devices on a shared medium (e.g., Ethernet LAN) can access the medium to transmit data without causing collisions.
Examples of MAC protocols include CSMA/CD (Carrier Sense Multiple Access with Collision Detection) used in Ethernet LANs, where devices listen for a clear channel before transmitting and detect collisions if they occur.
Error Detection:
Error detection mechanisms in the Data Link Layer help detect errors that may occur during data transmission over the physical medium.
Cyclic Redundancy Check (CRC) is a commonly used error detection technique that calculates a checksum based on the contents of the frame. The sender includes the checksum in the frame's trailer, and the receiver recalculates the checksum upon receiving the frame. If the recalculated checksum does not match the received checksum, it indicates that the frame has been corrupted during transmission.
Error detection allows network devices to identify and discard corrupted frames, reducing the likelihood of delivering erroneous data to higher layers in the protocol stack.
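The receiving side of the same idea, again simplified and using CRC-32 from Python's zlib module, recomputes the checksum and discards the frame if it does not match:

```python
import struct
import zlib

def frame_is_intact(frame: bytes) -> bool:
    # Recompute the CRC over everything except the 4-byte trailer and compare.
    body, (received_crc,) = frame[:-4], struct.unpack("!I", frame[-4:])
    return zlib.crc32(body) == received_crc

body = b"some frame body"
frame = body + struct.pack("!I", zlib.crc32(body))
print(frame_is_intact(frame))                        # True -- delivered intact

corrupted = bytes([frame[0] ^ 0xFF]) + frame[1:]     # damage one byte in transit
print(frame_is_intact(corrupted))                    # False -- frame is discarded
```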
These functions are essential for managing communication within the same local network segment and detecting and correcting errors that may occur during transmission over the physical medium.
Conversion of a Data Frame into Bits in the Physical Layer
In the OSI model, the Physical Layer (Layer 1) is responsible for transmitting raw data bits over the physical medium, such as copper wires, fiber optic cables, or wireless transmissions. The process of converting a data frame into bits at the Physical Layer involves several steps:
Encoding:
Encoding is the process of converting digital data into electrical signals suitable for transmission over the physical medium.
Digital data consists of binary bits (0s and 1s), but these bits need to be translated into physical signals that can be sent over the transmission medium.
Different encoding schemes are used depending on the physical medium and communication standards. For example:
In early Ethernet LANs (e.g., 10 Mbps Ethernet), Manchester encoding was commonly used; related schemes such as differential Manchester encoding appear in other LAN technologies.
In fiber optic communications, encoding schemes such as 8B/10B encoding may be used.
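A tiny sketch of Manchester encoding shows how each bit becomes a transition; it follows the IEEE 802.3 convention (0 is high-to-low, 1 is low-to-high), and the bit string is arbitrary:

```python
def manchester_encode(bits: str) -> list:
    # Each bit maps to a pair of signal levels: '0' -> high, low; '1' -> low, high.
    signal = []
    for bit in bits:
        signal.extend([0, 1] if bit == "1" else [1, 0])
    return signal

print(manchester_encode("1011"))   # [0, 1, 1, 0, 0, 1, 0, 1]
```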
Modulation:
Modulation is the process of imposing the digital signal onto an analog carrier wave.
The digital signal (consisting of 0s and 1s) is modulated onto a carrier wave with specific frequency, amplitude, and phase characteristics.
Modulation techniques vary depending on the physical medium. For example:
Amplitude Modulation (AM)
Frequency Modulation (FM)
Phase Modulation (PM)
Different modulation schemes are used for different transmission media, such as copper wires, fiber optics, or wireless channels.
Transmission:
Once the digital data has been encoded and modulated, it is transmitted over the physical medium.
The electrical signals (for wired communication) or electromagnetic waves (for wireless communication) carry the encoded data bits.
The physical characteristics of the transmission medium, such as attenuation, noise, and signal interference, affect the quality and reliability of data transmission.
Reception:
At the receiving end, the transmitted signals are received by the physical layer interface.
The received signals are demodulated to recover the original digital data.
Demodulation reverses the modulation process, extracting the digital signal from the analog carrier wave.
The received digital signal is then decoded to reconstruct the original data frame.
The conversion of a data frame into bits at the Physical Layer involves encoding the digital data into electrical signals, modulating the signals onto a carrier wave, transmitting them over the physical medium, receiving the signals at the destination, demodulating them to recover the digital data, and finally decoding the data frame.
References :
TCP vs UDP: What’s the Difference and Which Protocol Is Better? : https://www.avast.com/c-tcp-vs-udp-difference
OSI Model (Network Security) : https://www.perimeter81.com/glossary/osi-model
OSI Model Explained | OSI Animation | Open System Interconnection Model | OSI 7 layers | TechTerms :
Physical Layer In OSI Model | Functions of Physical Layer | Computer Network Basics | Simplilearn :