Ethernet: Which Layer Protocol?

A unicast MAC address is the unique address used when a frame is sent from a single transmitting device to a single destination device; the figure shows an example. For a unicast packet to be sent and received, a destination IP address must be in the IP packet header. A corresponding destination MAC address must also be present in the Ethernet frame header.

With a broadcast, the packet contains a destination IP address that has all ones (1s) in the host portion. This numbering in the address means that all hosts on that local network (broadcast domain) will receive and process the packet.

As shown in the figure, a broadcast IP address for a network needs a corresponding broadcast MAC address in the Ethernet frame. Recall that multicast addresses allow a source device to send a packet to a group of devices. Devices that belong to a multicast group are assigned a multicast group IP address. The range of IPv4 multicast addresses is 224.0.0.0 to 239.255.255.255. Because multicast addresses represent a group of addresses (sometimes called a host group), they can only be used as the destination of a packet.

The source will always have a unicast address. Examples of where multicast addresses would be used are in remote gaming, where many players are connected remotely but playing the same game, and in distance learning through video conferencing, where many students are connected to the same class. As with the unicast and broadcast addresses, the multicast IP address requires a corresponding multicast MAC address to actually deliver frames on a local network. The multicast MAC address is a special value that begins with 01-00-5E in hexadecimal.

The value ends by converting the lower 23 bits of the IP multicast group address into the remaining 6 hexadecimal characters of the Ethernet address. The remaining bit in the MAC address is always a "0". An example is shown in the graphic; each hexadecimal character represents 4 binary bits.
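
To make the mapping concrete, here is a minimal sketch (not from the original article) that builds the multicast MAC address from an IPv4 multicast group address, assuming the standard 01-00-5E prefix and the 23-bit copy described above:

```python
import ipaddress

def ipv4_multicast_to_mac(group: str) -> str:
    """Map an IPv4 multicast group address to its Ethernet multicast MAC.

    The MAC begins with the fixed prefix 01-00-5E, the next bit is 0, and
    the remaining 23 bits come from the low 23 bits of the group address.
    """
    ip = ipaddress.IPv4Address(group)
    if not ip.is_multicast:
        raise ValueError(f"{group} is not a multicast address")
    low23 = int(ip) & 0x7FFFFF          # keep only the lower 23 bits
    mac_int = 0x01005E000000 | low23    # OR them into the 01-00-5E prefix
    return "-".join(f"{(mac_int >> shift) & 0xFF:02X}" for shift in range(40, -8, -8))

# Example: 224.10.8.5 maps to 01-00-5E-0A-08-05
print(ipv4_multicast_to_mac("224.10.8.5"))
```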

Media Access Control in Ethernet

In a shared media environment, all devices have guaranteed access to the medium, but they have no prioritized claim on it. If more than one device transmits simultaneously, the physical signals collide and the network must recover in order for communication to continue.

Collisions are the cost that Ethernet pays to get the low overhead associated with each transmission. Because all computers using Ethernet send their messages on the same media, a distributed coordination scheme, Carrier Sense Multiple Access (CSMA), is used to detect the electrical activity on the cable.

A device can then determine when it can transmit. When a device detects that no other computer is sending a frame (no carrier signal), the device will transmit if it has something to send.

If a device detects a signal from another device, it will wait for a specified amount of time before attempting to transmit. When there is no traffic detected, a device will transmit its message. While this transmission is occurring, the device continues to listen for traffic or collisions on the LAN.

After the message is sent, the device returns to its default listening mode. If the distance between devices is such that the latency of one device's signals means that signals are not detected by a second device, the second device may start to transmit, too. The media now has two devices transmitting their signals at the same time. Their messages will propagate across the media until they encounter each other. At that point, the signals mix and the message is destroyed. Although the messages are corrupted, the jumble of remaining signals continues to propagate across the media.

When a device is in listening mode, it can detect when a collision occurs on the shared media. The detection of a collision is made possible because all devices can detect an increase in the amplitude of the signal above the normal level.

Once a collision occurs, the other devices in listening mode - as well as all the transmitting devices - will detect the increase in the signal amplitude.

Once detected, every transmitting device will continue to transmit to ensure that all devices on the network detect the collision.

Jam Signal and Random Backoff

Once the collision is detected by the transmitting devices, they send out a jamming signal. This jamming signal is used to notify the other devices of a collision, so that they will invoke a backoff algorithm.

This backoff algorithm causes all devices to stop transmitting for a random amount of time, which allows the collision signals to subside. After the delay has expired on a device, the device goes back into the "listening before transmit" mode.

A random backoff period ensures that the devices that were involved in the collision do not try to send their traffic again at the same time, which would cause the whole process to repeat. However, this also means that a third device may transmit before either of the two devices involved in the original collision has a chance to retransmit.
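
The listen, transmit, detect, and back-off cycle described above can be sketched in code. The following is a conceptual illustration only; medium_busy and collision_occurred are hypothetical callbacks standing in for what a real NIC does in hardware, not part of any actual API:

```python
import random

SLOT_TIME_BITS = 512      # slot time for 10/100 Mbps Ethernet (see the timing section)
MAX_ATTEMPTS = 16         # the MAC layer gives up after 16 failed attempts

def csma_cd_send(medium_busy, collision_occurred):
    """Conceptual CSMA/CD transmit loop for a single frame."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        # Carrier sense: listen until no other station is transmitting.
        while medium_busy():
            pass
        # Transmit while continuing to listen; collision_occurred() models
        # the collision-detection circuitry.
        if not collision_occurred():
            return attempt                    # frame delivered
        # Collision: a jam signal is sent, then the station backs off a
        # random number of slot times before trying again.
        backoff_slots = random.randint(0, 2 ** min(attempt, 10) - 1)
        _ = backoff_slots * SLOT_TIME_BITS    # delay, expressed in bit times
    raise RuntimeError("excessive collisions: frame dropped after 16 attempts")

# Toy run: the medium is idle, and only the first attempt collides.
results = iter([True, False])                 # first call collides, second does not
print("delivered on attempt",
      csma_cd_send(medium_busy=lambda: False,
                   collision_occurred=lambda: next(results)))
```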

The rapid growth of the Internet means that more devices are being connected to the network. Recall that hubs were created as intermediary network devices that enable more nodes to connect to the shared media.

Also known as multi-port repeaters, hubs retransmit received data signals to all connected devices, except the one from which the signals were received. Hubs do not perform network functions such as directing data based on addresses.

Hubs and repeaters are intermediary devices that extend the distance that Ethernet cables can reach. Because hubs operate at the Physical layer, dealing only with the signals on the media, collisions can occur between the devices they connect and within the hubs themselves. Further, using hubs to provide network access to more users reduces the performance for each user because the fixed capacity of the media has to be shared between more and more devices. The connected devices that access a common media via a hub or series of directly connected hubs make up what is known as a collision domain.

A collision domain is also referred to as a network segment. Hubs and repeaters therefore have the effect of increasing the size of the collision domain. As shown in the figure, the interconnection of hubs forms a physical topology called an extended star.

The extended star can create a greatly expanded collision domain. An increased number of collisions reduces the network's efficiency and effectiveness until the collisions become a nuisance to the user.

Therefore, other mechanisms are required when large numbers of users require access and when more active network access is needed. We will see that using switches in place of hubs can begin to alleviate this problem.

Ethernet Timing

Faster Physical layer implementations of Ethernet introduce complexities to the management of collisions. As discussed, each device that wants to transmit must first "listen" to the media to check for traffic.

If no traffic exists, the station will begin to transmit immediately. The electrical signal that is transmitted takes a certain amount of time (latency) to propagate (travel) down the cable.

Each hub or repeater in the signal's path adds latency as it forwards the bits from one port to the next. This accumulated delay increases the likelihood that collisions will occur because a listening node may transition into transmitting signals while the hub or repeater is processing the message.

Because the signal had not reached this node while it was listening, it thought that the media was available. This condition often results in collisions. In half-duplex mode, if a collision has not occurred, the sending device will transmit 64 bits of timing synchronization information, which is known as the Preamble.

The sending device will then transmit the complete frame. Ethernet implementations with throughput of 10 Mbps and slower are asynchronous. An asynchronous communication in this context means that each receiving device will use the 8 bytes of timing information to synchronize the receive circuit to the incoming data and then discard the 8 bytes.

Ethernet implementations with throughput of 100 Mbps and higher are synchronous. Synchronous communication in this context means that the timing information is not required. For each different media speed, a period of time is required for a bit to be placed and sensed on the media. This period of time is referred to as the bit time. At 10 Mbps, a bit requires 100 ns to be transmitted. At 100 Mbps, that same bit requires 10 ns to transmit. And at 1000 Mbps, it only takes 1 ns to transmit a bit.
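
As a quick check on these numbers, the bit time is simply the reciprocal of the data rate; the sketch below (illustrative only) reproduces the values quoted above:

```python
def bit_time_ns(rate_mbps: float) -> float:
    """Time to place one bit on the media, in nanoseconds (1 / data rate)."""
    return 1000.0 / rate_mbps     # at 1 Mbps a bit lasts 1 microsecond = 1000 ns

for rate in (10, 100, 1000):
    print(f"{rate:>5} Mbps -> {bit_time_ns(rate):g} ns per bit")
# 10 Mbps -> 100 ns, 100 Mbps -> 10 ns, 1000 Mbps -> 1 ns
```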

As a rough estimate, at 100 Mbps the device timing is barely able to accommodate 100 meter cables. At 1000 Mbps, special adjustments are required because nearly an entire minimum-sized frame would be transmitted before the first bit reached the end of the first 100 meters of UTP cable.

For this reason, half-duplex mode is not permitted in 10 Gigabit Ethernet. These timing considerations have to be applied to the interframe spacing and backoff times (both of which are discussed in the next section) to ensure that when a device transmits its next frame, the risk of a collision is minimized. In half-duplex Ethernet, where data can only travel in one direction at once, slot time becomes an important parameter in determining how many devices can share a network.

For all speeds of Ethernet transmission at or below 1000 Mbps, the standard describes how an individual transmission may be no smaller than the slot time. Determining slot time is a trade-off between the need to reduce the impact of collision recovery (backoff and retransmission times) and the need for network distances to be large enough to accommodate reasonable network sizes. The compromise was to choose a maximum network diameter (about 2500 meters) and then to set the minimum frame length long enough to ensure detection of all worst-case collisions.

Slot time for 10 and 100 Mbps Ethernet is 512 bit times, or 64 octets. Slot time for 1000 Mbps Ethernet is 4096 bit times, or 512 octets. The slot time ensures that if a collision is going to occur, it will be detected within the first 512 bits (4096 for Gigabit Ethernet) of the frame transmission. This simplifies the handling of frame retransmissions following a collision. Slot time is an important parameter for the following reason: the 512-bit slot time establishes the minimum size of an Ethernet frame as 64 bytes.

Any frame less than 64 bytes in length is considered a "collision fragment" or "runt frame" and is automatically discarded by receiving stations. Slot time is calculated assuming maximum cable lengths on the largest legal network architecture. All hardware propagation delay times are at the legal maximum and the 32-bit jam signal is used when collisions are detected. The actual calculated slot time is just longer than the theoretical amount of time required to travel between the furthest points of the collision domain, collide with another transmission at the last possible instant, and then have the collision fragments return to the sending station and be detected.
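
These figures can be tied together with a short calculation. The sketch below simply converts the slot-time values quoted above into octets and microseconds; it is an illustration, not part of the standard text:

```python
SLOT_TIME_BITS = {10: 512, 100: 512, 1000: 4096}   # bit times, per the values above

for rate_mbps, bits in SLOT_TIME_BITS.items():
    octets = bits // 8                      # 512 bits = 64 octets, 4096 bits = 512 octets
    microseconds = bits / rate_mbps         # bits divided by bits-per-microsecond
    print(f"{rate_mbps:>4} Mbps: slot time = {bits} bit times "
          f"= {octets} octets = {microseconds:.2f} microseconds")
# A station is still transmitting a minimum-sized frame when a worst-case
# collision signal returns, which is what makes the collision detectable.
```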

See the figure. For the system to work properly, the first device must learn about the collision before it finishes sending the smallest legal frame size. To allow 1000 Mbps Ethernet to operate in half-duplex mode, the extension field was added to the frame when sending small frames, purely to keep the transmitter busy long enough for a collision fragment to return.

This field is present only on 1000 Mbps, half-duplex links and allows minimum-sized frames to be long enough to meet slot time requirements. Extension bits are discarded by the receiving device.
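
A rough sketch of the carrier-extension arithmetic (illustrative only, assuming the 4096-bit / 512-octet Gigabit slot time given above and counting the frame from the destination address through the FCS):

```python
GIGABIT_SLOT_OCTETS = 512   # 4096 bit times for half-duplex 1000 Mbps Ethernet

def extension_octets(frame_octets: int) -> int:
    """Extension octets appended so the transmission lasts at least one slot time."""
    return max(0, GIGABIT_SLOT_OCTETS - frame_octets)

print(extension_octets(64))     # a minimum-sized frame is padded with 448 extension octets
print(extension_octets(1518))   # a full-sized frame needs no extension
```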

The Ethernet standards require a minimum spacing between two non-colliding frames. This gives the media time to stabilize after the transmission of the previous frame and time for the devices to process the frame. Referred to as the interframe spacing, this time is measured from the last bit of the FCS field of one frame to the first bit of the Preamble of the next frame. After a frame has been sent, all devices on a 10 Mbps Ethernet network are required to wait a minimum of 96 bit times (9.6 microseconds) before any device can transmit its next frame.

On faster versions of Ethernet, the spacing remains the same - 96 bit times - but the interframe spacing time period grows correspondingly shorter.
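
A quick calculation (illustrative only) shows how the fixed 96-bit gap shrinks in absolute time as the data rate rises:

```python
IFG_BITS = 96   # interframe spacing is 96 bit times at every Ethernet speed

def interframe_gap_us(rate_mbps: int) -> float:
    """Interframe gap in microseconds: 96 bit times divided by the data rate."""
    return IFG_BITS / rate_mbps             # bits / (bits per microsecond)

for rate in (10, 100, 1000, 10000):
    print(f"{rate:>6} Mbps -> {interframe_gap_us(rate):.4f} microseconds")
# 9.6000, 0.9600, 0.0960 and 0.0096 microseconds respectively
```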

Synchronization delays between devices may result in the loss of some of the frame preamble bits. This, in turn, may cause a minor reduction of the interframe spacing when hubs and repeaters regenerate the full 64 bits of timing information (the Preamble and SFD) at the start of every frame forwarded.

On higher speed Ethernet, some time-sensitive devices could potentially fail to recognize individual frames, resulting in communication failure. As you will recall, Ethernet allows all devices to compete for transmitting time. But remember, when a larger number of devices are added to the network, it is possible for the collisions to become increasingly difficult to resolve. As soon as a collision is detected, the sending devices transmit a 32-bit "jam" signal that will enforce the collision.

This ensures that all devices in the LAN detect the collision. It is important that the jam signal not be detected as a valid frame; otherwise the collision would not be identified. The most commonly observed data pattern for a jam signal is simply a repeating 1, 0, 1, 0 pattern, the same as the Preamble.
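
For illustration, a 32-bit jam pattern built from that repeating 1, 0 sequence looks like this (a sketch; real NIC hardware generates this signal itself):

```python
JAM_BITS = 32
jam = int("10" * (JAM_BITS // 2), 2)            # repeating 1,0 pattern, like the Preamble
print(f"{jam:0{JAM_BITS}b}  ->  {jam:#010x}")   # 1010...10  ->  0xaaaaaaaa
```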

The corrupted, partially transmitted messages are often referred to as collision fragments or runts. Normal collisions are less than 64 octets in length and therefore fail both the minimum length test and the FCS test, making them easy to identify. After a collision occurs and all devices allow the cable to become idle (each waits the full interframe spacing), the devices whose transmissions collided must wait an additional - and potentially progressively longer - period of time before attempting to retransmit the collided frame.

The waiting period is intentionally designed to be random so that two stations do not delay for the same amount of time before retransmitting, which would result in more collisions. This is accomplished in part by expanding the interval from which the random retransmission time is selected on each retransmission attempt. The waiting period is measured in increments of the parameter slot time. If media congestion results in the MAC layer being unable to send the frame after 16 attempts, it gives up and generates an error to the Network layer.
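
This scheme is commonly implemented as truncated binary exponential backoff: after the nth consecutive collision, a station waits a random whole number of slot times between 0 and 2^min(n, 10) - 1, and gives up after 16 attempts. The sketch below (illustrative, using the 10 Mbps slot time of 51.2 microseconds) shows the idea:

```python
import random

SLOT_TIME_US = 51.2      # 512 bit times at 10 Mbps

def backoff_delay_us(collision_count: int) -> float:
    """Random backoff delay, in microseconds, after the nth consecutive collision."""
    if collision_count > 16:
        # The MAC layer gives up and reports an error to the Network layer.
        raise RuntimeError("excessive collisions: frame discarded")
    k = min(collision_count, 10)                 # the interval stops growing after 10
    slots = random.randint(0, 2 ** k - 1)        # whole number of slot times
    return slots * SLOT_TIME_US

random.seed(42)                                  # repeatable demonstration
for n in (1, 2, 3, 10):
    print(f"after collision #{n}: wait {backoff_delay_us(n):.1f} microseconds")
```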

Such an occurrence is rare in a properly operating network and would happen only under extremely heavy network loads or when a physical problem exists on the network. The methods described in this section allowed Ethernet to provide greater service in a shared media topology based on the use of hubs.

Ethernet is covered by the IEEE 802.3 standards. Four data rates are currently defined for operation over optical fiber and twisted-pair cables: 10 Mbps (10BASE-T Ethernet), 100 Mbps (Fast Ethernet), 1000 Mbps (Gigabit Ethernet), and 10 Gbps (10 Gigabit Ethernet). While there are many different implementations of Ethernet at these various data rates, only the more common ones will be presented here.

The figure shows some of the Ethernet PHY characteristics. The portion of Ethernet that operates on the Physical layer will be discussed in this section, beginning with 10Base-T and continuing to 10 Gbps varieties.

The earliest Ethernet implementations, which ran over coaxial cable, are no longer used and are not supported by the newer 802.3 standards. 10BASE-T originally used Cat3 cabling; however, Cat5 or later cabling is typically used today. The pair connected to pins 1 and 2 is used for transmitting, and the pair connected to pins 3 and 6 is used for receiving. The replacement of hubs with switches in 10BASE-T networks has greatly increased the throughput available to these networks and has given Legacy Ethernet greater longevity. In the mid-to-late 1990s, several new 802.3 standards were established to specify Ethernet implementations at 100 Mbps (Fast Ethernet). These standards used different encoding requirements for achieving these higher data rates.

The most popular implementations of 100 Mbps Ethernet are 100BASE-TX, which uses Cat5 or later UTP, and 100BASE-FX, which uses fiber-optic cable. Because the higher frequency signals used in Fast Ethernet are more susceptible to noise, two separate encoding steps are used by 100 Mbps Ethernet to enhance signal integrity. The figure shows an example of a physical star topology. Although the encoding, decoding, and clock recovery procedures are the same for both media, the signal transmission is different - electrical pulses in copper and light pulses in optical fiber.

Fiber implementations are point-to-point connections, that is, they are used to interconnect two devices. These connections may be between two computers, between a computer and a switch, or between two switches. The development of Gigabit Ethernet standards resulted in specifications for UTP copper, single-mode fiber, and multimode fiber. On Gigabit Ethernet networks, bits occur in a fraction of the time that they take on 100 Mbps networks and 10 Mbps networks.

With signals occurring in less time, the bits become more susceptible to noise, and therefore timing is critical.

Network Layer 2 Protocols

Depending upon their requirements, certain protocols are chosen over others. Here is a list of commonly used Layer 2 protocols:

LLDP (Link Layer Discovery Protocol): LLDP is vendor neutral, is commonly used as a component in network management and network monitoring applications, and is preferred for discovering switches.

IP route: this command contains information from the IP routing table that can be used to forward a packet along the best path toward its destination.

ARP (Address Resolution Protocol): ARP translates 32-bit IPv4 addresses to 48-bit MAC addresses and vice versa, and is used by IPv4 devices.

MLT (Multi-Link Trunking): MLT provides high-speed, fault-tolerant connections between servers, switches, and routers by grouping multiple Ethernet links into a single logical Ethernet link.

CAN (Controller Area Network): CAN facilitates communication between the applications of microcontrollers and their devices without relying on a host computer.

An Ethernet frame carries the following fields:

Destination Address - the address(es) specified for a unicast, a multicast (subgroup), or a broadcast (an entire group).
Source Address - the address of the sending device; the source is always a unicast address.
EtherType - defines which upper layer protocol will utilize the Ethernet frame; it identifies the network layer protocol type of the packet.
Data - the upper layer packet; this part gets inserted into the data field.
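
As an illustration of those fields, the short sketch below pulls the destination address, source address, and EtherType out of a raw frame. The byte offsets are the standard Ethernet II layout and the sample addresses are made up for the example; none of this comes from the article itself:

```python
def parse_ethernet_header(frame: bytes) -> dict:
    """Split a raw Ethernet II frame into header fields and payload."""
    if len(frame) < 14:
        raise ValueError("frame too short to contain an Ethernet header")
    return {
        "destination": frame[0:6].hex("-"),        # unicast, multicast, or broadcast
        "source": frame[6:12].hex("-"),            # always a unicast address
        "ethertype": int.from_bytes(frame[12:14], "big"),  # upper layer protocol
        "data": frame[14:],                        # payload handed to that protocol
    }

# Example: a broadcast frame carrying an ARP payload (EtherType 0x0806).
sample = bytes.fromhex("ffffffffffff") + bytes.fromhex("00005e005301") \
         + bytes.fromhex("0806") + bytes(46)
fields = parse_ethernet_header(sample)
print(fields["destination"], hex(fields["ethertype"]))   # ff-ff-ff-ff-ff-ff 0x806
```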

The Ethernet protocol allows for bus, star, or tree topologies, depending on the type of cables used and other factors. The original Ethernet ran over thick coaxial cable; this heavy coaxial cabling was expensive to purchase, install, and maintain, and very difficult to retrofit into existing facilities.

The current standards are now built around the use of twisted pair wire. Fiber cable can also be used at this level, as in 10BaseFL. The Fast Ethernet protocol supports transmission up to 100 Mbps.

In addition, category 5 twisted pair or fiber optic cable is necessary. Fast Ethernet standards include 100BASE-TX over twisted pair and 100BASE-FX over fiber. The Gigabit Ethernet standard is a protocol that has a transmission speed of 1 Gbps (1000 Mbps). It can be used with both fiber optic cabling and copper. The Ethernet standards continue to evolve. Several very popular network protocols, commonly used in the 1990s and early 21st century, have now largely fallen into disuse.

While you may hear terms from time to time, such as "LocalTalk" (Apple) or "Token Ring" (IBM), you will rarely find these systems still in operation. Although they played an important role in the evolution of networking, their performance and capacity limitations have relegated them to the past, in the wake of the standardization of Ethernet driven by the success of the Internet.

The network layer is in charge of routing network messages (data) from one computer to another. Every network device, such as a network interface card or printer, has a physical address called a MAC (Media Access Control) address.


