Experiment 1
Objective: Study of hardware and software requirements of different operating systems (Windows 10, UNIX, Linux, Windows XP, and Windows 7/8).
Theory:
An operating system (OS) is software that manages and handles the hardware and software resources of a system. It provides interaction between the users of a computer and the computer hardware.
An operating system is responsible for managing and controlling all the activities and sharing of computer resources.
An operating system is low-level software that provides all the basic functions such as processor management, memory management, error detection, etc.
Figure: Abstract view of a computer system.
Windows 10
Hardware requirements
Processor: 1 GHz or faster processor
RAM: 1 GB for 32-bit or 2 GB for 64-bit
Storage space: 16 GB for 32-bit OS or 20 GB for 64-bit OS.
Display: 800 x 600 resolution with WDDM driver
Software requirements
Graphics card: DirectX 9 or later with WDDM 1.0 driver
UNIX
Hardware requirements
Processor: Minimum of 1 GHz processor
RAM: Minimum of 1 GB RAM
Storage space: Minimum of 10 GB free disk space
Software requirements
UNIX-compatible operating system, such as Sun Solaris, IBM AIX, HP-UX, etc.
Compiler and development tools
X Window System for graphical user interface
Networking tools for network communication
Linux
Hardware Requirements:
Processor: Minimum of 1 GHz processor
RAM: Minimum of 1 GB RAM (2 GB or more recommended for better performance)
Storage space: Minimum of 10 GB free disk space (20 GB or more recommended for better performance)
Software Requirements:
Linux distribution, such as Ubuntu, Fedora, CentOS, Debian, etc.
Graphical user interface
Compiler and development tools
Networking tools for network communication
Windows XP
Hardware Requirements:
Processor: Minimum of Pentium 233 MHz processor (300 MHz or higher recommended)
RAM: Minimum of 64 MB RAM (128 MB or higher recommended)
Storage space: Minimum of 1.5 GB free disk space
Software Requirements:
Windows XP operating system
DirectX 9 graphics device with WDDM driver (optional for graphical user interface)
Networking tools for network communication (optional)
Windows 7/8
Hardware Requirements:
Processor: Minimum of 1 GHz processor
RAM: Minimum of 1 GB RAM (2 GB or higher)
Storage space: Minimum of 16 GB free disk space (20 GB or higher)
Software Requirements:
Windows 7 or Windows 8 operating system
DirectX 9 graphics device with WDDM 1.0 or higher driver (optional for graphical user interface)
Networking tools for network communication
Experiment 2
Objective: Execute various system calls.
Process Management
File Management
Input/Output System Calls
Process Management: Process management uses certain system calls. They are explained below.
fork(): This system call is used to create a new process.
exec(): This system call is used to run a new program.
wait(): This system call is used to make a process wait until one of its child processes terminates.
exit(): This system call is used to terminate a process.
getpid(): This system call is used to find the unique process ID of the calling process.
getppid(): This system call is used to find the parent process ID.
nice(): This system call is used to change the scheduling priority (niceness) of the currently running process.
Example program for fork()
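A minimal sketch of such a program, assuming a POSIX environment (the original listing is not reproduced here):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();               /* create a child process */

    if (pid < 0) {
        perror("fork failed");
        return 1;
    } else if (pid == 0) {
        /* child: fork() returned 0 */
        printf("Child process: PID = %d, parent PID = %d\n", getpid(), getppid());
    } else {
        /* parent: fork() returned the child's PID */
        wait(NULL);                   /* wait for the child to terminate */
        printf("Parent process: PID = %d, child PID = %d\n", getpid(), pid);
    }
    return 0;
}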
OUTPUT
File Management: There are four system calls for file management:
open(): This system call returns the file descriptor of a file. Since read() and write() take a file descriptor as their first parameter, open() is called first to obtain it.
read(): This system call is used to read content from a file. It can also be used to read input from the keyboard by specifying 0 (standard input) as the file descriptor.
write(): This system call is used to write content to a file.
close(): This system call is used to close an opened file; it tells the operating system that you are done with the file.
Example program for file management
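A minimal sketch using open(), write(), read(), and close(), assuming a POSIX environment; the file name sample.txt is a hypothetical example:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    char msg[] = "Hello, system calls!\n";
    char buf[64];

    /* open (creating/truncating if necessary) the file for writing */
    int fd = open("sample.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, msg, strlen(msg));        /* write the message to the file */
    close(fd);

    fd = open("sample.txt", O_RDONLY);  /* reopen the file for reading */
    if (fd < 0) { perror("open"); return 1; }
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n < 0) { perror("read"); return 1; }
    buf[n] = '\0';
    close(fd);

    printf("Read back: %s", buf);
    return 0;
}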
OUTPUT
Input/Output System Calls: There are five basic I/O system calls:
creat(): Used to create a new empty file.
open(): Used to open a file for reading, writing, or both.
close(): Tells the operating system that you are done with a file descriptor and closes the file pointed to by fd.
read(): Reads bytes of input from the file indicated by the file descriptor fd into the memory area indicated by buf.
write(): Writes bytes from buf to the file or socket associated with fd.
Example program for Input/output System Calls
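A minimal sketch exercising the five calls above, assuming a POSIX environment; the file name echo.txt is a hypothetical example:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    char buf[128];

    write(1, "Enter some text: ", 17);        /* fd 1 = standard output */
    ssize_t n = read(0, buf, sizeof(buf));    /* fd 0 = standard input */
    if (n < 0) { perror("read"); return 1; }

    int fd = creat("echo.txt", 0644);         /* create a new empty file */
    if (fd < 0) { perror("creat"); return 1; }
    write(fd, buf, n);                        /* copy the input to the file */
    close(fd);                                /* done with the descriptor */

    write(1, "You typed: ", 11);
    write(1, buf, n);                         /* echo the input to the screen */
    return 0;
}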
OUTPUT
Experiment 3
Objective: Implement CPU Scheduling Policies:
SJF
FCFS
Theory:
CPU scheduling is a core operating system function that manages the allocation of CPU resources to processes.
Its primary goal is to maximize system throughput, minimize response time, and ensure fairness among processes.
Scheduling Criteria:
CPU Utilization: Maximizing CPU utilization ensures that the CPU is kept busy executing processes as much as possible.
Throughput: Throughput measures the number of processes completed per unit of time, indicating system efficiency.
Turnaround Time: Turnaround time is the total time taken to execute a process, including waiting time and execution time.
Waiting Time: Waiting time is the total time a process spends waiting in the ready queue before getting CPU time.
Response Time: Response time is the time taken from submitting a request until the first response is produced.
Shortest Job First (SJF): SJF scheduling selects the process with the smallest execution time next. This scheduling algorithm can be preemptive or non-preemptive.
First Come First Serve (FCFS): FCFS scheduling selects the process that arrives first. It is a non-preemptive scheduling algorithm.
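A minimal C sketch of FCFS under the simplifying assumption that all processes arrive at time 0; sorting the burst times in ascending order first (keeping track of process IDs) would turn the same calculation into non-preemptive SJF. The burst times are hypothetical sample data:

#include <stdio.h>

#define N 4

int main(void) {
    /* hypothetical burst times, listed in arrival order for FCFS */
    int burst[N] = {6, 8, 7, 3};
    int waiting[N], turnaround[N];
    float avg_wt = 0, avg_tat = 0;

    waiting[0] = 0;                           /* the first process never waits */
    for (int i = 1; i < N; i++)
        waiting[i] = waiting[i - 1] + burst[i - 1];

    for (int i = 0; i < N; i++) {
        turnaround[i] = waiting[i] + burst[i];
        avg_wt += waiting[i];
        avg_tat += turnaround[i];
    }

    printf("P\tBT\tWT\tTAT\n");
    for (int i = 0; i < N; i++)
        printf("P%d\t%d\t%d\t%d\n", i + 1, burst[i], waiting[i], turnaround[i]);
    printf("Average waiting time: %.2f\n", avg_wt / N);
    printf("Average turnaround time: %.2f\n", avg_tat / N);
    return 0;
}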
Experiment 4
Objective: Implement CPU Scheduling Policies:
Priority Scheduling
Round Robin
Theory:
Preemptive Scheduling: Preemptive scheduling is used when a process switches from the running state to the ready state or from the waiting state to the ready state. The resources are allocated to the process for a limited amount of time and then taken away, and the process is again placed back in the ready queue if that process still has CPU burst time remaining.
Round Robin: Round Robin is a CPU scheduling algorithm in which each process is cyclically assigned a fixed time slot. It is the preemptive version of the First Come First Serve CPU scheduling algorithm.
Algorithm
Step 1: Start the process.
Step 2: Accept the number of processes in the ready queue and the time quantum.
Step 3: For each process in the ready queue, assign the process and accept its CPU burst time.
Step 4: Each process runs for a predefined time quantum.
Step 5: If a process finishes execution within the time quantum, it is removed from the queue. If a process does not finish within the time quantum, it is preempted and put at the end of the queue to await further execution.
Step 6: When a process is preempted, the CPU switches to the next process in the queue.
Step 7: Continue this process of executing each process for its time quantum until all processes have completed their execution.
Step 8: Stop.
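A minimal C sketch of these steps, assuming all processes arrive at time 0 so a simple cyclic scan stands in for the circular queue; the burst times and time quantum are hypothetical sample data:

#include <stdio.h>

#define N 3
#define QUANTUM 2

int main(void) {
    /* hypothetical burst times; all processes assumed to arrive at time 0 */
    int burst[N] = {5, 3, 8};
    int remaining[N], waiting[N], turnaround[N];
    int time = 0, done = 0;

    for (int i = 0; i < N; i++) remaining[i] = burst[i];

    while (done < N) {
        for (int i = 0; i < N; i++) {         /* cycle through the ready queue */
            if (remaining[i] == 0) continue;
            int run = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            time += run;                      /* run for one quantum (or less) */
            remaining[i] -= run;
            if (remaining[i] == 0) {          /* finished: record completion */
                turnaround[i] = time;
                waiting[i] = turnaround[i] - burst[i];
                done++;
            }
        }
    }

    printf("P\tBT\tWT\tTAT\n");
    for (int i = 0; i < N; i++)
        printf("P%d\t%d\t%d\t%d\n", i + 1, burst[i], waiting[i], turnaround[i]);
    return 0;
}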
Priority Scheduling Algorithm: Preemptive priority scheduling is a method of CPU scheduling that works based on the priority of a process. The scheduler schedules tasks according to their priority, which means that a higher-priority process is executed first.
Algorithm
Step-1: Select the process whose arrival time is 0; it must be selected because it is the only process executing at time t = 0.
Step-2: Check the priority of the next available process. Here we need to check for three conditions:
if priority(current process) > priority(prior process): execute the current process.
if priority(current process) < priority(prior process): execute the prior process.
if priority(current process) = priority(prior process): execute the process that arrived first.
Step-3: Repeat Step-2 until it reaches the final process.
Step-4: When it reaches the final process, choose the process with the highest priority and execute it. Repeat the same step until all processes complete their execution.
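A minimal C sketch of preemptive priority scheduling, simulated one time unit at a time; a lower priority number is assumed to mean higher priority, ties are broken by arrival time, and the process data are hypothetical:

#include <stdio.h>

#define N 4

int main(void) {
    /* hypothetical arrival times, burst times, and priorities
       (lower number = higher priority in this sketch) */
    int arrival[N]  = {0, 1, 2, 3};
    int burst[N]    = {5, 4, 2, 1};
    int priority[N] = {2, 1, 3, 0};
    int remaining[N], completion[N];
    int done = 0, time = 0;

    for (int i = 0; i < N; i++) remaining[i] = burst[i];

    while (done < N) {
        int best = -1;
        for (int i = 0; i < N; i++)            /* pick highest-priority arrived process */
            if (arrival[i] <= time && remaining[i] > 0 &&
                (best == -1 || priority[i] < priority[best] ||
                 (priority[i] == priority[best] && arrival[i] < arrival[best])))
                best = i;
        if (best == -1) { time++; continue; }  /* CPU idle until the next arrival */
        remaining[best]--;                     /* run one time unit, then re-evaluate
                                                  (this is what makes it preemptive) */
        time++;
        if (remaining[best] == 0) { completion[best] = time; done++; }
    }

    printf("P\tAT\tBT\tPri\tCT\tTAT\tWT\n");
    for (int i = 0; i < N; i++) {
        int tat = completion[i] - arrival[i];
        printf("P%d\t%d\t%d\t%d\t%d\t%d\t%d\n", i + 1, arrival[i], burst[i],
               priority[i], completion[i], tat, tat - burst[i]);
    }
    return 0;
}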
Experiment 5
Objective: Implementation of Banker's Algorithm.
Theory:
Banker's Algorithm: The Banker's algorithm is a resource allocation and deadlock avoidance algorithm. It tests for safety by simulating allocation up to the predetermined maximum possible amounts of all resources, then performs a safe-state check before deciding whether the allocation should be allowed to continue.
The following data structures are used to implement the Banker's algorithm. Let n be the number of processes in the system and m be the number of resource types.
Available:
It is a 1-d array of size m indicating the number of available resources of each type.
Available[j] = k means there are k instances of resource type Rj
Max:
It is a 2-d array of size n*m that defines the maximum demand of each process in a system.
Max[i][j] = k means process Pi may request at most k instances of resource type Rj.
Allocation:
It is a 2-d array of size n*m that defines the number of resources of each type currently allocated to each process.
Allocation[i][j] = k means process Pi is currently allocated k instances of resource type Rj.
Need:
It is a 2-d array of size n*m that indicates the remaining resource need of each process.
Need[i][j] = k means process Pi currently needs k instances of resource type Rj for its execution.
Need[i][j] = Max[i][j] - Allocation[i][j]
Allocation[i] specifies the resources currently allocated to process Pi, and Need[i] specifies the additional resources that process Pi may still request to complete its task. The Banker's algorithm consists of the Safety algorithm and the Resource-Request algorithm.
Safety Algorithm:
Step 1: Let Work and Finish be vectors of length m and n respectively. Initialize Work = Available and Finish[i] = false for all i.
Step 2: Find an i such that both Finish[i] = false and Need[i] <= Work. If no such i exists, go to Step 4.
Step 3: Set Work = Work + Allocation[i] and Finish[i] = true. Go to Step 2.
Step 4: If Finish[i] = true for all i, then the system is in a safe state.
Resource-Request Algorithm: Let Request[i] be the request array for process Pi. Request[i][j] = k means process Pi wants k instances of resource type Rj. When a request for resources is made by process Pi, the following actions are taken:
Step 1: If Request[i] <= Need[i], go to Step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
Step 2: If Request[i] <= Available, go to Step 3; otherwise, Pi must wait, since the resources are not available.
Step 3: Have the system pretend to have allocated the requested resources to process Pi by modifying the state as follows:
Available = Available - Request[i]
Allocation[i] = Allocation[i] + Request[i]
Need[i] = Need[i] - Request[i]
If the resulting state is safe, the resources are allocated to Pi; otherwise, Pi must wait and the old resource-allocation state is restored.
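A minimal C sketch of the Safety algorithm using the data structures defined above; the matrix values are a hypothetical example instance:

#include <stdio.h>
#include <stdbool.h>

#define N 5  /* number of processes */
#define M 3  /* number of resource types */

int main(void) {
    /* hypothetical example data */
    int available[M] = {3, 3, 2};
    int max[N][M]   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
    int alloc[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int need[N][M], work[M], safe_seq[N], count = 0;
    bool finish[N] = {false};

    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            need[i][j] = max[i][j] - alloc[i][j];   /* Need = Max - Allocation */

    for (int j = 0; j < M; j++) work[j] = available[j];  /* Work = Available */

    while (count < N) {
        bool found = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            int j;
            for (j = 0; j < M; j++)                 /* is Need[i] <= Work? */
                if (need[i][j] > work[j]) break;
            if (j == M) {
                for (int k = 0; k < M; k++)
                    work[k] += alloc[i][k];         /* Work = Work + Allocation[i] */
                finish[i] = true;
                safe_seq[count++] = i;
                found = true;
            }
        }
        if (!found) { printf("System is NOT in a safe state.\n"); return 0; }
    }

    printf("System is in a safe state.\nSafe sequence:");
    for (int i = 0; i < N; i++) printf(" P%d", safe_seq[i]);
    printf("\n");
    return 0;
}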
Experiment 6
Objective: Implementation of Resource Allocation Graph (RAG)
Theory:
A Resource Allocation Graph (RAG) is a fundamental concept in operating systems and concurrent programming. It is a directed graph used to model the allocation of resources to processes in a system, and it helps in visualizing the relationships between processes and resources so that deadlocks can be detected and prevented.
One advantage of having a diagram is that a deadlock can sometimes be spotted directly in the RAG, whereas it might not be apparent from a table. Tables, however, scale better when the system contains many processes and resources, while the graph is clearer when there are only a few. Like any graph, a RAG consists of vertices and edges.
Types of Vertices and Edges in RAG: A RAG contains two types of vertices and two types of edges:
Process Vertex: Every process will be represented as a process vertex. Generally, the process will be represented with a circle.
Resource Vertex: Every resource will be represented as a resource vertex.
Assign Edge: An edge drawn from a resource to a process, indicating that the resource has already been assigned to that process.
Request Edge: An edge drawn from a process to a resource, indicating that the process may want that resource in the future to complete its execution.
Algorithm to check for deadlock:
Step 1: First, find the currently available instances of each resource.
Step 2: Check, for each process, whether it can be executed using its allocated plus the available resources.
Step 3: Add the allocated resources of the executable process to the available resources and terminate it.
Step 4: Repeat Steps 2 and 3 until every process has been executed.
Step 5: If at any step none of the remaining processes can be executed, there is a deadlock in the system.
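A minimal C sketch of this detection algorithm, representing assign edges as an Allocation matrix and request edges as a Request matrix; the values are a hypothetical example that contains a deadlock cycle:

#include <stdio.h>
#include <stdbool.h>

#define N 3  /* processes */
#define M 2  /* resource types */

int main(void) {
    /* hypothetical example: alloc = assign edges, request = request edges */
    int alloc[N][M]   = {{1,0},{0,1},{0,0}};
    int request[N][M] = {{0,1},{1,0},{1,1}};
    int available[M]  = {0, 0};
    bool finished[N] = {false};
    int done = 0;

    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < N; i++) {
            if (finished[i]) continue;
            int j;
            for (j = 0; j < M; j++)          /* can Pi's requests be satisfied? */
                if (request[i][j] > available[j]) break;
            if (j == M) {
                for (int k = 0; k < M; k++)  /* terminate Pi, release its allocation */
                    available[k] += alloc[i][k];
                finished[i] = true;
                done++;
                progress = true;
            }
        }
    }

    if (done == N) printf("No deadlock: all processes can finish.\n");
    else {
        printf("Deadlock detected involving:");
        for (int i = 0; i < N; i++)
            if (!finished[i]) printf(" P%d", i);
        printf("\n");
    }
    return 0;
}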
Experiment 7
Objective: Implement Page Replacement Algorithm:
FIFO
LRU
Theory:
Page replacement is basic to demand paging. It completes the separation between logical memory and physical memory. With this mechanism, an enormous virtual memory can be provided for programmers on a smaller physical memory. There are many different page-replacement algorithms. Every operating system probably has its own replacement scheme.
FIFO: The FIFO algorithm is used in the paging method of memory management in an operating system to decide which existing page should be replaced. The FIFO algorithm replaces the oldest page, the one that has been present in main memory for the longest time. In simple words, when a new page comes in from secondary memory to main memory, it selects the page at the front of the queue, which is the oldest page present, and removes it.
Algorithm:
Step 1: Start the program.
Step 2: Read the number of frames.
Step 3: Read the number of pages.
Step 4: Read the page numbers.
Step 5: Initialize the values in frames to -1.
Step 6: Allocate the pages into frames in first-in-first-out order.
Step 7: Display the number of page faults.
Step 8: Stop the program.
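A minimal C sketch of FIFO page replacement following the steps above; the reference string is hypothetical sample data:

#include <stdio.h>

#define FRAMES 3
#define PAGES 12

int main(void) {
    /* hypothetical page reference string */
    int pages[PAGES] = {1, 3, 0, 3, 5, 6, 3, 1, 0, 2, 4, 3};
    int frames[FRAMES];
    int next = 0, faults = 0;      /* next = position of the oldest frame */

    for (int i = 0; i < FRAMES; i++) frames[i] = -1;   /* Step 5: frames start empty */

    for (int i = 0; i < PAGES; i++) {
        int hit = 0;
        for (int j = 0; j < FRAMES; j++)
            if (frames[j] == pages[i]) { hit = 1; break; }
        if (!hit) {                /* page fault: replace the oldest page */
            frames[next] = pages[i];
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    printf("Total page faults: %d\n", faults);
    return 0;
}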
LRU: LRU stands for Least Recently Used. As the name suggests, this algorithm is based on the strategy that whenever a page fault occurs, the least recently used page will be replaced with a new page. So, the page not utilized for the longest time in memory (compared to all other pages) gets replaced. This strategy is known as LRU paging.
Algorithm:
Step 1: Start the program.
Step 2: Read the number of frames.
Step 3: Read the number of pages.
Step 4: Read the page numbers.
Step 5: Initialize the values in frames to -1.
Step 6: Allocate the pages into frames, replacing the page that has not been used for the longest period of time.
Step 7: Display the number of page faults.
Step 8: Stop the program.
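A minimal C sketch of LRU page replacement, tracking the time of last use for each frame; the reference string is hypothetical sample data:

#include <stdio.h>

#define FRAMES 3
#define PAGES 12

int main(void) {
    /* hypothetical page reference string */
    int pages[PAGES] = {1, 3, 0, 3, 5, 6, 3, 1, 0, 2, 4, 3};
    int frames[FRAMES], last_used[FRAMES];
    int faults = 0;

    for (int i = 0; i < FRAMES; i++) { frames[i] = -1; last_used[i] = -1; }

    for (int t = 0; t < PAGES; t++) {
        int hit = -1;
        for (int j = 0; j < FRAMES; j++)
            if (frames[j] == pages[t]) { hit = j; break; }
        if (hit >= 0) {
            last_used[hit] = t;                 /* refresh recency on a hit */
        } else {
            int victim = 0;                     /* replace the least recently used frame */
            for (int j = 1; j < FRAMES; j++)
                if (last_used[j] < last_used[victim]) victim = j;
            frames[victim] = pages[t];
            last_used[victim] = t;
            faults++;
        }
    }
    printf("Total page faults: %d\n", faults);
    return 0;
}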
Experiment 8
Objective: Write C programs to simulate Disk Scheduling Algorithms using:
FCFS (First Come First Serve)
SSTF (Shortest Seek Time First)
Theory:
Disk scheduling is a technique operating systems use to manage the order in which disk I/O (input/output) requests are processed; it is also known as I/O scheduling. Disk scheduling algorithms are crucial in managing how data is read from and written to a computer's hard disk.
Common disk scheduling methods include First-Come, First-Served (FCFS), Shortest Seek Time First (SSTF), SCAN, C-SCAN, LOOK, and C-LOOK.
FCFS: FCFS (First Come First Serve) is the simplest disk scheduling algorithm. This algorithm entertains requests in the order they arrive in the disk queue. The algorithm looks very fair and there is no starvation (all requests are serviced sequentially) but generally, it does not provide the fastest service.
Algorithm
Step 1: Let the request array store the indexes of the tracks that have been requested, in ascending order of their arrival time. Let head be the initial position of the disk head.
Step 2: Take the tracks one by one in the given order and calculate the absolute distance of each track from the head.
Step 3: Increment the total seek count by this distance.
Step 4: The currently serviced track position becomes the new head position.
Step 5: Repeat from Step 2 until all tracks in the request array have been serviced.
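A minimal C sketch of FCFS disk scheduling following the steps above; the request queue and initial head position are hypothetical sample data:

#include <stdio.h>
#include <stdlib.h>

#define N 8

int main(void) {
    /* hypothetical request queue (in arrival order) and initial head position */
    int request[N] = {176, 79, 34, 60, 92, 11, 41, 114};
    int head = 50, seek = 0;

    for (int i = 0; i < N; i++) {
        seek += abs(request[i] - head);  /* distance from the current head position */
        head = request[i];               /* serviced track becomes the new head */
    }
    printf("Total seek operations (FCFS): %d\n", seek);
    return 0;
}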
SSTF: SSTF (Shortest Seek Time First) selects the request closest to the current head position before moving the head away to service other requests; that is, it selects the request with the least seek time from the current head position.
Algorithm
Step 1: Let the request array store the indexes of the tracks that have been requested. Let head be the initial position of the disk head.
Step 2: Find the positive distance of all tracks in the request array from the head.
Step 3: Find a track from the requested array which has not been accessed/serviced yet and has a minimum distance from the head.
Step 4: Increment the total seek count with this distance.
Step 5: Currently serviced track position now becomes the new head position.
Step 6: Repeat from Step 2 until all tracks in the request array have been serviced.
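A minimal C sketch of SSTF disk scheduling following the steps above; the request queue and initial head position are hypothetical sample data:

#include <stdio.h>
#include <stdlib.h>

#define N 8

int main(void) {
    /* hypothetical request queue and initial head position */
    int request[N] = {176, 79, 34, 60, 92, 11, 41, 114};
    int serviced[N] = {0};
    int head = 50, seek = 0;

    for (int count = 0; count < N; count++) {
        int best = -1;
        for (int i = 0; i < N; i++)          /* find the nearest unserviced request */
            if (!serviced[i] &&
                (best == -1 || abs(request[i] - head) < abs(request[best] - head)))
                best = i;
        seek += abs(request[best] - head);   /* add its seek distance */
        head = request[best];                /* serviced track becomes the new head */
        serviced[best] = 1;
    }
    printf("Total seek operations (SSTF): %d\n", seek);
    return 0;
}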
Experiment 9
Objective: Implement the solution for Bounded Buffer (producer-consumer) problem.
Theory:
The producer-consumer problem is an example of a multi-process synchronization problem. The problem describes two processes, the producer and the consumer that share a common fixed-size buffer and use it as a queue. The producer's job is to generate data, put it into the buffer, and start again. At the same time, the consumer is consuming the data (i.e., removing it from the buffer), one piece at a time.
What is the Actual Problem?
Given the common fixed-size buffer, the task is to make sure that the producer cannot add data to the buffer when it is full and the consumer cannot remove data from an empty buffer. The producer and the consumer must also not access the buffer at the same time.
Note: An inadequate solution could result in a deadlock where both processes are waiting to be awakened.
Approach: The idea is to use the concept of parallel programming and Critical Section to implement the Producer-Consumer problem in C language using OpenMP.
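A minimal sketch of this approach in C with OpenMP, guarding a shared circular buffer with a critical section; the buffer size and item count are hypothetical, and at least two threads are assumed (compile with gcc -fopenmp):

#include <stdio.h>
#include <omp.h>

#define BUF_SIZE 5
#define ITEMS 10

int buffer[BUF_SIZE];
int count = 0, in = 0, out = 0;   /* shared buffer state */

int main(void) {
    #pragma omp parallel sections num_threads(2)
    {
        #pragma omp section       /* producer */
        {
            for (int i = 1; i <= ITEMS; i++) {
                int done = 0;
                while (!done) {   /* busy-wait while the buffer is full */
                    #pragma omp critical
                    {
                        if (count < BUF_SIZE) {
                            buffer[in] = i;
                            in = (in + 1) % BUF_SIZE;
                            count++;
                            printf("Produced %d\n", i);
                            done = 1;
                        }
                    }
                }
            }
        }
        #pragma omp section       /* consumer */
        {
            for (int i = 0; i < ITEMS; i++) {
                int done = 0;
                while (!done) {   /* busy-wait while the buffer is empty */
                    #pragma omp critical
                    {
                        if (count > 0) {
                            int item = buffer[out];
                            out = (out + 1) % BUF_SIZE;
                            count--;
                            printf("Consumed %d\n", item);
                            done = 1;
                        }
                    }
                }
            }
        }
    }
    return 0;
}

The critical section ensures mutual exclusion on the buffer, and the count checks enforce the full/empty conditions; a production solution would typically use semaphores or condition variables instead of busy-waiting.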
Experiment 10
Objective: Implementation of contiguous allocation techniques:
Worst-Fit
Best-Fit
First-Fit
Theory:
Contiguous memory allocation is a memory allocation strategy. As the name implies, we utilize this technique to assign contiguous blocks of memory to each task. Thus, whenever a process asks to access the main memory, we allocate a continuous segment from the empty region to the process based on its size.
First-Fit: The first-fit algorithm searches for the first free partition that is large enough to accommodate the process. The operating system starts searching from the beginning of the memory and allocates the first free partition that is large enough to fit the process.
Best-Fit: The best-fit algorithm searches for the smallest free partition that is large enough to accommodate the process. The operating system searches the entire memory and selects the free partition that is closest in size to the process.
Worst-Fit: The worst-fit algorithm searches for the largest free partition and allocates the process to it. This algorithm is designed to leave the largest possible free partition for future use.
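A minimal C sketch that applies all three strategies to the same hypothetical free-block and process sizes:

#include <stdio.h>

#define BLOCKS 5
#define PROCS  4

/* strategy: 0 = first-fit, 1 = best-fit, 2 = worst-fit */
void allocate(int strategy, const char *name) {
    /* hypothetical free-block and process sizes (reset for each strategy) */
    int block[BLOCKS] = {100, 500, 200, 300, 600};
    int process[PROCS] = {212, 417, 112, 426};

    printf("%s:\n", name);
    for (int p = 0; p < PROCS; p++) {
        int chosen = -1;
        for (int b = 0; b < BLOCKS; b++) {
            if (block[b] < process[p]) continue;   /* block too small */
            if (chosen == -1) {
                chosen = b;
                if (strategy == 0) break;          /* first-fit: take the first match */
            }
            else if (strategy == 1 && block[b] < block[chosen]) chosen = b;  /* smallest fit */
            else if (strategy == 2 && block[b] > block[chosen]) chosen = b;  /* largest fit */
        }
        if (chosen == -1)
            printf("  Process %d (size %d) not allocated\n", p + 1, process[p]);
        else {
            printf("  Process %d (size %d) -> Block %d\n", p + 1, process[p], chosen + 1);
            block[chosen] -= process[p];           /* shrink the remaining free space */
        }
    }
}

int main(void) {
    allocate(0, "First-Fit");
    allocate(1, "Best-Fit");
    allocate(2, "Worst-Fit");
    return 0;
}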