If you are preparing for an Operating System interview, then you have reached the right place.
Computer Science Engineering is a broad field of study that covers the Operating System. It is a fast-growing field with many opportunities for career growth. An Operating System interview is a type of interview designed to assess a candidate's knowledge of Computer Science Engineering. The purpose of the interview is to evaluate the candidate's knowledge and deep understanding of the subject.
The interview may also assess the candidate's communication skills, such as the ability to present complex information in a clear and concise manner.
The interview is typically conducted by a hiring manager or recruiter who has experience in the field. The interviewer will typically ask a series of questions about the candidate's background and experience, as well as about the candidate's strengths and weaknesses.
This list of interview questions in Operating System includes basic-level, advanced-level, and program-based interview questions.
Here is a list of commonly asked Operating System (Computer Science Engineering) interview questions and answers that both fresher and experienced candidates must prepare to land their dream job.
Machine | Purpose
---|---
Workstation | Individual usability & resource utilization
Mainframe | Optimize utilization of hardware
PC | Support complex games, business applications
Handheld PCs | Easy interface & minimum power consumption
The different operating systems are
An operating system is a program that acts as an intermediary between the user and the computer hardware. The purpose of an OS is to provide an environment in which users can execute programs in a convenient and efficient manner. It is a resource allocator, responsible for allocating system resources, and a control program, which controls the operation of the computer hardware.
An operating system is an essential part of any computer system. There is a huge demand for OS developers in the IT industry.
The main functions of an operating system are
Virtual memory is a memory management technique that lets processes execute outside of physical memory. This is very useful, especially when an executing program cannot fit in the physical memory.
Virtual memory is volatile (temporary) memory created on a storage drive. Usually, when we open and work on many applications, RAM is required to execute them. But what if RAM is low and you still manage to work on those applications? How? In that case, the operating system uses part of the secondary storage device (hard disk), which then acts as RAM while the processes are running.
Deadlock is a blockage situation in which each process already holds a resource and is waiting for another resource that is held by some other process. In short, each process is waiting for another process to finish and release its resource.
Consider an example when two cars are coming toward each other on the same track, and there is only one track. None of them can move once they are in front of each other. These two cars will be in a deadlock situation.
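The circular wait in the example above can be modeled as a wait-for graph, where an edge P → Q means process P is waiting for a resource held by Q; a cycle in that graph means deadlock. A minimal sketch in Python (the function name and graph encoding are illustrative, not from any standard API):

```python
# Deadlock detection sketch: model "process P waits on process Q"
# as a directed edge P -> Q; a cycle in this wait-for graph means deadlock.

def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph given as {process: process_it_waits_on}."""
    for start in wait_for:
        seen = set()
        node = start
        while node in wait_for:   # follow the chain of waits
            if node in seen:      # revisited a node: cycle, hence deadlock
                return True
            seen.add(node)
            node = wait_for[node]
    return False

# Two cars on a single track: each waits for the other to move.
print(has_deadlock({"car1": "car2", "car2": "car1"}))  # True
print(has_deadlock({"car1": "car2"}))                  # False
```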
A thread is a lightweight process having some of the properties of a process. A single process can have several threads. Threads introduce parallelism and thereby improve the application. For example, multiple tabs can be different threads in a browser. MS Word uses multiple threads: one thread formats the text, another processes inputs, and so on.
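The MS Word analogy can be sketched with Python's threading module: one process, two threads sharing the same memory, each doing a different task. A minimal illustration (the task names are invented for the example):

```python
import threading

# Two threads of one process working concurrently, sharing the same memory.
results = {}

def format_text():
    results["format"] = "text formatted"   # one thread formats the text

def process_input():
    results["input"] = "input processed"   # another thread handles input

t1 = threading.Thread(target=format_text)
t2 = threading.Thread(target=process_input)
t1.start(); t2.start()
t1.join(); t2.join()        # wait for both threads to finish
print(sorted(results))      # ['format', 'input']
```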
Systems which have more than one processor are called multiprocessor systems. These systems are also known as parallel systems or tightly coupled systems. Multiprocessor systems have the following advantages.
In a batched operating system, users give their jobs to an operator who sorts the programs according to their requirements and executes them. This is time consuming but keeps the CPU busy all the time.
A multi-programmed operating system can execute a number of programs concurrently. The operating system fetches a group of programs from the job pool in secondary storage, which contains all the programs to be executed, and places them in main memory. This process is called job scheduling. It then chooses a program from the ready queue and gives it to the CPU to execute. When an executing program needs some I/O operation, the operating system fetches another program and hands it to the CPU for execution, thus keeping the CPU busy all the time.
A real-time system is used when rigid time requirements have been placed on the operation of a processor. It has well-defined, fixed time constraints.
Kernel is the core and most important part of a computer operating system which provides basic services for all parts of the OS.
A monolithic kernel is a kernel in which all operating system code resides in a single executable image.
An executing program is known as a process. There are two types of processes:
A list of different states of process:
A multiprocessor system is a type of system that includes two or more CPUs. It involves the processing of different computer programs at the same time, mostly by a computer system with two or more CPUs that share a single memory.
Benefits:
They contain a number of processors to increase the speed of execution, reliability, and economy.
They are of two types:
Main memory is also called random access memory (RAM). CPU can access Main memory directly. Data access from main memory is much faster than Secondary memory. It is implemented in a semiconductor technology, called dynamic random-access memory (DRAM).
Main memory is usually too small to store all needed programs. It is a volatile storage device that loses its contents when power is turned off. Secondary memory can store large amounts of data and programs permanently. The magnetic disk is the most common secondary storage device. If a user wants to execute a program, it must be brought from secondary memory into main memory, because the CPU can access main memory directly.
The main functions of a Kernel are:
A real time operating system is used when rigid time requirement have been placed on the operation of a processor or the flow of the data; thus, it is often used as a control device in a dedicated application. Here the sensors bring data to the computer. The computer must analyze the data and possibly adjust controls to modify the sensor input.
They are of two types:
It is a logical extension of the multi-programmed OS in which the user can interact with the program. The CPU executes multiple jobs by switching among them, but the switches occur so frequently that each user feels as if the operating system is running only their program.
A situation where several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the access takes place, is called a race condition. To guard against the race condition we need to ensure that only one process at a time can be manipulating the same data. The technique we use for this is called process synchronization.
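A common way to apply process synchronization in practice is a lock around the critical section. The sketch below, assuming Python's threading module, serializes updates to a shared counter so the final value is deterministic regardless of interleaving:

```python
import threading

# Guarding a shared counter with a lock so that only one thread
# manipulates the shared data at a time (mutual exclusion).
counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:          # critical section: serialized access
            counter += 1

threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 -- deterministic because updates are serialized
```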
It is a phenomenon in virtual memory schemes when the processor spends most of its time swapping pages rather than executing instructions. This is due to an inordinate number of page faults.
When many of the free blocks are too small to satisfy any request, fragmentation occurs. External fragmentation and internal fragmentation are the two types of fragmentation. External fragmentation happens when a dynamic memory allocation algorithm allocates some memory and a small piece is left over that cannot be effectively used. Internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks.
Cache memory is random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM. As the microprocessor processes data, it looks first in the cache memory; if it finds the data there (from a previous reading of data), it does not have to do the more time-consuming reading of data from larger memory.
An interpreter reads one instruction at a time and carries out the actions implied by that instruction. It does not perform any translation. A compiler, by contrast, translates the entire program at once.
In a multiprocessor system there exist several caches, each of which may contain a copy of the same variable A. A change in one cache should immediately be reflected in all other caches; this process of maintaining the same value of a data item in all the caches is called cache coherency.
Synchronization means controlling access to a resource that is available to two or more threads or process. Different synchronization mechanisms are:
Pre-emptive scheduling allows a process to be interrupted while it is executing so that the CPU can be given to another process, while non-pre-emptive scheduling ensures that a process retains the CPU until it has completed execution.
There are 4 necessary conditions to achieve a deadlock:
A Redundant Array of Independent Disks (RAID) is used to store the same data redundantly to improve overall performance. Following are the different RAID levels:
RAID 0 – Striped Disk Array without fault tolerance.
In this, data is striped across different disks, so it can be accessed in parallel. It offers the best performance, but it does not provide fault tolerance.
RAID 1 – Mirroring and duplexing
This provides fault tolerance as data is stored on different disks. If one fails then data can be accessed from another drive.
RAID 3 – Bit-interleaved Parity
RAID 3 is not used much. Data is divided evenly and stored on two or more disks, plus there is a dedicated drive for parity storage.
RAID 5 – Block-interleaved distributed Parity
Data is divided evenly and stored on two or more disks, plus parity is distributed in different drives.
RAID 6 – P+Q Redundancy.
Data is divided evenly and stored on two or more disks, plus parity is distributed in two different drives.
The Banker's algorithm is a deadlock-avoidance method. It is named after the banking system, in which a bank never allocates its available cash in such a manner that it can no longer satisfy the requirements of all of its customers. In the same way, in the operating system, when a new process is to be executed, it requires some resources.
The Banker's algorithm therefore needs to know how many resources each process could possibly request, how many resources are currently held by processes, and how many resources the system has. Resources are then assigned only if the available resources cover the requested resources, in order to avoid deadlock.
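The safety check at the heart of the Banker's algorithm can be sketched as follows. This simplified version assumes a single resource type (real implementations track a vector per resource type), and the numbers are invented for illustration:

```python
# Banker's algorithm safety check, single resource type for brevity.
# The state is safe only if the processes can be ordered so that each one's
# remaining maximum claim can be met from what is available when its turn comes.

def is_safe(available, allocated, maximum):
    need = [m - a for m, a in zip(maximum, allocated)]
    finished = [False] * len(allocated)
    safe_sequence = []
    while len(safe_sequence) < len(allocated):
        for i, done in enumerate(finished):
            if not done and need[i] <= available:
                available += allocated[i]   # process runs, then releases its holdings
                finished[i] = True
                safe_sequence.append(i)
                break
        else:
            return None                     # no process can proceed: unsafe state
    return safe_sequence

# 10 units total: 7 allocated across three processes, 3 free.
print(is_safe(3, allocated=[2, 2, 3], maximum=[4, 5, 7]))  # [0, 1, 2]
```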
Logical address : A logical address is a virtual address generated by the CPU.
Physical address : A physical address is generated by the MMU(Memory management unit).
Logical address : It is the address of data or instruction used by the program.
Physical address : A physical address is a location in a memory unit mapped to the corresponding logical addresses.
Logical address : Is visible to the user
Physical address : Is not visible to the user
When a computer system has two or more CPUs, all CPUs share RAM. This system is also known as a tightly coupled or parallel system. The process is divided into different subprocesses, and other processors get different subprocesses to work.
Fragmentation is a phenomenon of memory wastage. It reduces the capacity and performance because space is used inefficiently.
There are two types of fragmentation:
The main functions of file management in an operating system are
Creating and deleting files/ directories-It maintains an optimal file structure for storing files in the memory.
Providing I/O support for multi-users-It provides devices for inputting and outputting data.
Ensuring data in the file is correct-It ensures that the data is not corrupted before storing it.
Back up and restoration of files on storage media-It takes backup of memory and can restore the file in case it is deleted.
Tracking data location-A file is not stored in a contiguous way; it can be stored in different blocks on the disk (non-contiguous way). So there is a need to track which part of the file is stored in which block.
Storing data-It decides where to store the data in memory.
With dynamic loading, a routine is not loaded until it is called. This method is especially useful when large amounts of code are needed to handle infrequently occurring cases, such as error routines.
Paging is a memory management scheme that permits the physical address space of a process to be noncontiguous. It avoids the considerable problem of having to fit variable-sized memory chunks onto the backing store.
Many operating systems provide both kernel threading and user threading. These are called multithreading models. They are of three types:
The main advantages of using threads are:
The main disadvantages of using threads are:
The scheduling algorithms decide which processes in the ready queue are to be allocated to the CPU for execution. Scheduling algorithms can be broadly classified on the basis of:
Preemptive algorithms: In this type of scheduling a process may be interrupted during execution and the CPU may be allocated to another process.
Non-Preemptive algorithms: In this type of scheduling once a CPU has been allocated to a process it would not release the CPU till a request for termination or switching to waiting state occurs.
Round Robin Scheduling : The round robin algorithm works on the concept of a time slice, also known as a quantum. In this algorithm, every process is given a predefined amount of time to execute. If a process does not complete within its predefined time, the CPU is handed to the next process waiting in the queue. In this way, a continuous execution of processes is maintained, which would not be possible with the FCFS algorithm.
First Come First Served Scheduling: First come first serve (FCFS) scheduling algorithm simply schedules the jobs according to their arrival time. The job which comes first in the ready queue will get the CPU first. The lesser the arrival time of the job, the sooner will the job get the CPU
Shortest Job First Scheduling : Shortest job first (SJF) is a scheduling policy that selects the waiting process with the smallest execution time to execute next. SJF, also known as Shortest Job Next (SJN), can be preemptive or non-preemptive.
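For FCFS, each job's waiting time is simply the sum of the burst times of the jobs that arrived ahead of it. A small sketch using the classic textbook burst times 24, 3, 3:

```python
# FCFS: a job waits for the total burst time of all jobs before it in the queue.

def fcfs_waiting_times(burst_times):
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # this job waits while earlier jobs run
        elapsed += burst
    return waits

bursts = [24, 3, 3]                     # arrival order: P1, P2, P3
waits = fcfs_waiting_times(bursts)
print(waits)                            # [0, 24, 27]
print(sum(waits) / len(waits))          # average waiting time: 17.0
```

Reordering the same jobs shortest-first ([3, 3, 24]) drops the average waiting time to 3.0, which is exactly the argument for SJF.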
Long term schedulers are the job schedulers that select processes from the job queue and load them into memory for execution. The short term schedulers are the CPU schedulers that select a process from the ready queue and allocate the CPU to it.
Mutual exclusion : Some resources such as read only files shouldn’t be mutually exclusive. They should be sharable. But some resources such as printers must be mutually exclusive.
Hold and wait : To avoid this condition we have to ensure that if a process is requesting for a resource it should not hold any resources.
No preemption : If a process is holding some resources and requests another resource that cannot be immediately allocated to it (that is, the process must wait), then all the resources it currently holds are preempted (implicitly released).
Circular wait : The way to ensure that this condition never holds is to impose a total ordering of all the resource types, and to require that each process requests resources in an increasing order of enumeration.
A system is in a safe state only if there exists a safe sequence. A sequence of processes is a safe sequence for the current allocation state if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i.
Processor :
A processor is the part of a computer system that executes instructions. It is also called a CPU.
Assembler :
An assembler is a program that takes basic computer instructions and converts them into a pattern of bits that the computer’s processor can use to perform its basic operations. Some people call these instructions assembler language and others use the term assembly language.
Compiler :
A compiler is a special program that processes statements written in a particular programming language and turns them into machine language or “code” that a computer’s processor uses. Typically, a programmer writes language statements in a language such as Pascal or C one line at a time using an editor. The file that is created contains what are called the source statements. The programmer then runs the appropriate language compiler, specifying the name of the file that contains the source statements.
Loader :
In a computer operating system, a loader is a component that locates a given program (which can be an application or, in some cases, part of the operating system itself) in offline storage (such as a hard disk), loads it into main storage (in a personal computer, called random access memory), and gives that program control of the computer.
Linker:
The linker links libraries with the object code to turn the object code into executable machine code.
A hard real-time system guarantees that critical tasks complete on time. This goal requires that all delays in the system be bounded, from the retrieval of stored data to the time it takes the operating system to finish any request made of it. A soft real-time system is one where a critical real-time task gets priority over other tasks and retains that priority until it completes. As in hard real-time systems, kernel delays need to be bounded.
The copying garbage collector basically works by going through live objects and copying them into a specific region in the memory. This collector traces through all the live objects one by one. This entire process is performed in a single pass. Any object that is not copied in memory is garbage.
The copying garbage collector can be implemented using semispaces by splitting the heap into two halves, each a contiguous memory region. All allocations are made from one half of the heap only. When that half becomes full, the collector is immediately invoked and it copies the live objects into the other half of the heap. The first half of the heap then contains only garbage and is eventually overwritten in the next pass.
In broad terms paging is a memory management technique that allows a physical address space of a process to be non-contiguous.
Segmented paging has a certain set of advantages over pure segmentation such as:
The full form of SPOOL is simultaneous peripheral operations online. In spooling, data is gathered temporarily and then executed by a program or device. For example, with a printer, we issue different printing commands. Spooling keeps all the jobs in a disk file and queues them in the order the printing commands were received. Spooling is based on the first in, first out principle: the request that comes first will be processed first, and a request that comes later will be processed after it.
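The first in, first out behavior of a spooler can be sketched with a simple queue (the file names are invented for the example):

```python
from collections import deque

# A print spooler queues jobs and services them first in, first out.
spool = deque()
for job in ["report.pdf", "photo.png", "invoice.txt"]:
    spool.append(job)                # jobs arrive and are queued in order

printed = []
while spool:
    printed.append(spool.popleft())  # printer always takes the oldest job
print(printed)  # ['report.pdf', 'photo.png', 'invoice.txt']
```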
In multiprogramming, multiple processes are loaded into main memory (from the job pool), and the operating system picks tasks one by one and starts executing them. The next process from the job pool is picked up when the current program is not executing or requires I/O. When multiple jobs are in the ready state, which job to choose is decided through CPU scheduling. It never leaves the CPU idle and maximizes CPU usage.
The free space is broken into different pieces when the processes are loaded and removed from the memory. These free spaces are there in a scattered way. So to store the process, we need to compact these scattered pieces of memory to form a large chunk of memory in case any large process comes in. This process of combining scattered fragments of memory is called compaction.
Programs are stored on secondary storage(hard disk), which is divided into fixed-sized blocks called pages, and in the same way, the main memory is divided into blocks of the same size(as pages) called frames. The “page” is the smallest unit of memory, managed by the computer’s operating system, either in a physical form or virtual.
So, in short, we can say Physical memory is divided into FRAME, and logical memory is divided into PAGE.
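Address translation with pages and frames can be sketched as follows; the page size and page-table contents are assumed values for illustration only:

```python
# Translating a logical address into a physical one via a page table.
PAGE_SIZE = 4096                    # bytes per page/frame (assumed)
page_table = {0: 5, 1: 2, 2: 7}     # page number -> frame number (assumed)

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page]        # page-table lookup (a miss would page-fault)
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 8196
```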
1. Mutual Exclusion
When you have sharable resources, there will be no fight for resources, which will prevent deadlock conditions.
In this condition, a process holding resources is waiting for other resources. But there are ways to eliminate this condition:
Note: If the process releases all resources, that can lead to problems as there will be some processes that are not required by any other process.
Preemption means temporary interruption of program execution. Normally, resources cannot be preempted. Suppose some process P1 wants a resource R1 held by process P2. This situation will lead to a deadlock state. But to avoid deadlock, process P2 can be preempted, so resource R1 is released and given to process P1. Then we can prevent deadlock.
Note: This can lead to an inconsistent state.
We can eliminate this circular wait condition by giving a priority to each resource and requiring processes to access resources in increasing order of priority. If a request is made for a lower-priority resource (being held by some other process), the request is considered invalid.
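Resource ordering can be sketched with two locks and a fixed rank per resource: every thread acquires locks in rank order, so a circular wait cannot form even though the threads request the resources in opposite orders. (The resource names and ranks are invented for the example.)

```python
import threading

# Breaking circular wait: all threads acquire locks in one global rank order.
lock_rank = {"printer": 1, "scanner": 2}
locks = {"printer": threading.Lock(), "scanner": threading.Lock()}

def use_both(first, second, log):
    # Always lock the lower-ranked resource first, whatever the request order.
    ordered = sorted((first, second), key=lock_rank.get)
    for name in ordered:
        locks[name].acquire()
    log.append((first, second))      # work with both resources held
    for name in ordered:
        locks[name].release()

log = []
t1 = threading.Thread(target=use_both, args=("printer", "scanner", log))
t2 = threading.Thread(target=use_both, args=("scanner", "printer", log))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(log))  # 2 -- both threads finished; no deadlock despite opposite orders
```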
Multi programming:
Multiprogramming is the technique of running several programs at a time using timesharing. It allows a computer to do several things at the same time and creates logical parallelism. The concept of multiprogramming is that the operating system keeps several jobs in memory simultaneously. The operating system selects a job from the job pool and starts executing it; when that job needs to wait for any I/O operation, the CPU is switched to another job. So the main idea here is that the CPU is never idle.
Multi tasking:
Multitasking is the logical extension of multiprogramming. The concept of multitasking is quite similar to multiprogramming, but the difference is that the switching between jobs occurs so frequently that users can interact with each program while it is running. This concept is also known as time-sharing. A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of the time-shared system.
Multi threading:
An application typically is implemented as a separate process with several threads of control. In some situations a single application may be required to perform several similar tasks: for example, a web server accepts client requests for web pages, images, sound, and so forth. A busy web server may have several clients concurrently accessing it. If the web server ran as a traditional single-threaded process, it would be able to service only one client at a time, and the amount of time a client might have to wait for its request to be serviced could be enormous.
So it is efficient to have one process that contains multiple threads serving the same purpose. This approach multithreads the web-server process: one thread listens for client requests, and when a request is made the server creates another thread to service it rather than creating another process. Multithreading is thus used to gain responsiveness, resource-sharing economy, and utilization of multiprocessor architectures.
Micro-Kernel: A micro-kernel is a minimal operating system that performs only the essential functions of an operating system. All other operating system functions are performed by system processes
Monolithic: A monolithic operating system is one where all operating system code is in a single executable image and all operating system code runs in system mode.
The CPU scheduler selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them. CPU scheduling decisions may take place when a process:
1. Switches from the running state to the waiting state (e.g., an I/O request)
2. Switches from the running state to the ready state (e.g., when an interrupt occurs)
3. Switches from the waiting state to the ready state (e.g., on completion of I/O)
4. Terminates
Scheduling under 1 and 4 is nonpreemptive. All other scheduling is preemptive.
Mutex is the short form for 'Mutual Exclusion object'. A mutex allows multiple threads to share the same resource, such as a file, just not simultaneously. A mutex with a unique name is created at the time of starting a program.
When any thread needs the resource, it must lock the mutex against the other threads. When the data is no longer used or needed, the mutex is set to unlock.
Switching the CPU to another process requires saving the state of the old process and loading the saved state of the new process. This task is known as a context switch. Context-switch time is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions (such as a single instruction to load or store all registers).
Preemptive scheduling: Preemptive scheduling is prioritized. The highest-priority process should always be the process currently using the CPU.
Non-Preemptive scheduling: When a process enters the running state, it is not removed from the scheduler until it finishes its service time.
Primary memory is the main memory (RAM), where the operating system resides.
Secondary memory consists of external devices like CDs, floppy disks, magnetic discs, etc. Secondary storage cannot be directly accessed by the CPU and is also external memory storage.
Time taken for switching from one process to another is pure overhead, because the system does no useful work while switching. So one of the solutions is to go for threading whenever possible.
The purpose of the lexical analyzer is to partition the input text, delivering a sequence of comments and basic symbols. Comments are character sequences to be ignored, while basic symbols are character sequences that correspond to terminal symbols of the grammar defining the phrase structure of the input.
Paging is a solution to the external fragmentation problem: it permits the logical address space of a process to be noncontiguous, thus allowing a process to be allocated physical memory wherever it is available.
Consider any system where people use some kind of resources and compete for them. A non-computer example of preemptive scheduling is traffic on a single-lane road: if there is an emergency, or an ambulance on the road, the other vehicles give way to the vehicle in need. An example of non-preemptive scheduling is people standing in a queue for tickets.
When a process requests an available resource, the system must decide whether immediate allocation leaves the system in a safe state.
Direct Access method is based on a disk model of a file, such that it is viewed as a numbered sequence of blocks or records. It allows arbitrary blocks to be read or written. Direct access is advantageous when accessing large amounts of information.
The best paging size varies from system to system, so there is no single best when it comes to page size. There are different factors to consider in order to come up with a suitable page size, such as page table, paging time, and its effect on the overall efficiency of the operating system.
VFS, or Virtual File System, separates file-system-generic operations from their implementation by defining a clean VFS interface. It is based on a file-representation structure known as a vnode, which contains a numerical designator needed to support network file systems.
I/O status information provides information about which I/O devices are to be allocated for a particular process. It also shows which files are opened, and other I/O device state.
Multitasking is the process within an operating system that allows the user to run several applications at the same time. However, only one application is active at a time for user interaction, although some applications can run “behind the scene”.
The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.
Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.
Data registers:
Data registers can be assigned to a variety of functions by the programmer. They can be used with any machine instruction that performs operations on data.
Address registers :
Address registers contain main memory addresses of data and instructions or they contain a portion of the address that is used in the calculation of the complete addresses.
A transaction can be considered to be a series of read and write operations upon some data, followed by a commit operation. Transaction atomicity means that if a transaction is not completed successfully, it must be aborted and any changes the transaction made while executing must be rolled back. In other words, a transaction must appear as a single operation that cannot be divided. This ensures that the integrity of the data being updated is maintained. Without atomicity, a transaction aborted midway could leave the data inconsistent, since two transactions may be sharing the same data value.
Address binding of instructions and data to memory addresses can happen at three different stages:
Compile time: If the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
Load time: Must generate relocatable code if the memory location is not known at compile time.
Execution time: Binding is delayed until run time if the process can be moved during its execution from one memory segment to another. Needs hardware support for address maps (e.g., base and limit registers).
Dynamic Loading:
Dynamic Linking:
Overlays:
Fragmentation occurs in a dynamic memory allocation system when many of the free blocks are too small to satisfy any request.
There are two types of Fragmentation. These are External and Internal Fragmentation:
External Fragmentation : External fragmentation happens when a dynamic memory allocation algorithm allocates some memory and a small piece is left over that cannot be effectively used. If too much external fragmentation occurs, the amount of usable memory is drastically reduced: total memory space exists to satisfy a request, but it is not contiguous. External fragmentation can be reduced by compaction.
Internal Fragmentation: Internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition, but it is not being used.
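Internal fragmentation is easy to quantify when allocation is done in fixed-size blocks: the waste is the requested size rounded up to a whole number of blocks, minus the request. A sketch with an assumed 512-byte block size:

```python
# Internal fragmentation with fixed-size blocks: each request is rounded up
# to a whole number of blocks, and the rounded-up remainder is wasted.
BLOCK = 512  # bytes per allocation block (assumed for the example)

def internal_fragmentation(request_bytes):
    blocks = -(-request_bytes // BLOCK)    # ceiling division
    return blocks * BLOCK - request_bytes  # allocated minus requested

print(internal_fragmentation(1000))  # 2 blocks = 1024 bytes -> 24 wasted
print(internal_fragmentation(1024))  # exact fit -> 0 wasted
```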
Segments can be of different lengths, so it is harder to find a place for a segment in memory than for a page. With segmented virtual memory we get the benefits of virtual memory, but we still have to do dynamic storage allocation of physical memory. To avoid this, it is possible to combine segmentation and paging into a two-level virtual memory system, where each segment descriptor points to a page table for that segment. This gives some of the advantages of paging (easy placement) with some of the advantages of segments (logical division of the program).
Demand Paging: Demand paging is the paging policy under which a page is not read into memory until it is requested, that is, until there is a page fault on the page.
Page fault interrupt: A page fault interrupt occurs when a memory reference is made to a page that is not in memory. The virtual memory hardware finds the present bit in the page table entry turned off and signals an interrupt.
Thrashing: The problem of many page faults occurring in a short time is called "page thrashing"; a thrashing process spends more time paging than executing.
In the readers-writers problem, processes are divided into two types: readers and writers. Any number of readers may be permitted to read the same data at the same time, but a writer must be given exclusive access. There are two classical solutions to this problem: readers preference and writers preference.
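A minimal sketch of the readers-preference solution, using a reader count protected by a mutex (the variable names and thread counts are illustrative, not from the text):

```python
# Sketch: readers-preference readers-writers. The first reader locks the
# resource against writers; the last reader releases it.
import threading

resource = threading.Lock()        # held exclusively by a writer, or by readers as a group
read_count_lock = threading.Lock() # protects read_count itself
read_count = 0
shared_data = []

def reader():
    global read_count
    with read_count_lock:
        read_count += 1
        if read_count == 1:
            resource.acquire()     # first reader locks out writers
    _ = list(shared_data)          # read concurrently with other readers
    with read_count_lock:
        read_count -= 1
        if read_count == 0:
            resource.release()     # last reader lets writers back in

def writer(value):
    with resource:                 # exclusive access for the writer
        shared_data.append(value)

threads = [threading.Thread(target=writer, args=(1,))] + \
          [threading.Thread(target=reader) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(shared_data)  # [1]
```

Because readers keep re-acquiring the group lock, a steady stream of readers can starve writers; the writers-preference variant fixes that at the cost of possibly starving readers.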
Demand paging is a method that loads pages into memory only on demand. It is mostly used in virtual memory: a page is brought into memory only when a location on that page is referenced during execution. The following steps are generally followed:
1. Attempt to access the page.
2. If the page is valid (in memory), continue processing the instruction as normal.
3. If the page is invalid, a page-fault trap occurs.
4. Check whether the reference is a valid reference to a location on secondary memory; if not, terminate the process (illegal memory access). Otherwise, page in the required page.
5. Schedule the disk operation to read the desired page into main memory.
6. Restart the instruction that was interrupted by the trap.
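The on-demand policy can be sketched with a toy page tracker that counts faults (frame capacity is assumed unlimited here to keep the sketch short; a real pager would also evict pages):

```python
# Sketch: a toy demand pager that loads a page only on first reference
# and counts the resulting page faults.

def demand_page(references):
    """Return the number of page faults for a reference string."""
    in_memory = set()
    faults = 0
    for page in references:
        if page not in in_memory:   # page fault: bring the page in on demand
            faults += 1
            in_memory.add(page)
    return faults

print(demand_page([1, 2, 1, 3, 2, 1]))  # 3 faults: first touches of pages 1, 2, 3
```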
Process synchronization is basically a way to coordinate processes that use shared resources or data. It is essential to ensure synchronized execution of cooperating processes so that data consistency is maintained. Its main purpose is to share resources without interference, using mutual exclusion. There are two types of process synchronization: independent process synchronization and cooperative process synchronization.
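Mutual exclusion can be sketched with a lock guarding a shared counter (thread count and iteration count are arbitrary example values):

```python
# Sketch: mutual exclusion with a lock so concurrent increments of a
# shared counter stay consistent.
import threading

counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        with lock:          # critical section: only one thread at a time
            counter += 1

threads = [threading.Thread(target=add, args=(10_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 40000: no updates were lost
```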
Main memory: Main memory in a computer is RAM (Random Access Memory). It is also known as primary memory, read-write memory, or internal memory. The programs and data that the CPU requires during the execution of a program are stored in this memory.
Secondary memory: Secondary memory refers to storage devices that can store data and programs. It is also known as external memory, additional memory, backup memory, or auxiliary memory. Such storage devices are capable of storing high-volume data. Examples are hard drives, USB flash drives, CDs, etc.
| Primary Memory | Secondary Memory |
|---|---|
| Data can be directly accessed by the processing unit. | Data is first transferred to primary memory and only then routed to the processing unit. |
| It can be both volatile and non-volatile in nature. | It is non-volatile in nature. |
| It is more costly than secondary memory. | It is less costly than primary memory. |
| It is temporary: data is stored only while it is in use. | It is permanent: data is stored until explicitly deleted. |
| Data can be lost whenever there is a power failure. | Data is stored permanently and is not lost in case of a power failure. |
| It is much faster than secondary memory and holds the data currently used by the computer. | It is slower than primary memory and stores different kinds of data in different formats. |
| It is accessed via the data bus. | It is accessed via I/O channels. |
Process: A process is a program that is currently under execution by one or more threads. It is a very important part of the modern-day OS.
Thread: A thread is a path of execution within a process, composed of a program counter, thread ID, stack, and set of registers.
| Process | Thread |
|---|---|
| It is a computer program that is under execution. | It is the smallest execution unit within a process. |
| Processes are heavyweight. | Threads are lightweight. |
| It has its own memory space. | It uses the memory of the process it belongs to. |
| It is more difficult to create a process than a thread. | It is easier to create a thread than a process. |
| It requires more resources than a thread. | It requires fewer resources than a process. |
| It takes more time to create and terminate than a thread. | It takes less time to create and terminate than a process. |
| Processes usually run in separate memory spaces. | Threads usually run in a shared memory space. |
| Processes do not share data with each other. | Threads share data with each other. |
| A process can be divided into multiple threads. | A thread cannot be further subdivided. |
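The "shared memory space" row can be demonstrated with a short sketch: threads of one process all see the same objects, so a list appended to by worker threads is visible to the main thread (names here are illustrative):

```python
# Sketch: threads of one process share its memory, so writes by worker
# threads are visible to the main thread without any copying.
import threading

shared = []

def worker(tag):
    shared.append(tag)   # lands directly in the process's shared memory

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(shared))  # [0, 1, 2]
```

Separate processes, by contrast, would each get their own copy of `shared` and would need explicit IPC (pipes, sockets, shared memory segments) to exchange the values.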
MicroKernel: It is a minimal OS that executes only important functions of OS. It only contains a near-minimum number of features and functions that are required to implement OS.
Example: QNX, Mac OS X, K42, etc.
Monolithic Kernel: It is an OS architecture that supports all basic features of computer components such as resource management, memory, file, etc.
Example: Solaris, DOS, OpenVMS, Linux, etc.
| MicroKernel | Monolithic Kernel |
|---|---|
| Kernel services and user services are kept in different address spaces. | Kernel services and user services usually reside in the same address space. |
| It is smaller in size than a monolithic kernel. | It is larger in size than a microkernel. |
| It is more easily extendible than a monolithic kernel. | It is harder to extend than a microkernel. |
| If a service crashes, the working of the microkernel is not affected. | If a service crashes, the whole system crashes. |
| It uses message queues to achieve inter-process communication. | It uses signals and sockets to achieve inter-process communication. |
The kernel is basically a computer program usually considered as a central component or module of OS. It is responsible for handling, managing, and controlling all operations of computer systems and hardware. Whenever the system starts, the kernel is loaded first and remains in the main memory. It also acts as an interface between user applications and hardware.
Functions of the Kernel: process management and scheduling, memory management, device management, interrupt handling, and I/O communication.
The socket in OS is generally referred to as an endpoint for IPC (Interprocess Communication). Here, the endpoint is referred to as a combination of an IP address and port number. Sockets are used to make it easy for software developers to create network-enabled programs. It also allows communication or exchange of information between two different processes on the same or different machines. It is mostly used in client-server-based systems.
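A minimal sketch of sockets as IPC endpoints, using Python's standard-library `socket.socketpair()` so both "client" and "server" ends live in one process (a real client-server system would instead bind to an IP address and port):

```python
# Sketch: a socket as an IPC endpoint. socketpair() returns two connected
# endpoints, standing in for a client and a server on the same machine.
import socket

server_end, client_end = socket.socketpair()

client_end.sendall(b"ping")        # client sends a request
request = server_end.recv(4)       # server receives it

server_end.sendall(b"pong")        # server replies
reply = client_end.recv(4)         # client receives the reply

server_end.close(); client_end.close()
print(request, reply)  # b'ping' b'pong'
```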
Types of Sockets
There are basically four types of sockets: stream sockets, datagram sockets, raw sockets, and sequenced-packet sockets.
Asymmetric clustering is generally a system in which one of the nodes is kept in hot-standby mode while the rest of the nodes run the applications. The hot-standby node does nothing but monitor the active servers; if a server fails, the standby node takes over. Because a node always stands ready to take over on failure, it is considered a more reliable configuration.
It refers to the ability to execute or perform more than one program on a single-processor machine. This technique was introduced to overcome the problem of underutilization of the CPU and main memory. In simple words, it is the coordinated execution of various programs simultaneously on a single processor (CPU). The main objective of multiprogramming is to have some process running at all times. It improves the utilization of the CPU, as jobs are organized so that the CPU always has one to execute.
Paging: It is generally a memory management technique that allows OS to retrieve processes from secondary storage into main memory. It is a non-contiguous allocation technique that divides each process in the form of pages.
Segmentation: It is generally a memory management technique that divides processes into modules and parts of different sizes. These parts and modules are known as segments that can be allocated to process.
| Paging | Segmentation |
|---|---|
| It is invisible to the programmer. | It is visible to the programmer. |
| The size of pages is fixed. | The size of segments is not fixed. |
| Procedures and data cannot be separated in paging. | Procedures and data can be separated in segmentation. |
| It allows the cumulative total of virtual address spaces to exceed physical main memory. | It allows programs, data, and code to be broken up into independent address spaces. |
| It is mostly available on CPUs and MMU chips. | It is mostly available on Windows servers that may support backward compatibility, while Linux has limited support. |
| Memory access is faster than with segmentation. | It is slower than paging. |
| The OS needs to maintain a free-frame list. | The OS needs to maintain a list of holes in main memory. |
| The type of fragmentation is internal. | The type of fragmentation is external. |
| The size of a page is determined by the available memory. | The size of a segment is determined by the user. |
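The fixed-size nature of pages means a virtual address splits mechanically into a page number and an offset; a quick sketch (the 4 KB page size is a common example value, not something the text specifies):

```python
# Sketch: splitting a virtual address into (page number, offset) for a
# fixed page size, which is what makes paging invisible to the programmer.

PAGE_SIZE = 4096  # a common page size; fixed, hence internal fragmentation

def page_split(vaddr):
    """Return (page number, offset within the page) for a virtual address."""
    return vaddr // PAGE_SIZE, vaddr % PAGE_SIZE

print(page_split(8300))  # (2, 108): page 2, offset 108
```

Segments, whose sizes vary, cannot be split this way; their bounds must be looked up in a segment table, which is part of why segmented access is slower.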
FCFS (First Come First Serve) is a type of OS scheduling algorithm that executes processes in the same order in which they arrive. In simple words, the process that arrives first is executed first. It is non-preemptive in nature. FCFS scheduling can suffer from the convoy effect, with long waiting times (though not strict starvation), if the burst time of the first process is the longest among all the jobs. Burst time here means the time in milliseconds that a process requires for its execution. It is also considered the easiest and simplest OS scheduling algorithm. FCFS is generally implemented with the help of a FIFO (First In First Out) queue.
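FCFS waiting times can be sketched in a few lines; the burst times below are example values chosen to show the convoy effect (all jobs are assumed to arrive at time 0):

```python
# Sketch of FCFS scheduling: processes run in arrival order, so each
# process waits for the total burst time of everything queued before it.

def fcfs_waiting_times(bursts):
    """Waiting time of each process, given burst times in arrival order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # time already spent waiting in the FIFO queue
        elapsed += burst
    return waits

# A long first job (burst 24) makes everything behind it wait: the convoy effect.
print(fcfs_waiting_times([24, 3, 3]))  # [0, 24, 27]
```

Reordering the same jobs shortest-first would give waits of [0, 3, 6], which is why FCFS is simple but rarely optimal for average waiting time.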