Operating System - Interview Questions & Answers for Freshers.

Top Interview Questions and Answers you need to know as a Fresher

If you are preparing for an Operating System interview, then you have reached the right place.

Computer Science Engineering is a broad field of study that deals with operating systems.

It is a fast-growing field that has many opportunities for career growth. An Operating System interview is a type of interview that is designed to assess a candidate's knowledge of Computer Science Engineering. The purpose of the interview is to evaluate the candidate's knowledge and deep understanding of the subject.

The interview may also assess the candidate's communication skills, such as the ability to present complex information in a clear and concise manner.

The Interview is typically conducted by a hiring manager or recruiter who has experience in the field. The interviewer will typically ask a series of questions about the candidate's background and experience. The interviewer will also ask about the candidate's strengths and weaknesses.

This list of interview questions in Operating System includes basic-level, advanced-level, and program-based interview questions.

Here is a commonly asked list of Operating System (Computer Science Engineering) interview questions and answers that both fresher and experienced candidates must prepare to get their dream job.

1 Explain the main purpose of an operating system? Name different operating systems.

The main purpose of an operating system varies with the type of machine:

  • Workstation: individual usability and resource utilization
  • Mainframe: optimize utilization of hardware
  • PC: support complex games and business applications
  • Handheld PCs: easy interface and minimum power consumption

The different operating systems are

  • Batched operating systems
  • Multi-programmed operating systems
  • Timesharing operating systems
  • Distributed operating systems
  • Real-time operating systems

2 What is an operating system?

An operating system is a program that acts as an intermediary between the user and the computer hardware. The purpose of an OS is to provide an environment in which users can execute programs in a convenient and efficient manner. It is a resource allocator responsible for allocating system resources and a control program which controls the operation of the computer hardware.

An operating system is an essential part of any computer system. There is a huge demand for OS developers in the IT industry.

3 What are the main functions of an operating system?

The main functions of an operating system are:

  • It acts as a manager of the computer system, keeping it performing well by managing its computational activities.
  • It helps in the execution of programs.
  • Memory management
  • Device management

4 What Is Virtual Memory?

Virtual memory is a memory management technique that lets processes execute even when they are not entirely in main memory. This is very useful, especially if an executing program cannot fit in the physical memory.

Virtual memory is temporary storage created on a storage drive. Usually, when we open many applications or work on them, RAM is required to execute these applications. But what if RAM is low and you still manage to work on those applications? How? In that case, the operating system uses part of the secondary storage device (hard disk), which acts as RAM while the processes are running.

5 What is a deadlock?

A deadlock is a blockage situation where each process is holding a resource and waiting for another resource that is acquired by some other process. In short, each process is waiting for another process to finish and release its resource.

Consider an example when two cars are coming toward each other on the same track, and there is only one track. None of them can move once they are in front of each other. These two cars will be in a deadlock situation. 

6 What is a thread?

A thread is a lightweight process having some of the properties of a process. A single process can have multiple threads. Threads introduce parallelism and thereby improve an application's responsiveness. For example, different tabs in a browser can be different threads. MS Word uses multiple threads: one thread formats the text, another processes inputs, and so on.
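As a rough sketch in Python (the `worker` function and the shared `results` list are invented for illustration), several threads of one process can run concurrently while sharing the process's memory:

```python
# Minimal sketch of threads in one process: all workers append to the
# same `results` list because threads share their process's memory.
import threading

results = []
lock = threading.Lock()

def worker(name, n):
    total = sum(range(n))          # some work done by this thread
    with lock:                     # serialize access to shared state
        results.append((name, total))

threads = [threading.Thread(target=worker, args=(f"t{i}", 1000)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                       # wait for every thread to finish

print(len(results))  # 4 -- every thread appended to the shared list
```

Note that the lock is needed here for the same reason process synchronization exists: the list is shared state.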

7 What are the advantages of multiprocessor system?

Systems which have more than one processor are called multiprocessor systems. These systems are also known as parallel systems or tightly coupled systems. Multiprocessor systems have the following advantages.

  • Increased Throughput: Multiprocessor systems have better performance than single-processor systems, with shorter response times and higher throughput. Users get more work done in less time.
  • Reduced Cost: Multiprocessor systems can cost less than equivalent multiple single-processor systems because they can share resources such as memory, peripherals, etc.
  • Increased Reliability: Multiprocessor systems have more than one processor, so if one processor fails, the complete system will not stop. In these systems, functions are divided among the different processors.

8 Explain the concept of the batched operating systems?

In a batched operating system, users give their jobs to an operator who sorts the programs according to their requirements and executes them. This is time-consuming but keeps the CPU busy all the time.

9 Explain the concept of the multi-programmed operating systems?

A multi-programmed operating system can execute a number of programs concurrently. The operating system fetches a group of programs from the job pool in secondary storage, which contains all the programs to be executed, and places them in main memory. This process is called job scheduling. It then chooses a program from the ready queue and gives it to the CPU to execute. When an executing program needs an I/O operation, the operating system fetches another program and hands it to the CPU for execution, thus keeping the CPU busy all the time.

10 What is a real-time system?

A real-time system is used when rigid time requirements have been placed on the operation of a processor. It has well-defined, fixed time constraints.

11 What is kernel? Explain monolithic kernel

The kernel is the core and most important part of a computer operating system; it provides basic services for all other parts of the OS.

A monolithic kernel is a kernel in which all operating system code resides in a single executable image.

 

12 What do you mean by a process and the different states of a process?

An executing program is known as a process. There are two types of processes:

  • Operating System Processes
  • User Processes

A list of different states of process:

  • New State: It is the first State of a process. This is the state where the process is just created.
  • Ready State: After process creation, the process is supposed to be executed. But it is first put into the ready queue, where it waits for its turn to get executed. 
  • Ready Suspended State: Sometimes, when many processes come into a ready state, then due to memory constraints, some processes are shifted from a ready State to a ready suspended State.
  • Running State: One process from the ready state queue is put into the running state queue by the CPU for execution. And that process will be now in a running state.
  • Waiting or Blocked State: If during execution the process wants to do an I/O operation, such as writing to a file, or a higher-priority process comes along, then the running process goes into a blocked or waiting state.

13 What are the benefits of a multiprocessor system?

A multiprocessor system is a type of system that includes two or more CPUs. It processes different computer programs at the same time, usually with two or more CPUs sharing a single memory.

Benefits:

  • Such systems are used widely nowadays to improve performance in systems that are running multiple programs concurrently. 
  • By increasing the number of processors, a greater number of tasks can be completed in unit time. 
  • One also gets a considerable increase in throughput and is cost-effective also as all processors share the same resources.
  • It simply improves the reliability of the computer system.

14 Explain the concept of the multi-processor systems or parallel systems?

They contain a number of processors to increase execution speed, reliability, and economy.

They are of two types:

  • Symmetric multiprocessing: In symmetric multiprocessing, each processor runs an identical copy of the OS, and these copies communicate with each other as and when needed.
  • Asymmetric multiprocessing: In asymmetric multiprocessing, each processor is assigned a specific task.

15 Describe Main memory and Secondary memory storage in brief.

Main memory is also called random access memory (RAM). CPU can access Main memory directly. Data access from main memory is much faster than Secondary memory. It is implemented in a semiconductor technology, called dynamic random-access memory (DRAM).

Main memory is usually too small to store all needed programs. It is a volatile storage device that loses its contents when power is turned off. Secondary memory can store large amounts of data and programs permanently. The magnetic disk is the most common secondary storage device. If a user wants to execute a program, it must be brought from secondary memory into main memory, because the CPU can only access main memory directly.

16 What are the main functions of a Kernel?

The main functions of a Kernel are:

  • Process management
  • Device management
  • Memory management
  • Interrupt handling
  • I/O communication
  • File system management

17 Explain the concept of Real-time operating systems?

A real-time operating system is used when rigid time requirements have been placed on the operation of a processor or the flow of data; thus, it is often used as a control device in a dedicated application. Here, sensors bring data to the computer. The computer must analyze the data and possibly adjust controls to modify the sensor input.

They are of two types:

  • Hard real-time OS: A hard real-time OS has well-defined, fixed time constraints.
  • Soft real-time OS: A soft real-time OS has less stringent timing constraints.

18 Explain the concept of the timesharing operating systems?

It is a logical extension of the multi-programmed OS, where the user can interact with the program. The CPU executes multiple jobs by switching among them, but the switches occur so frequently that each user feels as if the operating system is running only their program.

19 What is process synchronization?

A situation where several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the access takes place, is called a race condition. To guard against the race condition we need to ensure that only one process at a time can be manipulating the same data. The technique we use for this is called process synchronization.
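A minimal Python sketch of this idea (the shared `counter` and the `increment` function are invented for illustration): without the lock, the interleaved read-modify-write updates can lose increments; the lock makes the critical section atomic:

```python
# Sketch: a classic race on a shared counter, and the lock that fixes it.
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:            # critical section: one thread at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 -- without the lock the result can be lower
```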

20 What is thrashing?

It is a phenomenon in virtual memory schemes when the processor spends most of its time swapping pages rather than executing instructions. This is due to an inordinate number of page faults.

21 What is fragmentation? Tell about different types of fragmentation?

When many free blocks are too small to satisfy any request, fragmentation occurs. External fragmentation and internal fragmentation are the two types of fragmentation. External fragmentation happens when a dynamic memory allocation algorithm allocates some memory and a small piece is left over that cannot be effectively used. Internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks.

22 What is cache memory?

Cache memory is random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM. As the microprocessor processes data, it looks first in the cache memory; if it finds the data there (from a previous reading of data), it does not have to do the more time-consuming reading of data from larger memory.

23 Differentiate between Compiler and Interpreter?

An interpreter reads one instruction at a time and carries out the actions implied by that instruction. It does not perform any translation. A compiler, on the other hand, translates the entire program at once.

24 What is cache-coherency?

In a multiprocessor system there exist several caches, each of which may contain a copy of the same variable A. A change in one cache should immediately be reflected in all other caches; this process of maintaining the same value of a data item in all the caches is called cache coherency.

25 What is synchronization? What are the different synchronization mechanisms?

Synchronization means controlling access to a resource that is shared by two or more threads or processes. The different synchronization mechanisms are:

  • Mutex
  • Semaphores
  • Monitors
  • Condition variables
  • Critical regions
  • Read/ Write locks
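As a small illustration of one mechanism from the list, here is a Python sketch of a counting semaphore (the `task` function and the permit count of 2 are invented for the example) that never admits more than two threads into a section at once:

```python
# Sketch: a counting semaphore limiting concurrency to 2 threads.
import threading

sem = threading.Semaphore(2)      # 2 "permits" available
active = 0
peak = 0
guard = threading.Lock()

def task():
    global active, peak
    with sem:                     # acquire a permit (blocks if none left)
        with guard:
            active += 1
            peak = max(peak, active)
        # ... do work while holding the permit ...
        with guard:
            active -= 1

threads = [threading.Thread(target=task) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert peak <= 2  # the semaphore never admitted more than 2 at once
```

A mutex is just the special case of a semaphore with a single permit.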

26 What is the basic difference between pre-emptive and non-pre-emptive scheduling?

Pre-emptive scheduling allows a process to be interrupted while it is executing, taking the CPU away and allocating it to another process, while non-pre-emptive scheduling ensures that a process keeps the CPU until it completes execution or voluntarily switches to the waiting state.

27 What are the four necessary and sufficient conditions behind the deadlock?

These are the 4 conditions:

  • Mutual Exclusion Condition: It specifies that the resources involved are non-sharable.
  • Hold and Wait Condition: It specifies that there must be a process that is holding at least one resource already allocated to it while waiting for additional resources that are currently being held by other processes.
  • No-Preemption Condition: Resources cannot be taken away while they are being used by processes.
  • Circular Wait Condition: It specifies that the processes in the system form a circular list or chain where each process in the chain is waiting for a resource held by the next process in the chain.

28 Which are the necessary conditions to achieve a deadlock?

There are 4 necessary conditions to achieve a deadlock:

  • Mutual Exclusion: At least one resource must be held in a non-sharable mode. If any other process requests this resource, then that process must wait for the resource to be released.
  • Hold and Wait: A process must be simultaneously holding at least one resource and waiting for at least one resource that is currently being held by some other process.
  • No preemption: Once a process is holding a resource ( i.e. once its request has been granted ), then that resource cannot be taken away from that process until the process voluntarily releases it.
  • Circular Wait: A set of processes { P0, P1, P2, . . ., PN } must exist such that every P[ i ] is waiting for P[ ( i + 1 ) % ( N + 1 ) ].

29 What is RAID? What are the different RAID levels?

A Redundant Array of Independent Disks (RAID) is used to store data across multiple disks to improve overall performance and, at most levels, to provide redundancy. Following are the different RAID levels:

RAID 0 – Striped Disk Array without fault tolerance.

In this, data is striped across different disks, so it can be accessed from several disks in parallel. It offers the best performance, but it does not provide fault tolerance.

RAID 1 – Mirroring and duplexing

This provides fault tolerance as data is stored on different disks. If one fails then data can be accessed from another drive.

RAID 3 – Bit-interleaved Parity

RAID 3 is not used much. Data is divided evenly and stored on two or more disks, plus there is a dedicated drive for parity storage.

RAID 5 – Block-interleaved distributed Parity

Data is divided evenly and stored on two or more disks, plus parity is distributed in different drives.

RAID 6 – P+Q Redundancy.

Data is divided evenly and stored on two or more disks, plus parity is distributed in two different drives.

30 What is Banker’s algorithm?

Banker's algorithm is a deadlock-avoidance method. It is named after the banking system, where a bank never allocates available cash in such a manner that it can no longer satisfy the requirements of all of its customers. In the same way, in an operating system, when a new process is to be executed it requires some resources, and the OS grants them only if doing so keeps the system in a safe state.

So Banker's algorithm needs to know how many resources each process could possibly request, how many resources are currently held by processes, and how many resources the system has. Resources are then assigned only if the available resources can satisfy the request without risking deadlock.
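The safety check at the heart of Banker's algorithm can be sketched in Python as follows (the matrices are the classic textbook instance with 5 processes and 3 resource types; the variable names are illustrative):

```python
# Sketch of the safety check used by Banker's algorithm: the state is
# safe if some ordering of the processes lets every one of them finish.
def is_safe(available, allocation, need):
    work = list(available)
    finish = [False] * len(allocation)
    safe_seq = []
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finish):
            if not done and all(need[i][j] <= work[j] for j in range(len(work))):
                # Pretend P_i runs to completion and releases its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                safe_seq.append(i)
                progress = True
    return all(finish), safe_seq

# Classic textbook instance (Need = Max - Allocation).
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 3]]
print(is_safe(available, allocation, need))  # (True, [1, 3, 4, 0, 2])
```

A request is granted only if pretending to grant it still leaves `is_safe` returning True.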

 

31 Differentiate logical from physical address space.

Logical address: A logical address is a virtual address generated by the CPU. It is the address of data or an instruction as used by the program, and it is visible to the user.

Physical address: A physical address is generated by the MMU (Memory Management Unit). It is the actual location in a memory unit mapped to the corresponding logical address, and it is not visible to the user.

32 What is a multiprocessor system, and what are the advantages of using it?

When a computer system has two or more CPUs, all CPUs share the RAM. Such a system is also known as a tightly coupled or parallel system. A process is divided into different subprocesses, and different processors work on different subprocesses.

Advantages of multiprocessor system

  1. Increased Throughput: Throughput increases when several processors work at the same time. This leads to shorter response times, and more work is done in less time.
  2. Reduced Cost: Multiprocessor systems are more cost-effective than multiple single-processor systems, as they share resources like devices and memory.
  3. Increased Reliability: Multiprocessor systems are more reliable; the system will still work if one processor fails. 

33 What is fragmentation and what are its types?

Fragmentation is a phenomenon of memory wastage. It reduces the capacity and performance because space is used inefficiently.

There are two types of fragmentation:

  • Internal fragmentation: It occurs in systems that have fixed-size allocation units.
  • External fragmentation: It occurs in systems that have variable-size allocation units.

34 Explain the main functions of file management in OS?

The main functions of file management in an operating system are:

Creating and deleting files/directories: It maintains an optimal file structure for storing files in memory.

Providing I/O support for multiple users: It provides devices for inputting and outputting data.

Ensuring data in the file is correct: It ensures that the data is not corrupted before storing it.

Backup and restoration of files on storage media: It takes backups and can restore a file in case it is deleted.

Tracking data location: A file is not necessarily stored in a contiguous way; it can be stored in different blocks on the disk (non-contiguously), so there is a need to track which part of the file is stored in which block.

Storing data: It decides where to store the data in memory.

35 How does dynamic loading aid in better memory space utilization?

With dynamic loading, a routine is not loaded until it is called. This method is especially useful when large amounts of code are needed to handle infrequently occurring cases, such as error routines.

36 What is the basic function of paging?

Paging is a memory management scheme that permits the physical address space of a process to be noncontiguous. It avoids the considerable problem of having to fit variable-sized memory chunks onto the backing store.
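The address translation that paging relies on can be sketched in Python (the 4 KiB page size and the page table contents are made-up values for the example):

```python
# Sketch of the translation an MMU performs under paging: split a
# logical address into (page number, offset), then look the page up
# in a page table to find its frame in physical memory.
PAGE_SIZE = 4096                   # 4 KiB pages (a common choice)

page_table = {0: 5, 1: 2, 2: 9}    # page number -> frame number

def translate(logical_addr):
    page   = logical_addr // PAGE_SIZE
    offset = logical_addr %  PAGE_SIZE
    frame  = page_table[page]      # a missing entry models a page fault
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 8196
```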

37 What are multithreading models?

Many operating systems provide both kernel threads and user threads; the ways they are combined are called multithreading models. There are three types:

  • Many-to-one model (many user-level threads mapped to one kernel thread): Only one user thread can access the kernel at a time, so this model does not allow parallel execution on multiprocessors. Example: Green threads of Solaris.
  • One-to-one model: This model allows multiple threads to run in parallel on multiprocessing systems. Its disadvantage is that creating a user thread requires creating a corresponding kernel thread. Examples: Windows NT, Windows 2000, OS/2.
  • Many-to-many model: This model allows the user to create as many threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor. Examples: Solaris 2, IRIX, HP-UX, and Tru64 UNIX.

38 What are the advantages and disadvantages of threads?

The main advantages of using threads are:

  • No special communication mechanism is required.
  • Readability and simplicity of program structure increases with threads.
  • System becomes more efficient with less requirement of system resources.

The main disadvantages of using threads are:

  • Threads cannot be re-used, as they exist within a single process.
  • A misbehaving thread can corrupt the address space of its entire process.
  • They need synchronization for concurrent read-write access to memory.

39 What are the different types of scheduling algorithms?

The scheduling algorithms decide which processes in the ready queue are to be allocated to the CPU for execution. Scheduling algorithms can be broadly classified on the basis of:

Preemptive algorithms: In this type of scheduling, a process may be interrupted during execution and the CPU may be allocated to another process.

Non-preemptive algorithms: In this type of scheduling, once the CPU has been allocated to a process, the process does not release the CPU until it terminates or switches to the waiting state.

Round Robin Scheduling: The round robin algorithm works on the concept of a time slice, also known as a quantum. In this algorithm, every process is given a predefined amount of time to execute. If a process does not complete within its predefined time, the CPU is assigned to the next process waiting in the queue. In this way, a continuous execution of processes is maintained, which would not be possible with the FCFS algorithm.

First Come First Served Scheduling: The first come first serve (FCFS) scheduling algorithm simply schedules jobs according to their arrival time. The job which comes first in the ready queue gets the CPU first: the earlier the arrival time of a job, the sooner it gets the CPU.

Shortest Job First Scheduling: Shortest job first (SJF), also known as shortest job next (SJN), is a scheduling policy that selects the waiting process with the smallest execution time to execute next. It can be preemptive or non-preemptive.
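As an illustrative sketch (the job names and times are invented), non-preemptive SJF can be simulated in Python like this:

```python
# Sketch of non-preemptive SJF: at each step, among the jobs that have
# already arrived, run the one with the smallest burst time to completion.
def sjf_order(jobs):                  # jobs: list of (name, arrival, burst)
    pending = sorted(jobs, key=lambda j: j[1])
    time, order = 0, []
    while pending:
        ready = [j for j in pending if j[1] <= time]
        if not ready:                 # CPU idle until the next arrival
            time = pending[0][1]
            continue
        job = min(ready, key=lambda j: j[2])   # smallest burst wins
        pending.remove(job)
        order.append(job[0])
        time += job[2]
    return order

jobs = [("A", 0, 7), ("B", 2, 4), ("C", 4, 1), ("D", 5, 4)]
print(sjf_order(jobs))  # ['A', 'C', 'B', 'D']
```

After A finishes, the 1-unit job C jumps ahead of the earlier-arriving B, which is exactly the SJF policy.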

40 What is a long term scheduler & short term schedulers?

Long-term schedulers are the job schedulers that select processes from the job queue and load them into memory for execution. Short-term schedulers are the CPU schedulers that select a process from the ready queue and allocate the CPU to it.

41 What are deadlock prevention techniques?

Mutual exclusion: Some resources, such as read-only files, need not be mutually exclusive; they should be sharable. But some resources, such as printers, must be mutually exclusive.

Hold and wait: To avoid this condition we have to ensure that a process requesting a resource does not hold any other resources.

No preemption: If a process holding some resources requests another resource that cannot be immediately allocated to it (that is, the process must wait), then all the resources it currently holds are preempted (implicitly released).

Circular wait: The way to ensure that this condition never holds is to impose a total ordering of all resource types and to require that each process requests resources in an increasing order of enumeration.

42 What is a safe state and a safe sequence?

A system is in a safe state only if there exists a safe sequence. A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation state if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i.

43 Explain briefly about processor, assembler, compiler, loader, linker and the functions executed by them.

Processor :

A processor is the part of a computer system that executes instructions. It is also called a CPU.

Assembler : 

An assembler is a program that takes basic computer instructions and converts them into a pattern of bits that the computer’s processor can use to perform its basic operations. Some people call these instructions assembler language and others use the term assembly language.

Compiler :

A compiler is a special program that processes statements written in a particular programming language and turns them into machine language or “code” that a computer’s processor uses. Typically, a programmer writes language statements in a language such as Pascal or C one line at a time using an editor. The file that is created contains what are called the source statements. The programmer then runs the appropriate language compiler, specifying the name of the file that contains the source statements.

Loader : 

In a computer operating system, a loader is a component that locates a given program (which can be an application or, in some cases, part of the operating system itself) in offline storage (such as a hard disk), loads it into main storage (in a personal computer, this is called random access memory), and gives that program control of the computer.

Linker:

The linker performs the linking of libraries with the object code, turning the object code into executable machine code.

44 What is the difference between Hard and Soft real-time systems?

A hard real-time system guarantees that critical tasks complete on time. This goal requires that all delays in the system be bounded, from the retrieval of stored data to the time it takes the operating system to finish any request made of it. A soft real-time system is one where a critical real-time task gets priority over other tasks and retains that priority until it completes. As in hard real-time systems, kernel delays need to be bounded.

45 Explain how a copying garbage collector works. How can it be implemented using semispaces?

The copying garbage collector basically works by going through live objects and copying them into a specific region in the memory. This collector traces through all the live objects one by one. This entire process is performed in a single pass. Any object that is not copied in memory is garbage.

The copying garbage collector can be implemented using semispaces by splitting the heap into two halves, each a contiguous memory region. All allocations are made from one half of the heap only. When that half fills up, the collector is invoked and copies the live objects into the other half. The first half then contains only garbage and is eventually overwritten in the next pass.
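A toy Python model of this idea follows (objects are plain dicts and each space is a list; a real collector such as Cheney's algorithm copies breadth-first with a scan pointer rather than recursively, and all names here are invented):

```python
# Toy semispace copy: live objects (reachable from `roots`) are copied
# from from-space to to-space; anything not copied is garbage.
def copy_collect(roots, from_space):
    to_space, forwarding = [], {}     # forwarding: old index -> new index

    def copy(idx):
        if idx in forwarding:         # already copied (shared object)
            return forwarding[idx]
        new_idx = len(to_space)
        forwarding[idx] = new_idx     # install forwarding before recursing
        obj = dict(from_space[idx])
        to_space.append(obj)
        obj["children"] = [copy(c) for c in obj["children"]]
        return new_idx

    new_roots = [copy(r) for r in roots]
    return new_roots, to_space        # from-space is now all garbage

heap = [
    {"name": "a", "children": [2]},   # live: reachable from the root
    {"name": "b", "children": []},    # garbage: nothing points here
    {"name": "c", "children": []},    # live via 'a'
]
roots, new_heap = copy_collect([0], heap)
print([o["name"] for o in new_heap])  # ['a', 'c'] -- 'b' was never copied
```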

46 State the advantages of segmented paging over pure segmentation?

In broad terms paging is a memory management technique that allows a physical address space of a process to be non-contiguous.

Segmented paging has a certain set of advantages over pure segmentation such as:

  • Segmented paging does not have any source of external fragmentation.
  • Since a segment's existence is not restricted to a contiguous memory range, it can easily grow and does not have to be fitted into a single contiguous region of physical memory.
  • With segmented paging the addition of an offset and a base is simpler as it is only an append operation instead of it being a full addition operation.

47 What is spooling?

The full form of SPOOL is Simultaneous Peripheral Operations Online. In spooling, data is gathered temporarily to be executed later by a program or device. For example, we may give several print commands to a printer; spooling keeps all the jobs in a disk file and queues them in the order the print commands were received. Spooling is based on the first in, first out principle: the request that comes first will be processed first, and a request which comes later will be processed after it.

48 What is multiprogramming, and what is its main objective?

In multiprogramming, multiple processes are loaded into main memory (from the job pool), and the operating system picks jobs one by one and starts executing them. The next process from the job pool is picked up when the current program stops executing or requires I/O. When there are multiple jobs in the ready state, which job to choose is decided through CPU scheduling. Multiprogramming never leaves the CPU idle and maximizes CPU usage.

Objectives of multiprogramming 

  • It manages the resources of the system. 
  • A multiprogramming operating system can execute multiple programs using only one processor machine. 

49 Explain compaction.

The free space is broken into different pieces when the processes are loaded and removed from the memory. These free spaces are there in a scattered way. So to store the process, we need to compact these scattered pieces of memory to form a large chunk of memory in case any large process comes in. This process of combining scattered fragments of memory is called compaction.
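A simplified Python sketch of compaction follows (the block layout and sizes are invented; a real OS must also update every pointer and base register that refers to the moved blocks):

```python
# Sketch of compaction: slide all allocated blocks to the start of
# memory so the scattered free pieces merge into one large hole.
# Each block is (start, size, owner); owner None marks a free hole.
def compact(blocks, memory_size):
    addr, compacted = 0, []
    for start, size, owner in sorted(blocks):
        if owner is not None:          # keep allocated blocks, drop holes
            compacted.append((addr, size, owner))
            addr += size
    free = memory_size - addr          # one contiguous hole at the end
    return compacted, free

blocks = [(0, 100, "P1"), (100, 50, None), (150, 80, "P2"),
          (230, 120, None), (350, 60, "P3")]
new_layout, free = compact(blocks, 512)
print(free)  # 512 - (100 + 80 + 60) = 272, now one contiguous hole
```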

50 What is the difference between a page and a frame?

Programs are stored on secondary storage (hard disk), which is divided into fixed-size blocks called pages, and in the same way main memory is divided into blocks of the same size called frames. The page is the smallest unit of memory managed by the computer's operating system, whether physical or virtual.

So, in short, we can say Physical memory is divided into FRAME, and logical memory is divided into PAGE.

51 How to prevent deadlock?

1. Mutual Exclusion

When resources are sharable, there is no contention for them, which prevents the deadlock condition.

2. Hold and Wait

In this condition, a process holding resources is waiting for other resources. But there are ways to eliminate this condition:

  1. When a process specifies beforehand all the resources it requires. In this way, the process doesn't have to wait for allocation later. 
  2. By releasing all the resources the process is currently holding and then making a new request. 

Note: Releasing all held resources can cause problems, as the work already done with those resources may have to be redone.

3. No preemption

Preemption means temporary interruption of program execution. Normally processes cannot be preempted. Suppose a process P1 wants a resource R1 held by process P2; this situation can lead to a deadlock state. To avoid deadlock, process P2 can be preempted so that resource R1 is released and given to process P1. In this way we can prevent deadlock.

Note: This can lead to an inconsistent state.

4. Circular Wait

We can eliminate the circular wait condition by giving a priority (ordering) to each resource. A process must access resources in increasing order of priority. If a request is made for a lower-priority resource while a higher-priority one is held, the request is considered invalid.
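This resource-ordering idea can be sketched in Python with ranked locks (the ranks and the `worker` function are invented for illustration):

```python
# Sketch of breaking circular wait: give every lock a rank and always
# acquire locks in increasing rank order, in every thread.
import threading

lock_a = threading.Lock()   # rank 1
lock_b = threading.Lock()   # rank 2

def acquire_in_order(*ranked_locks):
    # Sort by rank so no two threads ever hold the locks in opposite order.
    for _, lock in sorted(ranked_locks, key=lambda p: p[0]):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()

def worker():
    # Even if a thread "wants" b before a, the ordering prevents deadlock.
    acquire_in_order((2, lock_b), (1, lock_a))
    release_all(lock_a, lock_b)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("no deadlock")
```

With a global acquisition order, no cycle of waiting threads can form, so the circular wait condition can never hold.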

52 What is Throughput, Turnaround time, waiting time and Response time?


  • Throughput – number of processes that complete their execution per time unit
  • Turnaround time – amount of time to execute a particular process
  • Waiting time – amount of time a process has been waiting in the ready queue
  • Response time – amount of time it takes from when a request was submitted until the first response is produced, not output (for time-sharing environment)

53 What is multi tasking, multi programming, multi threading?

    Multi programming:

    Multiprogramming is the technique of running several programs at a time using time-sharing. It allows a computer to do several things at the same time and creates logical parallelism. The operating system keeps several jobs in memory simultaneously, selects a job from the job pool, and starts executing it; when that job needs to wait for an I/O operation, the CPU is switched to another job. The main idea is that the CPU is never idle.

    Multi tasking:

    Multitasking is the logical extension of multiprogramming. The concept is quite similar, but the difference is that switching between jobs occurs so frequently that users can interact with each program while it is running. This concept is also known as a time-sharing system. A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of the time-shared system.

    Multi threading:

    An application typically is implemented as a separate process with several threads of control. In some situations a single application may be required to perform several similar tasks: for example, a web server accepts client requests for web pages, images, sound, and so forth. A busy web server may have many clients concurrently accessing it. If the web server ran as a traditional single-threaded process, it would be able to service only one client at a time, and the amount of time a client might have to wait for its request to be serviced could be enormous.

    So it is efficient to have one process that contains multiple threads serving the same purpose. This approach multithreads the web-server process: the server creates a thread that listens for client requests, and when a request arrives, rather than creating another process it creates another thread to service the request. Multithreading is used to gain responsiveness, resource sharing, economy, and utilization of multiprocessor architectures.
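The thread-per-request idea can be sketched with Python's thread pool; `handle_request` is a hypothetical stand-in for the real work of serving a page:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(url):
    # Stand-in for real work (reading a file, rendering a page, ...).
    return f"served {url}"

# One process, many threads: each incoming request is dispatched to a
# worker thread instead of forking a whole new process per client.
requests = ["/index.html", "/logo.png", "/about.html"]
with ThreadPoolExecutor(max_workers=4) as pool:
    responses = list(pool.map(handle_request, requests))
```

Because threads share the process's address space, they can also share caches and configuration cheaply, which is the "resource sharing" advantage mentioned above.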

    54 Explain the difference between microkernel and macro kernel?

    Micro-Kernel: A micro-kernel is a minimal operating system that performs only the essential functions of an operating system. All other operating-system functions are performed by system processes.

    Monolithic: A monolithic operating system is one where all operating system code is in a single executable image and all operating system code runs in system mode.

    55 What is CPU Scheduler?

    The CPU scheduler selects from among the processes in memory that are ready to execute and allocates the CPU to one of them. CPU scheduling decisions may take place when a process:

    1. Switches from running to waiting state.
    2. Switches from running to ready state.
    3. Switches from waiting to ready state.
    4. Terminates.

    Scheduling under 1 and 4 is nonpreemptive. All other scheduling is preemptive.

    56 Explain the meaning of mutex.

    Mutex is the short form of 'Mutual Exclusion object'. A mutex allows multiple threads to share the same resource, such as a file, but not simultaneously. A mutex with a unique name is created at the time of starting a program.

    A thread that needs the resource must lock the mutex before using it; other threads are then blocked from acquiring it. When the data is no longer used or needed, the mutex is unlocked.
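A minimal sketch in Python: four threads increment a shared counter, and the mutex guarantees only one thread is inside the critical section at a time:

```python
import threading

counter = 0
mutex = threading.Lock()   # the "mutual exclusion object"

def increment(n):
    global counter
    for _ in range(n):
        with mutex:        # lock before touching the shared resource
            counter += 1   # critical section
        # the lock is released automatically on leaving the `with` block

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is exactly 4 * 10_000 because increments never interleave
```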

    57 What is Context Switch?

    Switching the CPU to another process requires saving the state of the old process and loading the saved state of the new process. This task is known as a context switch. Context-switch time is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions (such as a single instruction to load or store all registers).

    58 What is pre-emptive and non-preemptive scheduling?

    Preemptive scheduling: Preemptive scheduling is prioritized. The highest-priority process should always be the one currently using the CPU, so a running process can be interrupted when a higher-priority process becomes ready.

    Non-preemptive scheduling: Once a process enters the running state, it is not removed from the CPU until it finishes its service time.

    59 Difference between Primary storage and secondary storage?

    Primary memory is the main memory (RAM), where the operating system resides and where programs are loaded for execution.

    Secondary memory consists of external storage devices such as hard disks, CDs, and floppy or magnetic discs. Secondary storage cannot be directly accessed by the CPU.

    60 What are the disadvantages of context switching?

    The time taken to switch from one process to another is pure overhead, because the system does no useful work while switching. One way to reduce this overhead is to use threads whenever possible, since switching between threads of the same process is cheaper than a full process switch.

    61 What are different tasks of Lexical Analysis?

    The purpose of the lexical analyzer is to partition the input text, delivering a sequence of comments and basic symbols. Comments are character sequences to be ignored, while basic symbols are character sequences that correspond to terminal symbols of the grammar defining the phrase structure of the input.

    62 Why paging is used?

    Paging is a solution to the external fragmentation problem. It permits the logical address space of a process to be noncontiguous, allowing the process to be allocated physical memory wherever it is available.

    63 Give a non-computer example of preemptive and non-preemptive scheduling?

    Consider any system where people use some kind of resource and compete for it. A non-computer example of preemptive scheduling is traffic on a single-lane road: in an emergency, when an ambulance approaches, the other vehicles give way to it. An example of non-preemptive scheduling is people standing in a queue for tickets: each person is served completely before the next one begins.

    64 What is a Safe State and its’ use in deadlock avoidance?

    When a process requests an available resource, the system must decide whether immediate allocation leaves the system in a safe state.

    • System is in safe state if there exists a safe sequence of all processes.
    • A sequence is safe if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i.
    • If Pi resource needs are not immediately available, then Pi can wait until all Pj have finished.
    • When Pj is finished, Pi can obtain needed resources, execute, return allocated resources, and terminate. When Pi terminates, Pi + 1 can obtain its needed resources, and so on.
    • Deadlock avoidance ⇒ ensure that a system will never enter an unsafe state.
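The safety test behind deadlock avoidance (the safety check of the Banker's algorithm) can be sketched as follows; the matrices below use the classic textbook example and are illustrative only:

```python
def is_safe(available, max_need, allocated):
    """Return True if a safe sequence of all processes exists."""
    n = len(max_need)          # number of processes
    work = list(available)     # resources currently free
    finished = [False] * n
    while True:
        progressed = False
        for i in range(n):
            need = [m - a for m, a in zip(max_need[i], allocated[i])]
            if not finished[i] and all(nd <= w for nd, w in zip(need, work)):
                # Pi can run to completion and return what it holds.
                work = [w + a for w, a in zip(work, allocated[i])]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)

# Classic instance: a safe sequence exists (e.g. P1, P3, P4, P0, P2).
available = [3, 3, 2]
max_need  = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocated = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
safe = is_safe(available, max_need, allocated)   # True for this instance
```

A request is granted only if pretending to grant it still leaves `is_safe` returning True; otherwise the process must wait.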

    65 What is Direct Access Method?

    Direct Access method is based on a disk model of a file, such that it is viewed as a numbered sequence of blocks or records. It allows arbitrary blocks to be read or written. Direct access is advantageous when accessing large amounts of information.

    66 What is the best page size when designing an operating system?

    The best paging size varies from system to system, so there is no single best when it comes to page size. There are different factors to consider in order to come up with a suitable page size, such as page table, paging time, and its effect on the overall efficiency of the operating system.

    67 What are the primary functions of VFS?

    VFS, or Virtual File System, separates generic file-system operations from their implementation by defining a clean VFS interface. It is based on a file-representation structure known as a vnode, which contains a numerical designator needed to support network file systems.

    68 What is the purpose of an I/O status information?

    I/O status information provides information about which I/O devices are to be allocated for a particular process. It also shows which files are opened, and other I/O device state.

    69 What is multitasking?

    Multitasking is the process within an operating system that allows the user to run several applications at the same time. However, only one application is active at a time for user interaction, although some applications can run "behind the scenes".

    70 Difference between Logical and Physical Address Space?

    The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.

    • Logical address: an address generated by the CPU; also referred to as a virtual address.
    • Physical address: the address seen by the memory unit.

    Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in execution-time address-binding scheme
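A toy example of execution-time translation from a logical (virtual) address to a physical address, assuming a 1 KB page size and a made-up page table:

```python
# Hypothetical setup: 1 KB pages, page table maps page -> frame.
PAGE_SIZE = 1024
page_table = {0: 5, 1: 2, 2: 7}   # made-up mappings

def translate(logical_address):
    page = logical_address // PAGE_SIZE    # high bits select the page
    offset = logical_address % PAGE_SIZE   # low bits are unchanged
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

physical = translate(2 * 1024 + 100)   # page 2, offset 100 -> frame 7
```

The offset survives translation untouched; only the page number is replaced by the frame number, which is why logical and physical addresses differ under execution-time binding.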

    71 What is a data register and address register?

    Data registers: 

    Data registers can be assigned to a variety of functions by the programmer. They can be used with any machine instruction that performs operations on data. 

    Address registers : 

    Address registers contain main memory addresses of data and instructions or they contain a portion of the address that is used in the calculation of the complete addresses.

    72 Explain the concept of the batched operating systems?

    In a batched operating system, users give their jobs to an operator, who sorts the programs according to their requirements and executes them in batches. This is time-consuming for users but keeps the CPU busy all the time.

    73 What do you understand by transaction atomicity?

    A transaction can be considered a series of read and write operations on some data, followed by a commit operation. Transaction atomicity means that if a transaction does not complete successfully, it must be aborted and any changes it made during execution must be rolled back. In other words, a transaction must appear as a single operation that cannot be divided. This ensures that the integrity of the data being updated is maintained. Without atomicity, a transaction aborted midway could leave the data inconsistent, since two transactions may be sharing the same data value.

    74 Binding of Instructions and Data to Memory?

    Address binding of instructions and data to memory addresses can happen at three different stages

    Compile time: If memory location known a priori, absolute code can be generated; must recompile code if starting location changes.

    Load time: Must generate relocatable code if the memory location is not known at compile time.

    Execution time: Binding is delayed until run time if the process can be moved during its execution from one memory segment to another. Hardware support is needed for address maps (e.g., base and limit registers).

    75 What are Dynamic Loading, Dynamic Linking and Overlays?

    Dynamic Loading:

    • Routine is not loaded until it is called
    • Better memory-space utilization; unused routine is never loaded.
    • Useful when large amounts of code are needed to handle infrequently occurring cases.
    • No special support from the operating system is required; it is implemented through program design.

    Dynamic Linking:

    • Linking postponed until execution time.
    • Small piece of code, stub, used to locate the appropriate memory-resident library routine.
    • Stub replaces itself with the address of the routine, and executes the routine.
    • Operating-system support is needed to check whether the routine is already in the process's memory address space.
    • Dynamic linking is particularly useful for libraries.

    Overlays:

    • Keep in memory only those instructions and data that are needed at any given time.
    • Needed when process is larger than amount of memory allocated to it.
    • Implemented by user, no special support needed from operating system, programming design of overlay structure is complex

    76 What is fragmentation? Different types of fragmentation?

    Fragmentation occurs in a dynamic memory allocation system when many of the free blocks are too small to satisfy any request. 

    There are two types of Fragmentation. These are External and Internal Fragmentation:

    External Fragmentation: External fragmentation happens when a dynamic memory allocation algorithm allocates some memory and a small piece is left over that cannot be effectively used. If too much external fragmentation occurs, the amount of usable memory is drastically reduced. Total memory space exists to satisfy a request, but it is not contiguous.

    Internal Fragmentation: Internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition that is not being used. External fragmentation can be reduced by compaction:

    • Shuffle memory contents to place all free memory together in one large block.
    • Compaction is possible only if relocation is dynamic, and is done at execution time.
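Internal fragmentation is easy to quantify. Assuming a hypothetical fixed 4 KB allocation unit, a request is rounded up to whole blocks and the leftover inside the last block is wasted:

```python
# Internal fragmentation: memory is handed out in fixed-size blocks, so a
# request is rounded up and the remainder inside the last block is wasted.
BLOCK_SIZE = 4096   # assumed fixed allocation unit (e.g. one 4 KB frame)

def internal_fragmentation(request_bytes):
    blocks = -(-request_bytes // BLOCK_SIZE)   # ceiling division
    return blocks * BLOCK_SIZE - request_bytes

waste = internal_fragmentation(10_000)   # needs 3 blocks; 12288 - 10000 bytes wasted
```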

    77 Explain Segmentation with paging?

    Segments can be of different lengths, so it is harder to find a place for a segment in memory than for a page. With segmented virtual memory we get the benefits of virtual memory, but we still have to do dynamic storage allocation of physical memory. To avoid this, it is possible to combine segmentation and paging into a two-level virtual memory system: each segment descriptor points to the page table for that segment. This gives some of the advantages of paging (easy placement) with some of the advantages of segments (logical division of the program).

    78 Define Demand Paging, Page fault interrupt, and Thrashing?

    Demand Paging: Demand paging is the paging policy that a page is not read into memory until it is requested, that is, until there is a page fault on the page.

    Page fault interrupt: A page fault interrupt occurs when a memory reference is made to a page that is not in memory.The present bit in the page table entry will be found to be off by the virtual memory hardware and it will signal an interrupt.

    Thrashing: The problem of many page faults occurring in a short time is called "page thrashing"; the system spends more time paging than executing.

    79 What is readers-writers problem?

    Here we divide the processes into two types:

    • Readers (Who want to retrieve the data only)
    • Writers (Who want to retrieve as well as manipulate)

    We can give permission to a number of readers to read the same data at the same time, but a writer must be allowed exclusive access. There are two solutions to this problem:

    • No reader will be kept waiting unless a writer has already obtained permission to use the shared object. In other words, no reader should wait for other readers to complete simply because a writer is waiting.
    • Once a writer is ready, that writer performs its write as soon as possible. In other words, if a writer is waiting to access the object, no new readers may start reading.
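The first readers-writers solution can be sketched with two ordinary locks: one protects the reader count, the other is held either by the single writer or collectively by the readers. This is an illustrative sketch, not the API of any particular library:

```python
import threading

class ReadersWriterLock:
    """First readers-writers solution: readers share, writers are exclusive."""
    def __init__(self):
        self._readers = 0
        self._mutex = threading.Lock()   # protects the reader count
        self._room = threading.Lock()    # held by the writer, or by the readers

    def acquire_read(self):
        with self._mutex:
            self._readers += 1
            if self._readers == 1:       # first reader locks out writers
                self._room.acquire()

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:       # last reader lets writers in
                self._room.release()

    def acquire_write(self):
        self._room.acquire()             # exclusive access

    def release_write(self):
        self._room.release()

rw = ReadersWriterLock()
shared = []
rw.acquire_write(); shared.append(1); rw.release_write()
rw.acquire_read(); snapshot = list(shared); rw.release_read()
```

Note this variant can starve writers if readers keep arriving, which is exactly why the second (writer-preference) solution exists.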

    80 Explain demand paging?

    Demand paging is a method that loads pages into memory on demand. This method is mostly used in virtual memory. In this, a page is only brought into memory when a location on that particular page is referenced during execution. The following steps are generally followed:

    • Attempt to access the page.
    • If the page is valid (in memory) then continue processing instructions as normal.
    • If a page is invalid then a page-fault trap occurs.
    • Check if the memory reference is a valid reference to a location on secondary memory. If not, the process is terminated (illegal memory access). Otherwise, we have to page in the required page.
    • Schedule disk operation to read the desired page into main memory.
    • Restart the instruction that was interrupted by the operating system trap.
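The steps above can be simulated in a few lines; `disk` and `memory` are toy stand-ins for the backing store and the resident frames:

```python
# Toy demand-paging loop: a page is loaded from "disk" only when first touched.
disk = {0: "code", 1: "data", 2: "stack"}   # backing store, by page number
memory = {}                                  # pages currently resident
page_faults = 0

def access(page):
    global page_faults
    if page not in disk:
        raise MemoryError("illegal memory access")   # invalid reference
    if page not in memory:                           # page-fault trap
        page_faults += 1
        memory[page] = disk[page]                    # "schedule disk read"
    return memory[page]                              # restart the access

for p in [0, 1, 0, 2, 1]:
    access(p)
# Only the first touch of each page faults: 3 faults for 5 accesses.
```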

    81 What do you mean by process synchronization?

    Process synchronization is basically a way to coordinate processes that use shared resources or data. It is essential to ensure synchronized execution of cooperating processes so that data consistency is maintained. Its main purpose is to share resources without interference, using mutual exclusion. With respect to synchronization, processes are of two types:

    • Independent Process
    • Cooperative Process

    82 What is the difference between main memory and secondary memory?

    Main memory: Main memory in a computer is RAM (Random Access Memory). It is also known as primary memory or read-write memory or internal memory. The programs and data that the CPU requires during the execution of a program are stored in this memory.

    Secondary memory:Secondary memory in a computer are storage devices that can store data and programs. It is also known as external memory or additional memory or backup memory or auxiliary memory. Such storage devices are capable of storing high-volume data. Storage devices can be hard drives, USB flash drives, CDs, etc.

    Primary Memory Secondary Memory
    Data can be directly accessed by the processing unit. Firstly, data is transferred to primary memory and after then routed to the processing unit.
    It can be both volatile and non-volatile in nature. It is non-volatile in nature.
    It is more costly than secondary memory. It is more cost-effective or less costly than primary memory.
    It is temporary because data is stored temporarily. It is permanent because data is stored permanently.
    In this memory, data can be lost whenever there is a power failure. In this memory, data is stored permanently and therefore cannot be lost even in case of power failure.
    It is much faster than secondary memory and saves data that is currently used by the computer. It is slower as compared to primary memory and saves different kinds of data in different formats.
    It is accessed by the data bus. It is accessed by I/O channels.

    83 What is difference between process and thread?

    Process: It is basically a program that is currently under execution by one or more threads. It is a very important part of the modern-day OS.

    Thread: It is a path of execution that is composed of the program counter, thread id, stack, and set of registers within the process.

    Process Thread
    It is a computer program that is under execution. It is the component or entity of the process that is the smallest execution unit.
    Processes are heavyweight. Threads are lightweight.
    It has its own memory space. It uses the memory of the process they belong to.
    It is more difficult to create a process as compared to creating a thread. It is easier to create a thread as compared to creating a process.
    It requires more resources as compared to thread. It requires fewer resources as compared to processes.
    It takes more time to create and terminate a process as compared to a thread. It takes less time to create and terminate a thread as compared to a process.
    Processes usually run in separate memory spaces. Threads usually run in a shared memory space.
    It does not share data. It shares data with each other.
    It can be divided into multiple threads. It can’t be further subdivided.
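The "shared memory space" row is easy to demonstrate: threads of one process write into the same list object without any message passing (a sketch, with made-up worker logic):

```python
import threading

# Threads of one process share its address space: all workers append to
# the very same list object, with no explicit communication channel.
shared_log = []

def worker(tag):
    shared_log.append(tag)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All three threads wrote into the same list.
```

Separate processes, by contrast, would each get their own copy of `shared_log` and would need pipes, sockets, or shared-memory segments to exchange data.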

    84 Write difference between micro kernel and monolithic kernel?

    MicroKernel: It is a minimal OS that executes only important functions of OS. It only contains a near-minimum number of features and functions that are required to implement OS. 
    Example: QNX, Mac OS X, K42, etc.

    Monolithic Kernel: It is an OS architecture that supports all basic features of computer components such as resource management, memory, file, etc. 
    Example: Solaris, DOS, OpenVMS, Linux, etc. 

    MicroKernel Monolithic Kernel
    In this software or program, kernel services and user services are present in different address spaces. In this software or program, kernel services and user services are usually present in the same address space.
    It is smaller in size as compared to the monolithic kernel. It is larger in size as compared to a microkernel.
    It is easily extendible as compared to a monolithic kernel. It is harder to extend as compared to a microkernel.
    If a service crashes, it does not affect the working of the rest of the microkernel. If a service crashes, the whole system crashes in a monolithic kernel.
    It uses message queues to achieve inter-process communication. It uses signals and sockets to achieve inter-process communication.

    85 What is Kernel and write its main functions?

    The kernel is basically a computer program usually considered as a central component or module of OS. It is responsible for handling, managing, and controlling all operations of computer systems and hardware. Whenever the system starts, the kernel is loaded first and remains in the main memory. It also acts as an interface between user applications and hardware.

    Functions of Kernel:

    • It is responsible for managing all computer resources such as CPU, memory, files, processes, etc.
    • It facilitates or initiates the interaction between components of hardware and software.
    • It manages RAM memory so that all running processes and programs can work effectively and efficiently.
    • It also controls and manages all primary tasks of the OS as well as manages access and use of various peripherals connected to the computer.
    • It schedules the work done by the CPU so that the work of each user is executed as efficiently as possible.

    86 What do you mean by Sockets in OS?

    The socket in OS is generally referred to as an endpoint for IPC (Interprocess Communication). Here, the endpoint is referred to as a combination of an IP address and port number.  Sockets are used to make it easy for software developers to create network-enabled programs. It also allows communication or exchange of information between two different processes on the same or different machines. It is mostly used in client-server-based systems. 

    Types of Sockets

    There are basically four types of sockets as given below:

    • Stream Sockets
    • Datagram Sockets
    • Sequenced Packet Sockets
    • Raw Sockets
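The "endpoint for IPC" idea can be shown with `socket.socketpair`, which creates two connected stream-socket endpoints inside a single process, so no real network is needed:

```python
import socket

# A socketpair gives two connected endpoints, enough to demonstrate the
# request/reply exchange that client-server systems build on sockets.
parent, child = socket.socketpair()
parent.sendall(b"ping")        # "client" sends a request
request = child.recv(4)        # "server" receives it
child.sendall(b"pong")         # server replies
reply = parent.recv(4)         # client receives the reply
parent.close(); child.close()
```

With real network sockets the only change is that the endpoints are named by an (IP address, port) pair instead of being created pre-connected.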

    87 What do you mean by asymmetric clustering?

    Asymmetric clustering is generally a system in which one of the nodes is kept in hot-standby mode, while the rest of the nodes run different applications. Because a standby node is ready to take over whenever an active node fails, it is considered a more reliable arrangement than others.

    88 What is the main objective of multiprogramming?

    It refers to the ability to execute or perform more than one program on a single processor machine. This technique was introduced to overcome the problem of underutilization of CPU and main memory. In simple words, it is the coordination of execution of various programs simultaneously on a single processor (CPU). The main objective of multiprogramming is to have at least some processes running at all times. It simply improves the utilization of the CPU as it organizes many jobs where the CPU always has one to execute. 

    89 What is the difference between paging and segmentation?

    Paging: It is generally a memory management technique that allows OS to retrieve processes from secondary storage into main memory. It is a non-contiguous allocation technique that divides each process in the form of pages. 
    Segmentation: It is generally a memory management technique that divides processes into modules and parts of different sizes. These parts and modules are known as segments that can be allocated to process. 

    Paging Segmentation
    It is invisible to a programmer. It is visible to a programmer.
    In this, the size of pages is fixed. In this, the size of segments is not fixed.
    Procedures and data cannot be separated in paging. Procedures and data can be separated in segmentation.
    It allows the cumulative total of virtual address spaces to exceed physical main memory. It allows programs, data, and code to be broken up into independent address spaces.
    It is mostly available on CPUs and MMU chips. It is mostly available on Windows servers that may support backward compatibility, while Linux has limited support.
    It is faster for memory access as compared to segmentation. It is slower as compared to paging.
    In this, OS needs to maintain a free frame. In this, OS needs to maintain a list of holes in the main memory.
    In paging, the type of fragmentation is internal. In segmentation, the type of fragmentation is external.
    Page size is determined by the hardware and available memory. Segment size is determined by the user.

    90 What do you mean by FCFS?

    FCFS (First Come First Serve) is a type of OS scheduling algorithm that executes processes in the same order in which processes arrive. In simple words, the process that arrives first will be executed first. It is non-preemptive in nature. FCFS scheduling may cause the problem of starvation if the burst time of the first process is the longest among all the jobs. Burst time here means the time that is required in milliseconds by the process for its execution. It is also considered the easiest and simplest OS scheduling algorithm as compared to others. Implementation of FCFS is generally managed with help of the FIFO (First In First Out) queue.
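FCFS waiting and turnaround times can be computed directly. Assuming all jobs arrive at time 0 with burst times 24, 3, and 3 (a made-up workload that shows how a long first job delays the others):

```python
# FCFS: processes run in arrival order; waiting time accumulates burst times.
def fcfs(bursts):
    """Given burst times in arrival order, return (waiting, turnaround) lists."""
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)        # time spent in the ready queue so far
        clock += burst
        turnaround.append(clock)     # completion time (all arrivals at t = 0)
    return waiting, turnaround

waiting, turnaround = fcfs([24, 3, 3])
avg_wait = sum(waiting) / len(waiting)   # (0 + 24 + 27) / 3 = 17.0
```

Running the short jobs first instead would give waiting times 0, 3, 6 and an average wait of 3.0, which is the convoy effect FCFS suffers from.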