Important Operating System Interview Questions
What is a MUTEX?
ANS: A mutex is a program object that allows multiple program threads to share the same resource, such as file access, but not simultaneously. When a program is started, a mutex is created with a unique name. After this stage, any thread that needs the resource must lock the mutex, excluding other threads, while it is using the resource. The mutex is set to unlock when the data is no longer needed or the routine is finished.
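As a rough sketch in C using POSIX threads (the counter variable, the loop count, and the two worker threads are choices made for this example, not part of any standard), two threads below serialize their updates to a shared counter through a mutex:

/* mutex_demo.c -- compile with: cc mutex_demo.c -o mutex_demo -pthread */
#include <pthread.h>
#include <stdio.h>

/* Shared resource protected by the mutex. */
static long counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);   /* lock before touching the resource */
        counter++;                           /* critical section */
        pthread_mutex_unlock(&counter_lock); /* unlock when done */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);      /* always 200000 with the mutex */
    return 0;
}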
What is the difference between a ‘thread’ and a ‘process’?
ANS:
A single process can have multiple threads that share global data and address space with other threads running in the same process, and therefore can operate on the same data set easily. Processes do not share address space and a different mechanism must be used if they are to share data.
If we consider running a word processing program to be a process, then the auto-save and spell check features that occur in the background are different threads of that process which are all operating on the same data set (your document).
Process
In computing, a process is an instance of a computer program that is being sequentially executed by a computer system that has the ability to run several computer programs concurrently.
Thread
A single process may contain several threads of execution that work together as a coherent whole. One thread might, for example, handle error signals, another might send a message about the error to the user, while a third thread executes the actual task of the program.
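A minimal C sketch of the difference, assuming POSIX threads and fork() are available (the variable name and the printed values are only illustrative): the thread reads the parent's global variable directly, while the fork()ed child only modifies its own copy of the address space.

/* shared_vs_copied.c -- compile with: cc shared_vs_copied.c -o demo -pthread */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static int value = 0;   /* global data: shared by threads, copied across fork() */

static void *print_value(void *arg)
{
    printf("thread sees value = %d\n", value);   /* prints 42: same address space */
    return NULL;
}

int main(void)
{
    value = 42;

    pthread_t t;
    pthread_create(&t, NULL, print_value, NULL);
    pthread_join(t, NULL);

    pid_t pid = fork();
    if (pid == 0) {                 /* child process */
        value = 99;                 /* modifies only the child's copy */
        printf("child set value = %d\n", value);
        exit(0);
    }
    wait(NULL);
    printf("parent still sees value = %d\n", value);  /* still 42 */
    return 0;
}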
What is INODE?
ANS:
An inode is a data structure in a traditional Unix-style file system such as UFS. An inode stores all the information about a regular file, directory, or other file system object, except its data and its name.
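For illustration, the short C program below reads a file's inode metadata through the standard stat() call; the fields printed here are just a small subset chosen for the example.

/* inode_info.c -- compile with: cc inode_info.c -o inode_info */
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <path>\n", argv[0]);
        return 1;
    }
    struct stat sb;
    if (stat(argv[1], &sb) == -1) {          /* fills sb from the file's inode */
        perror("stat");
        return 1;
    }
    printf("inode number : %llu\n", (unsigned long long)sb.st_ino);
    printf("mode         : %o\n",   (unsigned int)sb.st_mode);
    printf("link count   : %lu\n",  (unsigned long)sb.st_nlink);
    printf("owner uid    : %u\n",   (unsigned int)sb.st_uid);
    printf("size (bytes) : %lld\n", (long long)sb.st_size);
    return 0;
}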
Explain the working of Virtual Memory.
ANS:
Virtual memory is a memory management technique developed for multitasking kernels. This technique virtualizes a computer architecture’s various hardware memory devices (such as RAM modules and disk storage drives), allowing a program to be designed as though:
there is only one hardware memory device and this “virtual” device acts like a RAM module.
the program has, by default, sole access to this virtual RAM module as the basis for a contiguous working memory (an address space).
Systems that employ virtual memory:
use hardware memory more efficiently than systems without virtual memory;
make the programming of applications easier by hiding fragmentation, by delegating to the kernel the burden of managing the memory hierarchy (there is no need for the program to handle overlays explicitly), and by obviating the need to relocate program code or to access memory with relative addressing.
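A small Linux-oriented C sketch of this idea (the 16 MiB region size and the MAP_ANONYMOUS flag are choices made for the example): mmap() returns a contiguous range of virtual addresses, and the kernel attaches physical frames only when the pages are first touched.

/* vm_demo.c -- compile with: cc vm_demo.c -o vm_demo */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 16 * 1024 * 1024;   /* 16 MiB of virtual address space */
    char *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    memset(region, 0xAB, len);       /* touching the pages triggers page faults
                                        that allocate physical frames on demand */
    printf("mapped %zu bytes at %p\n", len, (void *)region);
    munmap(region, len);
    return 0;
}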
How does Windows NT support multitasking?
Explain the Unix Kernel.
ANS:
A Unix kernel — the core or key components of the operating system — consists of many kernel subsystems like process management, memory management, file management, device management and network management.
Each of the subsystems has some features:
Concurrency: As Unix is a multiprocessing OS, many processes run concurrently to improve the performance of the system.
Virtual memory (VM): Memory management subsystem implements the virtual memory concept and a user need not worry about the executable program size and the RAM size.
Paging: A technique that divides physical memory into fixed-size frames and a process's address space into fixed-size pages; it eliminates external fragmentation in physical memory, at the cost of a small amount of internal fragmentation within a process's last page.
Virtual file system (VFS): The VFS is an abstraction layer that hides the complexities of the different underlying file systems; a user can use the same standard file-system calls to access different file systems.
The kernel provides these and other basic services: interrupt and trap handling, separation between user and system space, system calls, scheduling, timer and clock handling, file descriptor management.
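As a small illustration of the kernel's system-call and file-descriptor services, the C sketch below opens, reads, and writes a file through ordinary POSIX calls; the path /etc/hostname is only an example and any readable file would do.

/* syscall_demo.c -- compile with: cc syscall_demo.c -o syscall_demo */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);  /* example path; any readable file works */
    if (fd == -1) {
        perror("open");
        return 1;
    }
    char buf[256];
    ssize_t n = read(fd, buf, sizeof(buf));    /* kernel copies data into user space */
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n);  /* write to fd 1 (standard output) */
    close(fd);
    return 0;
}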
What is Concurrency? Explain Deadlock and Starvation with examples.
Ans:
Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other. The computations may be executing on multiple cores in the same chip, preemptively time-shared threads on the same processor, or executed on physically separated processors. A number of mathematical models have been developed for general concurrent computation including Petri nets, process calculi, the Parallel Random Access Machine model and the Actor model.
Deadlock:
Two processes are said to be in a deadlock situation if process A is holding resources required by process B while B is holding resources required by A. Four conditions must hold simultaneously for a deadlock to occur:
1. Mutual exclusion. Only one process may use the shared resource at a time.
2. Hold and wait. A process may hold allocated resources while awaiting assignment of others; more than one resource is involved and they are obtained one by one.
3. No preemption. Once a resource is held by a process, it cannot be forcibly removed from the process.
4. Circular wait. A closed chain of processes exists, such that each process holds at least one resource needed by the next process in the chain. In terms of graph theory, there is a directed circuit in which processes and resources alternate.
Starvation:
Starvation mostly happens in time-sharing systems when a process that needs only a small time slot has to wait for a large process to finish and release its resources, but the large process holds the resources for a very long time (possibly forever), so the small process keeps waiting. Such a situation is starvation for the small process.
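A minimal C sketch of the circular-wait condition described above, using POSIX threads (the lock names and the sleep delay are only there to make the bad interleaving likely): each thread grabs one mutex and then blocks forever waiting for the other, so the program is expected to hang.

/* deadlock_demo.c -- compile with: cc deadlock_demo.c -o deadlock_demo -pthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *thread1(void *arg)
{
    pthread_mutex_lock(&lock_a);
    sleep(1);                       /* give the other thread time to grab lock_b */
    pthread_mutex_lock(&lock_b);    /* blocks forever: thread2 holds lock_b */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *thread2(void *arg)
{
    pthread_mutex_lock(&lock_b);
    sleep(1);
    pthread_mutex_lock(&lock_a);    /* blocks forever: thread1 holds lock_a */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL);         /* never returns: the threads are deadlocked */
    pthread_join(t2, NULL);
    puts("not reached");
    return 0;
}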
What are your solution strategies for “Dining Philosophers Problem” ?
ANS:
The dining philosophers problem is an illustrative example of a common computing problem in concurrency. It is a classic multi-process synchronization problem.
Solution:
Conductor solution:
A relatively simple solution is achieved by introducing a waiter at the table. Philosophers must ask the waiter's permission before taking up any forks. Because the waiter is aware of which forks are in use, he is able to arbitrate and prevent deadlock. When four of the forks are in use, the next philosopher to request one has to wait for the waiter’s permission, which is not given until a fork has been released. The logic is kept simple by specifying that philosophers always seek to pick up their left hand fork before their right hand fork (or vice versa).
To illustrate how this works, consider that the philosophers are labelled clockwise from A to E. If A and C are eating, four forks are in use. B sits between A and C so has neither fork available, whereas D and E have one unused fork between them. Suppose D wants to eat. Were he to take up the fifth fork, deadlock becomes likely. If instead he asks the waiter and is told to wait, we can be sure that next time two forks are released there will certainly be at least one philosopher who could successfully request a pair of forks. Therefore deadlock cannot happen.
Resource hierarchy solution:
Another simple solution is achieved by assigning a partial order to the resources (the forks, in this case), and establishing the convention that all resources will be requested in order, and released in reverse order, and that no two resources unrelated by order will ever be used by a single unit of work at the same time. Here, the resources (forks) will be numbered 1 through 5, in some order, and each unit of work (philosopher) will always pick up the lower-numbered fork first, and then the higher-numbered fork, from among the two forks he plans to use. Then, he will always put down the higher numbered fork first, followed by the lower numbered fork. In this case, if four of the five philosophers simultaneously pick up their lower-numbered fork, only the highest numbered fork will remain on the table, so the fifth philosopher will not be able to pick up any fork. Moreover, only one philosopher will have access to that highest-numbered fork, so he will be able to eat using two forks. When he finishes using the forks, he will put down the highest-numbered fork first, followed by the lower-numbered fork, freeing another philosopher to grab the latter and begin eating.
This solution to the problem is the one originally proposed by Dijkstra.
While the resource hierarchy solution avoids deadlocks, it is not always practical, especially when the list of required resources is not completely known in advance. For example, if a unit of work holds resources 3 and 5 and then determines it needs resource 2, it must release 5, then 3 before acquiring 2, and then it must re-acquire 3 and 5 in that order. Computer programs that access large numbers of database records would not run efficiently if they were required to release all higher-numbered records before accessing a new record, making the method impractical for that purpose.
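A compact C sketch of the resource-hierarchy idea using POSIX mutexes as forks (five philosophers and three eating rounds are arbitrary choices for the example): each philosopher always locks the lower-numbered of its two forks first, so a circular wait cannot form.

/* fork_order.c -- compile with: cc fork_order.c -o fork_order -pthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define N 5
static pthread_mutex_t fork_lock[N];

static void *philosopher(void *arg)
{
    int id = (int)(long)arg;
    int left = id, right = (id + 1) % N;
    int first  = left < right ? left : right;   /* lower-numbered fork */
    int second = left < right ? right : left;   /* higher-numbered fork */

    for (int round = 0; round < 3; round++) {
        pthread_mutex_lock(&fork_lock[first]);
        pthread_mutex_lock(&fork_lock[second]);
        printf("philosopher %d eating with forks %d and %d\n", id, first, second);
        usleep(1000);
        pthread_mutex_unlock(&fork_lock[second]);   /* release in reverse order */
        pthread_mutex_unlock(&fork_lock[first]);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    for (int i = 0; i < N; i++)
        pthread_mutex_init(&fork_lock[i], NULL);
    for (int i = 0; i < N; i++)
        pthread_create(&t[i], NULL, philosopher, (void *)(long)i);
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}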
Monitor solution:
The example below shows a solution where the forks are not represented explicitly. Philosophers can eat if neither of their neighbors are eating. This is comparable to a system where philosophers that cannot get the second fork must put down the first fork before they try again.
In the absence of locks associated with the forks, philosophers must ensure that the decision to begin eating is not based on stale information about the state of the neighbors. E.g. if philosopher B sees that A is not eating, then turns and looks at C, A could begin eating while B looks at C. This solution avoids this problem by using a single mutual exclusion lock. This lock is not associated with the forks but with the decision procedures that can change the states of the philosophers. This is ensured by the monitor. The procedures test, pickup and putdown are local to the monitor and share a mutual exclusion lock. Notice that philosophers wanting to eat do not hold a fork. When the monitor allows a philosopher who wants to eat to continue, the philosopher will reacquire the first fork before picking up the now available second fork. When done eating, the philosopher will signal to the monitor that both forks are now available.
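The sketch below is one possible C rendering of this monitor, using a single pthread mutex as the monitor lock and one condition variable per philosopher (the state names and the number of eating rounds are choices made for the example).

/* dp_monitor.c -- compile with: cc dp_monitor.c -o dp_monitor -pthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define N 5
#define LEFT(i)  (((i) + N - 1) % N)
#define RIGHT(i) (((i) + 1) % N)

enum state { THINKING, HUNGRY, EATING };

static enum state st[N];
static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  can_eat[N];            /* one condition per philosopher */

/* Let philosopher i eat if it is hungry and neither neighbour is eating. */
static void test(int i)
{
    if (st[i] == HUNGRY && st[LEFT(i)] != EATING && st[RIGHT(i)] != EATING) {
        st[i] = EATING;
        pthread_cond_signal(&can_eat[i]);
    }
}

static void pickup(int i)
{
    pthread_mutex_lock(&monitor_lock);
    st[i] = HUNGRY;
    test(i);
    while (st[i] != EATING)                   /* wait until the monitor lets us eat */
        pthread_cond_wait(&can_eat[i], &monitor_lock);
    pthread_mutex_unlock(&monitor_lock);
}

static void putdown(int i)
{
    pthread_mutex_lock(&monitor_lock);
    st[i] = THINKING;
    test(LEFT(i));                            /* a neighbour may now be able to eat */
    test(RIGHT(i));
    pthread_mutex_unlock(&monitor_lock);
}

static void *philosopher(void *arg)
{
    int i = (int)(long)arg;
    for (int round = 0; round < 3; round++) {
        usleep(1000);                         /* think */
        pickup(i);
        printf("philosopher %d eating\n", i);
        usleep(1000);                         /* eat */
        putdown(i);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    for (int i = 0; i < N; i++) {
        st[i] = THINKING;
        pthread_cond_init(&can_eat[i], NULL);
    }
    for (int i = 0; i < N; i++)
        pthread_create(&t[i], NULL, philosopher, (void *)(long)i);
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}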
Chandy / Misra solution:
K. M. Chandy and J. Misra proposed a different solution to the dining philosophers problem to allow for arbitrary agents (numbered P1, …, Pn) to contend for an arbitrary number of resources, unlike Dijkstra’s solution. It is also completely distributed and requires no central authority after initialization.
For every pair of philosophers contending for a resource, create a fork and give it to the philosopher with the lower ID. Each fork can either be dirty or clean. Initially, all forks are dirty.
When a philosopher wants to use a set of resources (i.e. eat), he must obtain the forks from his contending neighbors. For all such forks he does not have, he sends a request message.
When a philosopher with a fork receives a request message, he keeps the fork if it is clean, but gives it up when it is dirty. If he sends the fork over, he cleans the fork before doing so.
After a philosopher is done eating, all his forks become dirty. If another philosopher had previously requested one of the forks, he cleans the fork and sends it.
This solution also allows for a large degree of concurrency, and will solve an arbitrarily large problem.
Explain Memory Partitioning, Paging, Segmentation.
ANS:
Virtual memory is a computer system technique which gives an application program the impression that it has contiguous working memory, while in fact it is physically fragmented and may even overflow onto disk storage. Systems which use this technique make the programming of large applications easier and use real physical memory (e.g. RAM) more efficiently than those without virtual memory.
Paging is the process of saving inactive virtual memory pages to disk and restoring them to real memory when required.
Some systems, such as the Burroughs large systems, do not use paging to implement virtual memory. Instead, they use segmentation, so that an application’s virtual address space is divided into variable-length segments. A virtual address consists of a segment number and an offset within the segment.
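As a tiny worked example in C (a 4 KiB page size and the sample address are assumptions made for illustration), a virtual address under paging is split into its page number and offset like this:

/* vaddr_split.c -- compile with: cc vaddr_split.c -o vaddr_split */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u                     /* assumed page size: 4 KiB */

int main(void)
{
    uint32_t vaddr  = 0x00012ABC;           /* arbitrary example virtual address */
    uint32_t page   = vaddr / PAGE_SIZE;    /* index into the page table */
    uint32_t offset = vaddr % PAGE_SIZE;    /* byte offset inside that page */
    printf("vaddr 0x%08X -> page %u, offset 0x%03X\n", vaddr, page, offset);
    return 0;
}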
Explain Scheduling.
ANS:
The way in which processes are allocated CPU time is called scheduling.
There are different scheduling algorithms (a simple FCFS sketch follows this list):
1. FCFS (first come, first served)
2. SJF (shortest job first), also known as shortest job next
3. Round robin
4. Multilevel feedback queue scheduling
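The sketch below is a simple C illustration of FCFS with made-up burst times: each process starts only after all earlier arrivals have finished, and the program prints the resulting waiting and turnaround times.

/* fcfs.c -- compile with: cc fcfs.c -o fcfs */
#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3};           /* example CPU burst times (ms) */
    int n = sizeof(burst) / sizeof(burst[0]);
    int wait = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int turnaround = wait + burst[i];
        printf("P%d: waiting %2d ms, turnaround %2d ms\n", i + 1, wait, turnaround);
        total_wait += wait;
        total_tat  += turnaround;
        wait += burst[i];               /* next process starts after this one */
    }
    printf("average waiting time    = %.2f ms\n", (double)total_wait / n);
    printf("average turnaround time = %.2f ms\n", (double)total_tat / n);
    return 0;
}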
Operating System Security.
ANS:
Operating systems provide the fundamental mechanisms for securing computer processing, including user authentication, memory protection between processes, file permissions and access control, and auditing of security-relevant events.
What is Semaphore?
ANS:
A semaphore is a protected variable or abstract data type that provides a simple but useful abstraction for controlling access by multiple processes to a common resource in a parallel programming environment.
A useful way to think of a semaphore is as a record of how many units of a particular resource are available, coupled with operations to safely (i.e. without race conditions) adjust that record as units are required or become free, and if necessary wait until a unit of the resource becomes available. Semaphores are a useful tool in the prevention of race conditions and deadlocks; however, their use is by no means a guarantee that a program is free from these problems. Semaphores which allow an arbitrary resource count are called counting semaphores, whilst semaphores which are restricted to the values 0 and 1 (or locked/unlocked, unavailable/available) are called binary semaphores.
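A short C sketch using POSIX unnamed semaphores on Linux (the counts of three slots and five threads are arbitrary for the example): the counting semaphore lets at most three threads use the resource at once, and the rest wait until a unit is released.

/* sem_demo.c -- compile with: cc sem_demo.c -o sem_demo -pthread */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define THREADS 5
#define SLOTS   3                       /* number of units of the resource */

static sem_t slots;

static void *user(void *arg)
{
    int id = (int)(long)arg;
    sem_wait(&slots);                   /* P(): take a unit, or wait for one */
    printf("thread %d using a slot\n", id);
    sleep(1);                           /* pretend to use the resource */
    printf("thread %d releasing its slot\n", id);
    sem_post(&slots);                   /* V(): give the unit back */
    return NULL;
}

int main(void)
{
    pthread_t t[THREADS];
    sem_init(&slots, 0, SLOTS);         /* counting semaphore, initial value 3 */
    for (int i = 0; i < THREADS; i++)
        pthread_create(&t[i], NULL, user, (void *)(long)i);
    for (int i = 0; i < THREADS; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}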
Explain the following file systems: NTFS, Macintosh (HPFS), FAT.
ANS:
NTFS:
NTFS (New Technology File System) is the standard file system of Windows NT, including its later versions Windows 2000, Windows XP, Windows Server 2003, Windows Server 2008, Windows Vista, and Windows 7.
NTFS supersedes the FAT file system as the preferred file system for Microsoft’s Windows operating systems. NTFS has several improvements over FAT and HPFS (High Performance File System) such as improved support for metadata and the use of advanced data structures to improve performance, reliability, and disk space utilization, plus additional extensions such as security access control lists (ACL) and file system journaling.
FAT:
File Allocation Table (FAT) is a computer file system architecture now widely used on many computer systems and most memory cards, such as those used with digital cameras. FAT file systems are commonly found on floppy disks, flash memory cards, digital cameras, and many other portable devices because of their relative simplicity. The FAT file system is relatively straightforward technically and is supported by virtually all existing operating systems for personal computers. This makes it a useful format for solid-state memory cards and a convenient way to share data between operating systems.
What are the different process states?
ANS:
The following typical process states are possible on computer systems of all kinds.
Created: When a process is first created, it occupies the “created” or “new” state. In this state, the process awaits admission to the “ready” state. This admission will be approved or delayed by a long-term, or admission, scheduler.
Ready or Waiting: A “ready” or “waiting” process has been loaded into main memory and is awaiting execution on a CPU (to be context switched onto the CPU by the dispatcher, or short-term scheduler). There may be many “ready” processes at any one point of the system’s execution – for example, in a one processor system, only one process can be executing at any one time, and all other “concurrently executing” processes will be waiting for execution.
A ready queue or run queue is used in computer scheduling. Modern computers are capable of running many different programs or processes at the same time. However, the CPU is only capable of handling one process at a time. Processes that are ready for the CPU are kept in a queue for “ready” processes. Other processes that are waiting for an event to occur, such as loading information from a hard drive or waiting on an internet connection, are not in the ready queue. Note: Ready queue is created inside the main memory.
Running: A process moves into the running state when it is chosen for execution. The process’s instructions are executed by one of the CPUs (or cores) of the system. There is at most one running process per CPU or core.
Blocked: A process is blocked when it is waiting for some event (such as the completion of an I/O operation or the arrival of a signal).
Terminated: A process may be terminated, either from the “running” state by completing its execution or by explicitly being killed. In either of these cases, the process moves to the “terminated” state. If a process is not removed from memory after entering this state, this state may also be called zombie.
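As a small C illustration of the terminated/zombie state (the 10-second delay is only there to give time to observe it), the child below exits immediately while the parent postpones wait(), so the child lingers as a zombie until it is reaped.

/* zombie_demo.c -- compile with: cc zombie_demo.c -o zombie_demo */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        printf("child %d exiting\n", (int)getpid());
        exit(0);                        /* child terminates immediately */
    }
    printf("child %d is now a zombie; run `ps -l` in another shell\n", (int)pid);
    sleep(10);                          /* parent has not reaped the child yet */
    wait(NULL);                         /* reaping removes the zombie entry */
    return 0;
}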
Define and explain COM?
ANS:
COM (Component Object Model) technology in the Microsoft Windows-family of Operating Systems enables software components to communicate. COM is used by developers to create re-usable software components, link components together to build applications, and take advantage of Windows services.
What is Marshalling?
ANS:
In heraldry, marshalling is the arrangement of several coats of arms to form a single composition. In the military, marshalling is the gathering and ordering of military forces in preparation for battle.
In computer programming, marshalling is the process of gathering data from one or more applications or non-contiguous sources in computer storage, putting the data pieces into a message buffer, and organizing or converting the data into a format that is prescribed for a particular receiver or programming interface.
Marshalling is usually required when passing the output parameters of a program written in one language as input to a program written in another language.
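A small C sketch of the idea (the record layout and field names are invented purely for illustration): a record is packed into a byte buffer in a prescribed format, here network byte order, so that a receiver written in any language can unpack it.

/* marshal_demo.c -- compile with: cc marshal_demo.c -o marshal_demo */
#include <arpa/inet.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>

struct record {
    uint32_t id;
    uint16_t flags;
};

/* Pack the record into buf; returns the number of bytes written. */
static size_t marshal(const struct record *r, unsigned char *buf)
{
    uint32_t id    = htonl(r->id);      /* convert fields to network byte order */
    uint16_t flags = htons(r->flags);
    memcpy(buf,     &id,    sizeof(id));
    memcpy(buf + 4, &flags, sizeof(flags));
    return 6;
}

int main(void)
{
    struct record r = { 1234, 0x0001 };
    unsigned char buf[6];
    size_t n = marshal(&r, buf);
    for (size_t i = 0; i < n; i++)
        printf("%02X ", buf[i]);        /* prints: 00 00 04 D2 00 01 */
    printf("\n");
    return 0;
}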
Difference – Loading and Linking ?
ANS:
Loading: A loader brings an executable object into memory and relocates it. Relocation is a modification of the object so that it can be loaded at an address different from 0.
Linking: A linker combines several object modules into one coherent module. This module may be either an executable object or a library of functions.
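As a Linux-oriented C illustration of loading and symbol resolution at run time (the library name libm.so.6 is platform specific), the sketch below loads the math library with dlopen() and resolves cos() with dlsym(); this is the dynamic form of the loading and linking steps described above.

/* dyn_load.c -- compile with: cc dyn_load.c -o dyn_load -ldl */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *handle = dlopen("libm.so.6", RTLD_LAZY);   /* load the math library */
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }
    printf("cos(0.0) = %f\n", cosine(0.0));          /* prints 1.000000 */
    dlclose(handle);
    return 0;
}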
It’s nice.thank u