Wednesday 5 February 2020

User Operating System Interface


User Interface (UI)

  • A User Interface (UI) enables communication by serving as an interface between an application and its user. 
  • For effective communication, each program, including the operating system, comes with a different UI.
  • The two basic functions of an application's user interface are to take input from the user and to deliver output to the user. 
  • The types of inputs the UI takes and the types of output the UI provides may vary from application to application.
  • A user interface of any operating system can be classified into one of the following types:
  1. Graphical user interface (GUI).
  2. Command line user interface (CLI).

Graphical User Interface (GUI)

  • The graphical user interface is a type of UI that allows users to interact with the operating system via point-and-click operations. A GUI contains many icons representing objects such as files, directories, and devices.
  • The graphical icons provided in the UI can be manipulated using a suitable pointing device such as a mouse, trackball, touch screen, or light pen. 
  • Other input devices, such as the keyboard, can also be used to manipulate these graphical icons. 

Some advantages of GUI based operating system

  • The GUI interface is easy to understand, and even new users can operate it on their own.
  • The GUI interface visually identifies and confirms any type of activity that users perform. 
  • For example, if the user deletes a file in the Windows operating system, the operating system asks for confirmation before deleting it.
  • The GUI interface enables users to perform a number of tasks at the same time. This feature of the operating system is also known as multitasking.

Command Line Interface (CLI)

  • Command line interface is a type of UI that allows users to communicate with the OS by issuing certain specific commands. 
  • You control what's happening in operating systems like DOS and Unix, and in many text-based or character mode programs by typing commands on a command line. 
  • The command line is simply the line where you type the commands on the keyboard. 
  • The only way to control an operating system or a program that uses a command line interface is to type commands; you don't get menus, dialog boxes, or buttons.
  • Both UNIX and MS-DOS use command line interfaces.
  • Command line user interfaces are extremely difficult for new users, because they do not usually list all available commands (which need to be memorized) and any misspelling of a command would prevent it from being executed, often resulting in a confusing error message. 
  • To perform a task within this interface, the user must type a command on the command line. After the Enter key is pressed, the command is received by the command line interpreter.
  • The command line interpreter (often called a shell) is the software program responsible for receiving and executing user-issued commands; a minimal sketch of such an interpreter appears after this list.
  • After the command is processed, the command line interpreter displays the output of the command and then shows the command prompt again.
  • The drawback of the CLI is that to communicate with the operating system, the user needs to remember a great deal. 
  • Such interfaces are therefore not considered very user friendly.
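To make the idea of a command line interpreter concrete, here is a minimal sketch of one in C. This is only an illustration, not the code of any real shell: it shows a prompt, reads a line from the keyboard, splits off the command name and one optional argument, and asks the operating system to run it.

//minishell.c  (illustrative sketch of a command line interpreter)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
   char line[256];
   while (1) {
      printf("$ ");                                           /* display the command prompt */
      if (fgets(line, sizeof(line), stdin) == NULL) break;    /* end of input */
      line[strcspn(line, "\n")] = '\0';                       /* strip the trailing newline */
      if (strlen(line) == 0) continue;                        /* empty line: prompt again */
      if (strcmp(line, "exit") == 0) break;                   /* built-in command to quit */

      char *cmd = strtok(line, " ");                          /* command name */
      char *arg = strtok(NULL, " ");                          /* one optional argument */

      pid_t pid = fork();                                     /* create a child process */
      if (pid == 0) {
         execlp(cmd, cmd, arg, (char *)0);                    /* replace the child with the command */
         perror(cmd);                                         /* reached only if the command failed */
         exit(1);
      }
      wait(NULL);                                             /* parent waits, then prompts again */
   }
   return 0;
}

A real shell such as bash does far more (pipes, quoting, job control, many arguments), but the loop above mirrors the cycle described in this section: show a prompt, receive a command, execute it, and display the prompt again.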

Operating System Services.

  • An Operating System is an interface between a user and computer hardware. An operating system is a software which performs all the basic tasks like file management, memory management, process management, input and output devices management, and much more.
  • Let's discuss its services in detail:


  • Memory management.
  • Process Management.
  • File Management.
  • Program Execution.
  • I/O device Management.
  • Resource allocation.
  • Secondary-Storage Management.
  • Network Management.
  • Error Detection.
  • Protection (User Authentication).

1. Memory Management

  • Memory management is the most important part of an operating system and manages both the primary (known as the main memory) and the secondary memory directly. 
  • Main memory provides storage for a program which can be directly accessed by the CPU for its execution. 
  • Therefore, the primary memory management function for a program to be executed is to load the program into main memory.
  • Memory management mainly performs the following functions:
  1. Keep track of which parts of memory are currently being used and by whom.
  2. Decide which processes should be loaded into memory when the memory space is free.
  3. Allocate and de-allocate memory spaces as and when required.
  • The operating system loads the instructions into the main memory and then picks up those instructions and makes a queue to get CPU time to execute them. 
  • The memory manager monitors which memory locations are open, which are to be allocated or de-allocated.
  • It also makes decisions about which pages to swap between the main memory and the secondary memory. 
  • This operation is referred to as virtual memory management, which increases the amount of memory available to each process.
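As a rough illustration of the first two functions above (keeping track of which parts of memory are in use, and allocating and de-allocating space), here is a toy fixed-size block allocator in C. The pool size and owner ids are made up for the example; a real memory manager works on physical frames and per-process page tables, which this sketch does not attempt to model.

//mem_sketch.c  (toy illustration of tracking used and free memory blocks)
#include <stdio.h>

#define NUM_BLOCKS 8          /* hypothetical pool of 8 fixed-size blocks */
static int used[NUM_BLOCKS];  /* 0 = free, otherwise the id of the owner */

/* allocate one block to the process with the given id, return its index */
int alloc_block(int owner) {
   for (int i = 0; i < NUM_BLOCKS; i++) {
      if (used[i] == 0) { used[i] = owner; return i; }
   }
   return -1;                 /* no free block available */
}

/* de-allocate a block so it can be reused */
void free_block(int index) {
   if (index >= 0 && index < NUM_BLOCKS) used[index] = 0;
}

int main() {
   int a = alloc_block(1);    /* "process 1" gets a block */
   int b = alloc_block(2);    /* "process 2" gets a block */
   printf("process 1 holds block %d, process 2 holds block %d\n", a, b);
   free_block(a);             /* process 1 releases its block */
   printf("block %d is free again: %s\n", a, used[a] == 0 ? "yes" : "no");
   return 0;
}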

2. Process Management

  • In multiprogramming, the operating system allows more than one program (or process) to run simultaneously. 
  • Process management is the part of an operating system that manages processes in such a way as to improve system performance.
  • The operating system also deals with other kinds of activities, including user programs and system programs such as printer spooling, virtual memory management, and swapping.
  • A process is an activity which needs certain resources to fulfil its task. Typical machine resources include CPU time, main memory, and I/O devices. 
  • These resources are allocated to processes, and the decision about which process should receive a resource is taken by process management using process scheduling algorithms.
  • It should be remembered that a process is not a program. A process is only one instance of a program in execution; many processes may run the same program.
  • The five major activities of an operating system in regard to process management are:
  1. Creation and deletion of user and system processes.
  2. Suspension and re-activation of processes.
  3. Process synchronization.
  4. Process communication.
  5. Deadlock handling.
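The first two activities in the list above can be illustrated with a small C sketch using the standard UNIX calls fork() and wait(); the fork() system call itself is covered in more detail in the System Calls notes later in this document. This is only an illustration of the idea, not production code.

//proc_sketch.c  (parent creates a child process and waits for it to finish)
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
   pid_t pid = fork();              /* process creation */
   if (pid == 0) {
      printf("child: doing some work\n");
      sleep(1);                     /* pretend to work for one second */
      exit(0);                      /* process deletion (child terminates) */
   }
   printf("parent: waiting for the child\n");
   wait(NULL);                      /* parent is suspended until the child exits */
   printf("parent: child has finished, resuming\n");
   return 0;
}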

3. File Management

  • A file is a collection of related information defined by its creator. Computers can store files on disks (secondary storage), which provide long-term storage. 
  • Magnetic tape, magnetic disk, and optical disk are some examples of storage media. Each of these media has its own properties, such as speed, capacity, data transfer rate, and access method.
  • Typically a file system is organized into directories to make its use simple. These directories may include files, as well as other directories. 
  • Each file system consists of directories and sub-directories in this way. Microsoft Windows separates its directories with a backslash and its file names are not case sensitive, while operating systems derived from Unix (including Linux) use the forward slash and their file names are case sensitive.
  • The main file management activities of an operating system are the creation and deletion of files and folders, support for manipulating files and folders, mapping of files onto secondary storage, and file backup.
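As a small sketch of the creation and deletion activities just listed, the C program below creates a file, writes to it, and then removes it again using only the standard library. The file name data.txt is just an example.

//file_sketch.c  (create, write, and delete a file)
#include <stdio.h>

int main() {
   FILE *fp = fopen("data.txt", "w");   /* create the file (example name) */
   if (fp == NULL) { perror("fopen"); return 1; }
   fprintf(fp, "hello, file system\n"); /* write some data */
   fclose(fp);                          /* close so the data reaches the disk */

   if (remove("data.txt") == 0)         /* delete the file again */
      printf("data.txt created and deleted\n");
   return 0;
}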

4. Program Execution

  • The purpose of the computer system is to enable efficient execution of programs by users. The operating system provides the user with an environment in which those programs can run smoothly. 
  • The user does not have to worry about the allocation or de-allocation of memory or anything else, because the operating system takes care of these things.
  • To run a program, it must first be loaded into RAM and then assigned CPU time for execution. 
  • This function is performed by operating system for user convenience. It also performs other significant tasks such as memory allocation and de-allocation, CPU scheduling etc.

5. I/O Device Management

  • Management of input/output devices is the part of an operating system that provides an environment for better interaction between the system and I/O devices (such as printers, scanners, and tape drives). 
  • The operating system requires certain special programs, known as device drivers, to communicate efficiently with the I/O devices. 
  • A device driver is a specific type of software designed to allow the operating system to communicate with a hardware device. 
  • It usually provides an interface for communicating with the device via the bus or communication subsystem to which the hardware is connected.

6. Resource Allocation

  • In a multitasking environment, when multiple jobs are running at a time, it is the responsibility of the operating system to allocate the required resources (such as CPU, main memory, tape drives, or secondary storage) to each process for better utilization. For this purpose, various types of algorithms are implemented, such as process scheduling, CPU scheduling, and disk scheduling.

7. Secondary Storage Management


  • A computer system has multiple storage levels including main storage, secondary storage, and cache storage. 
  • But it is not possible to use primary storage and cache storage as permanent storage because these are volatile memories and their data is lost when power is turned off. 
  • The main memory is too small to accommodate all data and programs permanently. So the computer system has to provide secondary storage to back up the main memory. Secondary storage includes magnetic tapes, disk drives, and other media.
  • The secondary storage management provides an easy access to the file and folders placed on secondary storage using several disk scheduling algorithms.
  • The four major activities of an operating system in regard to secondary storage management are:
  1. Managing the free space available on the secondary-storage device.
  2. Allocation of storage space.
  3. Scheduling the requests for memory access.
  4. Creation and deletion of files.

8. Network Management

  • When multiple computers are in a network or in a distributed architecture, an operating system functions as a network resource manager.
  • Processors communicate with each other through communication lines, which form a network. 
  • The design of the communication network must consider routing and connection methods, as well as contention and security issues.
  • Many of today's networks are focused on configuration of client-servers. 
  • A client is a program running on the local machine that requests a service from a server, while a server is a program running on a remote machine that provides service to clients by responding to their requests.
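As a sketch of the client side of this client-server model, the C fragment below asks the operating system for a TCP socket and connects to a server. The address 127.0.0.1 and port 8080 are placeholders, not values from the text, and a server must already be listening there for the connection to succeed.

//client_sketch.c  (minimal TCP client; address and port are placeholders)
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main() {
   int fd = socket(AF_INET, SOCK_STREAM, 0);     /* ask the OS for a TCP socket */
   struct sockaddr_in addr;
   memset(&addr, 0, sizeof(addr));
   addr.sin_family = AF_INET;
   addr.sin_port = htons(8080);                  /* placeholder port */
   inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

   if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
      const char *request = "hello\n";
      write(fd, request, strlen(request));       /* client sends a request */
      char reply[256];
      ssize_t n = read(fd, reply, sizeof(reply) - 1);  /* and reads the reply */
      if (n > 0) { reply[n] = '\0'; printf("server replied: %s", reply); }
   } else {
      perror("connect");
   }
   close(fd);
   return 0;
}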

9. Error Detection

  • Operating system also addresses problems with the hardware. The operating system continuously tracks the system to detect the errors and correct those errors to prevent hardware problems. 
  • The main function of the operating system is to detect the errors on hard disk such as bad sectors, memory overflow and I / O device-related errors. 
  • After the errors are detected, the operating system takes appropriate action to ensure consistent computing.
  • User programs cannot handle this service of error detection and correction, because it involves monitoring the entire computing process. 
  • Such functions are too critical to be handed over to user programs. If given these privileges, a user program could interfere with the operation of the operating system.

10. Protection (Authentication)

  • Protection is the most demanding feature of an operating system. 
  • Protection is the ability to authenticate users and to prevent illegal access to data as well as to the system.
  • Operating system provides various data and network security services through passwords, file permissions, and data encryption. 
  • Computers are generally connected via a network or internet connection, allowing users to share their files and access websites and transfer their files over the network. A high level of security is required for these cases.
  • Various software firewalls operate at the operating system level. A firewall is set up to allow or deny traffic to a service that runs on top of the operating system. 
  • Therefore, by installing a firewall, one can keep running services such as telnet or ftp without worrying about Internet threats, because the firewall will reject any traffic trying to connect to those services on their ports.
  • If a computer system has multiple users and allows multiple processes to be executed simultaneously, then the different processes must be protected against each other's activities. 
  • Protection refers to mechanism for controlling access to the resources defined by a computer system by programs, processes, or users.
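File permissions, one of the protection mechanisms mentioned above, can be set from a program as well as from the shell. The sketch below restricts a file so that only its owner can read and write it; the file name secret.txt is only an example.

//protect_sketch.c  (restrict a file to its owner using file permissions)
#include <stdio.h>
#include <sys/stat.h>

int main() {
   /* 0600 means read and write for the owner, no access for anyone else */
   if (chmod("secret.txt", 0600) != 0) {
      perror("chmod");    /* fails if the file does not exist or we do not own it */
      return 1;
   }
   printf("secret.txt is now readable and writable by its owner only\n");
   return 0;
}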



Monday 3 February 2020

Introduction to Operating Systems

What is an Operating System?

  • An operating system is system software which is required in order to run application programs and utilities. It manages the interaction between application programs and all the hardware of the computer. 
  • Examples of operating systems are UNIX, MS-DOS, Windows NT/2000, OS/2, MS Windows 98/XP/Vista, Android, and Mac OS.
Different Operating Systems

  • A computer system has many resources (hardware and software) which are required to complete different tasks. 
  • Input / output equipment, memory, file storage space, CPU etc. are the most frequently needed resources. 
  • The operating system serves as manager of the above-mentioned resources and assigns them to specific programs and users whenever appropriate to perform a specific function. 
  • The operating system works as a resource manager, i.e., it manages the resources of a computer system internally and externally. The resources are processors, memory, files, and I/O devices. 
  • In simple terms, an operating system works as an interface between the user and the hardware/machine.
  • An operating system or OS is a software program that allows the computer hardware to communicate with computer software and work with it. 
  • A computer and its software programs would be useless without an operating system.
  • When computers were first introduced, users interacted with them through a command line interface, which required typed commands. 
  • But today, almost every computer uses an operating system with a GUI (Graphical User Interface), which is much easier to use and operate.

Examples of Operating Systems

  • Microsoft Windows 10 - PC and IBM compatible operating system. Microsoft Windows is the most commonly used operating system.
  • Apple macOS - Apple Mac operating system. Today, the only Apple computer operating system is macOS.
  • iOS - Operating system used with the Apple iPhone and iPads.
  • Chrome OS - Google operating system used with Chromebooks.
  • Oxygen OS - OnePlus' proprietary operating system.
  • Ubuntu Linux - A popular variant of Linux used with PC and IBM compatible computers.
  • Google Android - Operating system used with Android compatible phones and tablets.

Two views on Operating System


  1. User's View
  2. System View
User View 
  • In this view, the system is designed for one user to monopolize its resources, in order to maximize the work the user is doing. The operating system is designed primarily for ease of use, with some attention paid to performance and none paid to resource utilization.
System View 
  • A computer system is made up of many resources, hardware and software, that have to be handled efficiently. The operating system acts as the resource manager: it decides between competing demands, manages program execution, and so on.

Operating System Management Tasks

  • Processor Management which involves putting tasks in order and breaking them into manageable pieces before they go to the CPU.
  • Memory Management which coordinates data to and from RAM (random-access memory) and determines the necessity for virtual memory.
  • Device Management which provides interface between connected devices.
  • Storage Management which directs permanent data storage.
  • Application Management which allows standard communication between software and your computer.
  • User interface which allows you to communicate with your computer.

Saturday 28 April 2018

Study of Processor principle and cooling system in computer.


The Processor
The CPU is also one of the most expensive components on the motherboard. It is a very delicate device and sensitive to ESD, thus it should be handled with care. The processor itself is a flat plate of silicon made up of millions of transistors etched on to the silicon plate to form a huge computer logic circuit.
A ceramic cover is placed over the micro-circuit to protect it and to conduct heat away to the heat sink. This protective ceramic covering will have print information of the processor type, speed, and other details.
Processor Manufacturers
Though Intel is the best known company in processor manufacturing, we have a wide range of processors from other manufacturers such as:
·         Advanced Micro Devices (AMD)
·         VIA
·         Integrated Device Technology (IDT) - acquired by VIA
Each of these companies offers competitively priced processor chips with performance comparable to Intel processors. They also offer compatibility with Microsoft operating system software. In terms of processor technology advancement, these manufacturers are not left behind either.
How the Processor Operates
The computer processor fetches, decodes, and executes program instructions. A computer program consists of a series of steps called instructions which tell the computer what to do. Each instruction can be a basic arithmetic calculation or a logic operation. Before the program can be executed it is loaded into the working space (memory).
It is the job of the microprocessor, which is controlling the computer to fetch a program instruction from the memory, decode the instruction and then carry out any action that might be needed which is the execution process. It is the responsibility of the processor inside the computer to carry out the fetch-decode-execute cycle over and over again operating from the instructions it obtains from the main memory.
This fetch - decode - execute cycle is often referred to as the fetch - execute cycle.
The CPU uses a timing signal to be able to fetch and execute instructions. The timing signal is provided by the system clock. The clock speed is measured in Hz (cycles per second). In early processors, speed was measured in megahertz (MHz), i.e., one million cycles per second. Most of the computers we have today operate in the gigahertz (GHz) range. The clock speed varies from one computer processor to another.
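To make the fetch-decode-execute cycle concrete, here is a toy interpreter in C for an invented three-instruction machine. The instruction set (LOAD, ADD, HALT) is made up purely for illustration; a real CPU implements this cycle in hardware, not in software.

//fde_sketch.c  (toy fetch-decode-execute loop for an invented machine)
#include <stdio.h>

enum { LOAD = 1, ADD = 2, HALT = 3 };   /* made-up opcodes */

int main() {
   /* a tiny "program" in memory: LOAD 5, ADD 7, HALT */
   int memory[] = { LOAD, 5, ADD, 7, HALT };
   int pc = 0;          /* program counter: address of the next instruction */
   int acc = 0;         /* accumulator register */
   int running = 1;

   while (running) {
      int instruction = memory[pc++];              /* FETCH the next instruction */
      switch (instruction) {                        /* DECODE it */
         case LOAD: acc = memory[pc++]; break;      /* EXECUTE: load a value   */
         case ADD:  acc += memory[pc++]; break;     /* EXECUTE: add a value    */
         case HALT: running = 0; break;             /* EXECUTE: stop the cycle */
      }
   }
   printf("accumulator = %d\n", acc);               /* prints 12 */
   return 0;
}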
Key Components Found Inside a Processor (CPU)

Arithmetic and Logic Unit (ALU)
This is the brain of the microprocessor. The ALU performs basic arithmetic calculations like adding, subtracting, multiplying and dividing figures; it also performs logical operations like comparison of figures.

Control Unit (CU)

As the name suggests, this component controls all the functions that take place inside the processor itself. It instructs the ALU on which arithmetic or logical operation is to be performed. It acts under the direction of the system clock and sorts out all the internal data paths inside the processor to make sure that data comes from the right place and goes to the right place.

Register

A register, also sometimes known as an accumulator, is a temporary storage position where data coming from RAM on its way to the processor for execution, and data coming from the processor after processing, is held. Thus a register is a local storage area within the processor that is used to hold data that is being worked on by the processor.

Internal Registers (Internal Data Bus)

This is the bus connecting the internal components of the processor to one another. The size of the internal registers indicates how much information the processor can operate on at one time and how it moves data around internally within the chip.

External Data Path

This is the path (bus) used to fetch data from memory to the processor. In some cases the internal and external data buses are the same bit-size but in others, the external data bus can be either narrower or wider. The external data path is normally not as wide as the internal data path.

Address Lines

The address lines are used to specify the exact location in memory where data can be found. The standard PC is a binary device. Using the memory address bus, CPUs send out location information on their address lines (or control lines) and these address lines are routed to every other major component of the computer (memory, ROM, expansion bus etc).

Computer cooling systems, definition and functions

The phrase cooling in computing generally refers to the dissipation of large amounts of heat, which is created while a computer system is running. Heat is generated inside the computer tower by various hardware such as the CPU, video card or even the hard drive. The objective of cooling is to maintain an optimal operating temperature and this can be achieved through various methods including the introduction of heat sinks and fans. Other cooling methods include liquid cooling and software cooling.

Computer cooling is required to remove the waste heat produced by computer components, to keep components within their safe operating temperature limits. Varied cooling methods are used either to achieve greater processor performance or to reduce the noise caused by cooling fans.
Most computers are equipped with the cheapest possible cooling systems: one or two noisy fans in the PC case, and a processor fitted with its standard stock cooler. This approach is workable: you get sufficient, cheap, but very noisy cooling. How can you retain efficiency while reducing the noise?
There is another extreme: complex technical solutions such as liquid (usually water) cooling, Freon cooling, special aluminum PC cases, and advanced computer cooling systems which dissipate heat through all of their surfaces (they actually work as a heatsink). Such solutions are mandatory for some tasks: for example, for sound-recording studios, where computers must be absolutely noiseless.



Study of joystick and Plotter.





What does Joystick mean?
A joystick is an input device that can be used for controlling the movement of the cursor or a pointer in a computer device. The pointer/cursor movement is controlled by maneuvering a lever on the joystick. The input device is mostly used for gaming applications and, sometimes, in graphics applications. A joystick also can be helpful as an input device for people with movement disabilities.
The joystick is mostly used when there is a need to perform a direct pointing or when a precise function is needed. There are different types of joysticks such as displacement joysticks, hand-operated joysticks, finger-operated joysticks, thumb/fingertip-operated joysticks, hand-operated isometric joysticks, etc.



Similar to the mouse in movement and usage, joysticks also include buttons, sometimes known as triggers. The main difference between the mouse and the joystick is that the cursor/pointer keeps moving in the direction the joystick is tilted until the stick is returned to the upright position, whereas with a mouse the cursor stops as soon as the mouse stops moving.

Advantage
One of the noticeable advantages of the joystick is its ability to provide fast interactions, which are much needed in gaming applications. The joystick provides a much-needed gaming experience, which is better in quality compared to that provided by other input devices. It has a simple design and is easy to learn and use. It is often inexpensive.

Complexity
The joystick, however, is not as easy to handle when selecting options from a screen and is not a preferred input device in such cases. Some joysticks limit the direction of movement to forward, left, right and backward, and do not offer diagonal or lateral movements. Again, the joystick is not as robust as other input devices, and, sometimes, users find it difficult to control compared to other input devices such as the mouse.

What does Plotter mean?
A plotter is a computer hardware device, much like a printer, that is used for printing vector graphics. Instead of toner, plotters use a pen, pencil, marker, or another writing tool to draw multiple, continuous lines onto paper rather than a series of dots like a traditional printer. Though once widely used for computer-aided design, these devices have more or less been phased out by wide-format printers. Plotters are used to produce a hard copy of schematics and other similar applications.

Advantages of plotters

·         Plotters can work on very large sheets of paper while maintaining high resolution.
·         They can print on a wide variety of flat materials including plywood, aluminum, sheet steel, cardboard, and plastic.
·         Plotters allow the same pattern to be drawn thousands of times without any image degradation.

Disadvantages of plotters

·         Plotters are quite large when compared to a traditional printer.
·         Plotters are also much more expensive than a traditional printer.

Saturday 24 February 2018

System Calls

To understand system calls, first one needs to understand the difference between kernel mode and user mode of a CPU. Every modern operating system supports these two modes.

Kernel Mode

  • When the CPU is in kernel mode, the code being executed can access any memory address and any hardware resource.
  • Hence kernel mode is a very privileged and powerful mode.
  • If a program crashes in kernel mode, the entire system will be halted.

User Mode

  • When the CPU is in user mode, programs do not have direct access to memory and hardware resources.
  • In user mode, if any program crashes, only that particular program is halted.
  • That means the system will be in a safe state even if a program in user mode crashes.
  • Hence, most programs in an OS run in user mode.

System Call

When a program in user mode requires access to RAM or a hardware resource, it must ask the kernel to provide access to that resource. This is done via something called a system call.
When a program makes a system call, the mode is switched from user mode to kernel mode. This is called a context switch.
Then the kernel provides the resource which the program requested. After that, another context switch happens which results in change of mode from kernel mode back to user mode.
Generally, system calls are made by the user level programs in the following situations:
  • Creating, opening, closing and deleting files in the file system.
  • Creating and managing new processes.
  • Creating a connection in the network, sending and receiving packets.
  • Requesting access to a hardware device, like a mouse or a printer.
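For example, writing to a file (or to the screen) ultimately goes through a system call. The sketch below uses the write() system call directly instead of a library function such as printf(); file descriptor 1 refers to standard output.

//syscall_sketch.c  (a user program requesting work from the kernel via write())
#include <string.h>
#include <unistd.h>

int main() {
   const char *msg = "printed by the write() system call\n";
   /* the CPU switches to kernel mode, the kernel performs the I/O,
      and control then returns to this program in user mode */
   write(1, msg, strlen(msg));
   return 0;
}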
In a typical UNIX system, there are around 300 system calls. Some of the important ones in this context are described below.

Fork()

The fork() system call is used to create processes. When a process (a program in execution) makes a fork() call, an exact copy of the process is created. Now there are two processes, one being the parent process and the other being the child process.
The process which called fork() is the parent process, and the process which is newly created is called the child process. The child process will be exactly the same as the parent. Note that the process state of the parent, i.e., the address space, variables, open files etc., is copied into the child process. This means that the parent and child processes have identical but physically separate address spaces. A change of values in the parent process does not affect the child, and vice versa.
Both processes start execution from the next line of code i.e., the line after the fork() call. Let's look at an example:
//example.c
#include <stdio.h>
#include <unistd.h>   /* for fork() */

int main() {
   int val;
   val = fork();         // line A
   printf("%d\n", val);  // line B
   return 0;
}
When the above example code is executed, a child process is created at line A. Now both processes start execution from line B. To differentiate between the child process and the parent process, we need to look at the value returned by the fork() call.
The difference is that, in the parent process, fork() returns a value which represents the process ID of the child process. But in the child process, fork() returns the value 0.
This means that, according to the above program, the output of the parent process will be the process ID of the child process, and the output of the child process will be 0.

Exec()

The exec() system call is also used to create processes. But there is one big difference between fork() and exec() calls. The fork() call creates a new process while preserving the parent process. But, an exec() call replaces the address space, text segment, data segment etc. of the current process with the new process.
This means that, after an exec() call, only the new process exists; the process which made the call no longer exists.
There are many flavors of exec() in UNIX, one being execl(), which is shown below as an example:
//example2.c
#include <stdio.h>
#include <unistd.h>   /* for execl() */

int main() {
   execl("/bin/ls", "ls", (char *)0); // line A
   printf("This text won't be printed unless an error occurs in exec().\n");
   return 0;
}
As shown above, the first parameter to the execl() function is the path of the program which needs to be executed, in this case the path of the ls utility in UNIX. It is followed by the name of the program, which is ls in this case, followed by optional arguments. The list should then be terminated by a NULL pointer (0).
When the above example is executed, at line A, the ls program is called and executed and the current process is halted. Hence the printf() function is never called, since the process has already been replaced. The only exception to this is that, if the execl() function causes an error, then the printf() function is executed.

Secondary Storage Structure

Secondary storage devices are those devices whose memory is non volatile, meaning, the stored data will be intact even if the system is turned off. Here are a few things worth noting about secondary storage.
  • Secondary storage is also called auxiliary storage.
  • Secondary storage is less expensive when compared to primary memory like RAMs.
  • The speed of secondary storage is also lower than that of primary storage.
  • Hence, the data which is less frequently accessed is kept in the secondary storage.
  • A few examples are magnetic disks, magnetic tapes, removable thumb drives etc.

Magnetic Disk Structure

In modern computers, most of the secondary storage is in the form of magnetic disks. Hence, knowing the structure of a magnetic disk is necessary to understand how the data in the disk is accessed by the computer.
A magnetic disk contains several platters. Each platter is divided into circular shaped tracks. The length of the tracks near the centre is less than the length of the tracks farther from the centre. Each track is further divided into sectors, as shown in the figure.
Tracks at the same distance from the centre form a cylinder. A read-write head is used to read data from a sector of the magnetic disk.
The speed of the disk is measured as two parts:
  • Transfer rate: This is the rate at which the data moves from disk to the computer.
  • Random access time: It is the sum of the seek time and rotational latency.
Seek time is the time taken by the arm to move to the required track. Rotational latency is the time taken for the required sector to rotate under the read-write head.
Even though the disk is arranged as sectors and tracks physically, the data is logically arranged and addressed as an array of blocks of fixed size. The size of a block can be 512 or 1024 bytes. Each logical block is mapped with a sector on the disk, sequentially. In this way, each sector in the disk will have a logical address.
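The mapping from a logical block number to a physical location can be sketched with a small calculation. The disk geometry used here (2 heads per cylinder, 8 sectors per track) is invented for the example; real drives report their own geometry, and modern drives hide it behind logical block addressing entirely.

//lba_sketch.c  (map a logical block number to cylinder / head / sector)
#include <stdio.h>

#define HEADS_PER_CYLINDER 2    /* hypothetical geometry */
#define SECTORS_PER_TRACK  8

int main() {
   int lba = 35;                                    /* example logical block number */
   int cylinder = lba / (HEADS_PER_CYLINDER * SECTORS_PER_TRACK);
   int head     = (lba / SECTORS_PER_TRACK) % HEADS_PER_CYLINDER;
   int sector   = (lba % SECTORS_PER_TRACK) + 1;    /* sectors are numbered from 1 */

   printf("block %d -> cylinder %d, head %d, sector %d\n",
          lba, cylinder, head, sector);
   return 0;
}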

Disk Scheduling Algorithms

On a typical multiprogramming system, there will usually be multiple disk access requests at any point of time. So those requests must be scheduled to achieve good efficiency. Disk scheduling is similar to process scheduling. Some of the disk scheduling algorithms are described below.

First Come First Serve:

This algorithm performs requests in the same order asked by the system. Let's take an example where the queue has the following requests with cylinder numbers as follows:
98, 183, 37, 122, 14, 124, 65, 67
Assume the head is initially at cylinder 56. The head moves in the given order in the queue i.e., 56→98→183→...→67.

Shortest Seek Time First (SSTF):

Here the position which is closest to the current head position is chosen first. Consider the previous example where disk queue looks like,
98, 183, 37, 122, 14, 124, 65, 67
Assume the head is initially at cylinder 56. The next closest cylinder to 56 is 65, then the next nearest one is 67, then 37, then 14, and so on.

SCAN algorithm:

This algorithm is also called the elevator algorithm because of its behavior. Here, first the head moves in one direction (say backward) and covers all the requests in its path. Then it moves in the opposite direction and covers the remaining requests in the path. This behavior is similar to that of an elevator. Let's take the previous example,
98, 183, 37, 122, 14, 124, 65, 67
Assume the head is initially at cylinder 56. The head moves in backward direction and accesses 37 and 14. Then it goes in the opposite direction and accesses the cylinders as they come in the path.
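The total head movement for a schedule is easy to compute in code. The sketch below does it for the FCFS order used in the example above, starting from cylinder 56; the same loop works for any other ordering (for SSTF or SCAN, reorder the queue first and reuse it).

//seek_sketch.c  (total head movement for the FCFS order in the example)
#include <stdio.h>
#include <stdlib.h>

int main() {
   int queue[] = { 98, 183, 37, 122, 14, 124, 65, 67 };  /* request queue */
   int n = sizeof(queue) / sizeof(queue[0]);
   int head = 56;                 /* initial head position */
   int total = 0;

   for (int i = 0; i < n; i++) {
      total += abs(queue[i] - head);   /* distance moved to serve this request */
      head = queue[i];                 /* head is now at the serviced cylinder */
   }
   printf("FCFS total head movement: %d cylinders\n", total);   /* 637 for this queue */
   return 0;
}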