VM/386

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license. Give it a read and then ask your questions in the chat. We can research this topic together.

In computing, multitasking is the concurrent execution of multiple tasks (also known as processes) over a certain period of time. New tasks can interrupt already started ones before they finish, instead of waiting for them to end. As a result, a computer executes segments of multiple tasks in an interleaved manner, while the tasks share common processing resources such as central processing units (CPUs) and main memory. Multitasking automatically interrupts the running program, saving its state (partial results, memory contents and computer register contents), loading the saved state of another program, and transferring control to it. This "context switch" may be initiated at fixed time intervals (pre-emptive multitasking), or the running program may be coded to signal to the supervisory software when it can be interrupted (cooperative multitasking).
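As a rough illustration of the cooperative case, a minimal sketch assuming a POSIX system (real cooperative multitaskers such as 16-bit Windows used their own yield calls): a long-running task periodically tells the scheduler that it may be switched out.

```c
/* Minimal sketch of cooperative yielding on a POSIX system (illustrative only). */
#include <sched.h>
#include <stdio.h>

int main(void) {
    for (long i = 0; i < 1000000; i++) {
        /* ... do a small slice of work ... */
        if (i % 1000 == 0) {
            sched_yield();   /* voluntarily let the scheduler run another task */
        }
    }
    puts("done");
    return 0;
}
```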

VM/386 is a multitasking, multi-user environment, or 'control program', that took early advantage of the capabilities of Intel's 386 processor. By utilizing Virtual 8086 mode, users were able to run their existing text-based and graphical DOS software in safely separate environments. The system offered a high degree of control, with the ability to set memory limits, CPU usage and scheduling parameters, device assignments, and interrupt priorities through

A swap file or swap partition is a way for the operating system to provide more memory than is physically available by keeping portions of the primary memory in secondary storage. While multitasking and memory swapping are two completely unrelated techniques, they are very often used together, as swapping memory allows more tasks to be loaded at the same time. Typically, a multitasking system allows another process to run when

A thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system. In many cases, a thread is a component of a process. The multiple threads of a given process may be executed concurrently (via multithreading capabilities), sharing resources such as memory, while different processes do not share these resources. In particular,

A 1:1 model. FreeBSD 5 implemented the M:N model. FreeBSD 6 supported both 1:1 and M:N; users could choose which one should be used with a given program using /etc/libmap.conf. Starting with FreeBSD 7, the 1:1 model became the default. FreeBSD 8 no longer supports the M:N model. In computer programming, single-threading is the processing of one command at a time. In the formal analysis of the variables' semantics and process state,

A Program Distributor feeding up to twenty-five autonomous processing units with code and data, and allowing concurrent operation of multiple clusters. Another such computer was the LEO III, first released in 1961. During batch processing, several different programs were loaded in the computer memory, and the first one began to run. When the first program reached an instruction waiting for a peripheral,

A computer's memory, allowing the CPU to switch between them swiftly. This optimizes CPU utilization by keeping it engaged with the execution of tasks, particularly useful when one program is waiting for I/O operations to complete. The Bull Gamma 60, initially designed in 1957 and first released in 1960, was the first computer designed with multiprogramming in mind. Its architecture featured a central memory and

A context switch. On multi-processor systems, the thread may instead poll the mutex in a spinlock. Both of these may sap performance and force processors in symmetric multiprocessing (SMP) systems to contend for the memory bus, especially if the granularity of the locking is too fine. Other synchronization APIs include condition variables, critical sections, semaphores, and monitors. A popular programming pattern involving threads

A cooperatively multitasked thread blocks by waiting on a resource or if it starves other threads by not yielding control of execution during intensive computation. Until the early 2000s, most desktop computers had only one single-core CPU, with no support for hardware threads, although threads were still used on such computers because switching between threads was generally still quicker than full-process context switches. In 2002, Intel added support for simultaneous multithreading to

A program will run in a timely manner. Indeed, the first program may very well run for hours without needing access to a peripheral. As there were no users waiting at an interactive terminal, this was no problem: users handed in a deck of punched cards to an operator, and came back a few hours later for printed results. Multiprogramming greatly reduced wait times when multiple batches were being processed. Early multitasking systems used applications that voluntarily ceded time to one another. This approach, which

A running fiber must explicitly "yield" to allow another fiber to run, which makes their implementation much easier than kernel or user threads. A fiber can be scheduled to run in any thread in the same process. This permits applications to gain performance improvements by managing scheduling themselves, instead of relying on the kernel scheduler (which may not be tuned for the application). Some research implementations of
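For a concrete picture of explicit yielding, a minimal sketch of a single cooperatively scheduled context using the POSIX ucontext API (an assumption made for illustration; production fiber libraries such as Windows fibers or Boost.Context differ in detail):

```c
/* Minimal sketch of a cooperative "fiber" using POSIX ucontext. The fiber runs
 * until it explicitly yields back to the main context. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, fiber_ctx;
static char fiber_stack[64 * 1024];

static void fiber_fn(void) {
    for (int i = 0; i < 3; i++) {
        printf("fiber step %d\n", i);
        swapcontext(&fiber_ctx, &main_ctx);   /* explicit yield to the scheduler */
    }
}

int main(void) {
    getcontext(&fiber_ctx);
    fiber_ctx.uc_stack.ss_sp = fiber_stack;
    fiber_ctx.uc_stack.ss_size = sizeof fiber_stack;
    fiber_ctx.uc_link = &main_ctx;            /* return here if the fiber ends */
    makecontext(&fiber_ctx, fiber_fn, 0);

    for (int i = 0; i < 3; i++) {
        printf("scheduler resumes fiber\n");
        swapcontext(&main_ctx, &fiber_ctx);   /* run the fiber until it yields */
    }
    return 0;
}
```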

A single processor might be shared between calculations of machine movement, communications, and user interface. Multitasking operating systems often include measures to change the priority of individual tasks, so that important jobs receive more processor time than those considered less significant. Depending on the operating system, a task might be as large as an entire application program, or might be made up of smaller threads that carry out portions of

A useful abstraction of concurrent execution. Multithreading can also be applied to one process to enable parallel execution on a multiprocessing system. Multithreading libraries tend to provide a function call to create a new thread, which takes a function as a parameter. A concurrent thread is then created which starts running the passed function and ends when the function returns. The thread libraries also offer data synchronization functions. Threads in
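As a sketch of that creation pattern, assuming POSIX threads (other libraries differ mainly in naming), a thread is started on a passed function and joined when the function returns:

```c
/* Minimal POSIX-threads sketch: create a thread on a function, wait for it to finish. */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    printf("hello from thread %ld\n", (long)arg);
    return NULL;                       /* the thread ends when the function returns */
}

int main(void) {
    pthread_t tid;
    if (pthread_create(&tid, NULL, worker, (void *)1L) != 0) {
        perror("pthread_create");
        return 1;
    }
    pthread_join(tid, NULL);           /* wait for the thread to terminate */
    return 0;
}
```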

A variant of threads, named fibers, that are scheduled cooperatively. On operating systems that do not provide fibers, an application may implement its own fibers using repeated calls to worker functions. Fibers are even more lightweight than threads, and somewhat easier to program with, although they tend to lose some or all of the benefits of threads on machines with multiple processors. Some systems directly support multithreading in hardware. Essential to any multitasking system
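A very small sketch of the "repeated calls to worker functions" idea mentioned here, with no OS fiber support assumed: each task exposes a step function that does a little work and returns, and a plain loop acts as the cooperative scheduler.

```c
/* Sketch: cooperative "fibers" without OS support. Each task's step function does a
 * little work and returns; a loop round-robins the tasks until all have finished. */
#include <stdbool.h>
#include <stdio.h>

static bool task_a(void) { static int i = 0; printf("A step %d\n", i); return ++i < 3; }
static bool task_b(void) { static int i = 0; printf("B step %d\n", i); return ++i < 3; }

int main(void) {
    bool (*tasks[])(void) = { task_a, task_b };
    bool alive[] = { true, true };
    int remaining = 2;

    while (remaining > 0) {                 /* round-robin "scheduler" */
        for (int i = 0; i < 2; i++) {
            if (alive[i] && !tasks[i]()) {  /* a task "finishes" by returning false */
                alive[i] = false;
                remaining--;
            }
        }
    }
    return 0;
}
```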

A virtual machine manager menu. Unique CONFIG.SYS and AUTOEXEC.BAT files could be configured for each application, and even different DOS versions. In 1991 the vendor announced intentions to support DPMI 1.0 in VM/386. VM/386 had initially been developed by Softguard Systems, a producer of copy-protection software, with plans to include features like non-DOS system support, but financial constraints forced its sale to Intelligent Graphics Corporation (IGC), which launched

Is a "heavyweight" unit of kernel scheduling, as creating, destroying, and switching processes is relatively expensive. Processes own resources allocated by the operating system. Resources include memory (for both code and data), file handles, sockets, device handles, windows, and a process control block. Processes are isolated by process isolation, and do not share address spaces or file resources except through explicit methods such as inheriting file handles or shared memory segments, or mapping

Is a "lightweight" unit of kernel scheduling. At least one kernel thread exists within each process. If multiple kernel threads exist within a process, then they share the same memory and file resources. Kernel threads are preemptively multitasked if the operating system's process scheduler is preemptive. Kernel threads do not own resources except for a stack, a copy of the registers including
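A small sketch of the sharing described here, assuming a POSIX system: a thread spawned inside the process observes a write to a global variable, while a forked child process, which has its own copy of the address space, does not affect the parent's copy.

```c
/* Sketch: threads in one process share memory; separate processes do not. */
#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static int shared = 0;

static void *reader(void *arg) {
    (void)arg;
    printf("thread sees shared = %d\n", shared);   /* prints 42: same address space */
    return NULL;
}

int main(void) {
    shared = 42;

    pthread_t tid;
    pthread_create(&tid, NULL, reader, NULL);
    pthread_join(tid, NULL);

    if (fork() == 0) {                 /* child gets a *copy* of the address space */
        shared = 99;                   /* modifies only the child's copy */
        _exit(0);
    }
    wait(NULL);
    printf("parent still sees shared = %d\n", shared);   /* still 42 */
    return 0;
}
```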

Is a common feature of computer operating systems since at least the 1960s. It allows more efficient use of the computer hardware; when a program is waiting for some external event, such as user input or an input/output transfer with a peripheral to complete, the central processor can still be used with another program. In a time-sharing system, multiple human operators use the same processor as if it

Is initiated, a system call is made, and does not return until the I/O operation has been completed. In the intervening period, the entire process is "blocked" by the kernel and cannot run, which starves other user threads and fibers in the same process from executing. A common solution to this problem (used, in particular, by many green threads implementations) is providing an I/O API that implements an interface that blocks

Is not so great a difference except in the cost of an address-space switch, which on some architectures (notably x86) results in a translation lookaside buffer (TLB) flush. Advantages and disadvantages of threads vs. processes include: Operating systems schedule threads either preemptively or cooperatively. Multi-user operating systems generally favor preemptive multithreading for its finer-grained control over execution time via context switching. However, preemptive scheduling may context-switch threads at moments unanticipated by programmers, thus causing lock convoy, priority inversion, or other side effects. In contrast, cooperative multithreading relies on threads to relinquish control of execution, thus ensuring that threads run to completion. This can cause problems if

Is still used today on RISC OS systems. As a cooperatively multitasked system relies on each process regularly giving up time to other processes on the system, one poorly designed program can consume all of the CPU time for itself, either by performing extensive calculations or by busy waiting; both would cause the whole system to hang. In a server environment, this is a hazard that makes

Is that of thread pools, where a set number of threads are created at startup and then wait for a task to be assigned. When a new task arrives, a waiting thread wakes up, completes the task, and goes back to waiting. This avoids the relatively expensive thread creation and destruction functions for every task performed and takes thread management out of the application developer's hands, leaving it to a library or
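A minimal sketch of the thread-pool pattern using POSIX threads; the names (task_t, pool_submit) and the fixed-size queue are illustrative assumptions rather than any standard API, and shutdown and error handling are omitted:

```c
/* Thread-pool sketch: a fixed set of workers waits on a condition variable for
 * tasks pushed into a small ring buffer. Shutdown and error handling omitted. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define QUEUE_CAP 16
#define NUM_WORKERS 4

typedef struct { void (*fn)(int); int arg; } task_t;

static task_t queue[QUEUE_CAP];
static int q_head = 0, q_tail = 0, q_len = 0;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t q_nonempty = PTHREAD_COND_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&q_lock);
        while (q_len == 0)                       /* sleep until a task is queued */
            pthread_cond_wait(&q_nonempty, &q_lock);
        task_t t = queue[q_head];
        q_head = (q_head + 1) % QUEUE_CAP;
        q_len--;
        pthread_mutex_unlock(&q_lock);
        t.fn(t.arg);                             /* run the task, then wait again */
    }
    return NULL;
}

static void pool_submit(void (*fn)(int), int arg) {
    pthread_mutex_lock(&q_lock);
    if (q_len < QUEUE_CAP) {                     /* drop tasks if full (sketch only) */
        queue[q_tail] = (task_t){ fn, arg };
        q_tail = (q_tail + 1) % QUEUE_CAP;
        q_len++;
        pthread_cond_signal(&q_nonempty);        /* wake one waiting worker */
    }
    pthread_mutex_unlock(&q_lock);
}

static void print_task(int n) { printf("task %d done\n", n); }

int main(void) {
    pthread_t workers[NUM_WORKERS];
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_create(&workers[i], NULL, worker, NULL);
    for (int i = 0; i < 8; i++)
        pool_submit(print_task, i);
    sleep(1);                                    /* crude: let workers drain the queue */
    return 0;
}
```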

Is to safely and effectively share access to system resources. Access to memory must be strictly managed to ensure that no process can inadvertently or deliberately read or write to memory locations outside the process's address space. This is done for the purpose of general system stability and data integrity, as well as data security. In general, memory access management is a responsibility of

Is typically uniformly done preemptively or, less commonly, cooperatively. At the user level, a process such as a runtime system can itself schedule multiple threads of execution. If these do not share data, as in Erlang, they are usually analogously called processes, while if they share data they are usually called (user) threads, particularly if preemptively scheduled. Cooperatively scheduled user threads are known as fibers; different processes may schedule user threads differently. User threads may be executed by kernel threads in various ways (one-to-one, many-to-one, many-to-many). The term "light-weight process" variously refers to user threads or to kernel mechanisms for scheduling user threads onto kernel threads. A process

Is unaware of them, so they are managed and scheduled in userspace. Some implementations base their user threads on top of several kernel threads, to benefit from multi-processor machines (M:N model). User threads as implemented by virtual machines are also called green threads. As user thread implementations are typically entirely in userspace, context switching between user threads within

The Classic Mac OS. In 2001 Apple switched to the NeXTSTEP-influenced Mac OS X. A similar model is used in Windows 9x and the Windows NT family, where native 32-bit applications are multitasked preemptively. 64-bit editions of Windows, both for the x86-64 and Itanium architectures, no longer support legacy 16-bit applications, and thus provide preemptive multitasking for all supported applications. Another reason for multitasking

The OpenMP parallel programming model implement their tasks through fibers. Closely related to fibers are coroutines, with the distinction being that coroutines are a language-level construct, while fibers are a system-level construct. Threads differ from traditional multitasking operating-system processes in several ways: Systems such as Windows NT and OS/2 are said to have cheap threads and expensive processes; in other operating systems there

The Pentium 4 processor, under the name hyper-threading; in 2005, they introduced the dual-core Pentium D processor, and AMD introduced the dual-core Athlon 64 X2 processor. Systems with a single processor generally implement multithreading by time slicing: the central processing unit (CPU) switches between different software threads. This context switching usually occurs frequently enough that users perceive
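To make the distinction between time slicing and true parallelism concrete, a small sketch (POSIX assumed) that asks how many logical processors are online and starts one worker per processor; on a single-processor machine the same code still runs, purely by time slicing:

```c
/* Sketch: one worker thread per online logical processor. With more threads than
 * processors, the scheduler falls back to time slicing. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg) {
    printf("worker %ld running\n", (long)arg);
    return NULL;
}

int main(void) {
    long cores = sysconf(_SC_NPROCESSORS_ONLN);   /* logical processors available */
    if (cores < 1) cores = 1;
    if (cores > 64) cores = 64;
    printf("online processors: %ld\n", cores);

    pthread_t tids[64];
    for (long i = 0; i < cores; i++)
        pthread_create(&tids[i], NULL, worker, (void *)i);
    for (long i = 0; i < cores; i++)
        pthread_join(tids[i], NULL);
    return 0;
}
```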

The Sinclair QL followed in 1984, but it was not a big success. Commodore's Amiga was released the following year, offering a combination of multitasking and multimedia capabilities. Microsoft made preemptive multitasking a core feature of their flagship operating system in the early 1990s when developing Windows NT 3.1 and then Windows 95. In 1988 Apple offered A/UX as a UNIX System V-based alternative to

The program counter, and thread-local storage (if any), and are thus relatively cheap to create and destroy. Thread switching is also relatively cheap: it requires a context switch (saving and restoring registers and stack pointer), but does not change virtual memory and is thus cache-friendly (leaving the TLB valid). The kernel can assign one or more software threads to each core in a CPU (a core can itself be assigned multiple software threads, depending on its support for multithreading), and can swap out threads that get blocked. However, kernel threads take much longer than user threads to be swapped. Threads are sometimes implemented in userspace libraries, thus called user threads. The kernel

The CPU ("CPU bound"). In primitive systems, the software would often "poll", or "busy-wait", while waiting for requested input (such as disk, keyboard or network input). During this time, the system was not performing useful work. With the advent of interrupts and preemptive multitasking, I/O bound processes could be "blocked", or put on hold, pending the arrival of the necessary data, allowing other processes to utilize

The CPU. As the arrival of the requested data would generate an interrupt, blocked processes could be guaranteed a timely return to execution. Possibly the earliest preemptive multitasking OS available to home users was Microware's OS-9, available for computers based on the Motorola 6809 such as the TRS-80 Color Computer 2, with the operating system supplied by Tandy as an upgrade for disk-equipped systems. Sinclair QDOS on

The M:N implementation, the threading library is responsible for scheduling user threads on the available schedulable entities; this makes context switching of threads very fast, as it avoids system calls. However, this increases complexity and the likelihood of priority inversion, as well as suboptimal scheduling without extensive (and expensive) coordination between the userland scheduler and

The OS/360 control system, of which Multiprogramming with a Variable Number of Tasks (MVT) was one. Saltzer (1966) credits Victor A. Vyssotsky with the term "thread". The use of threads in software applications became more common in the early 2000s as CPUs began to utilize multiple cores. Applications wishing to take advantage of multiple cores for performance advantages were required to employ concurrency to utilize

The calling thread, rather than the entire process, by using non-blocking I/O internally, and scheduling another user thread or fiber while the I/O operation is in progress. Similar solutions can be provided for other blocking system calls. Alternatively, the program can be written to avoid the use of synchronous I/O or other blocking system calls (in particular, using non-blocking I/O, including lambda continuations and/or async/await primitives). Fibers are an even lighter unit of scheduling, which are cooperatively scheduled:
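A rough sketch of the non-blocking approach, assuming POSIX I/O: the descriptor is switched to non-blocking mode and readiness is awaited with poll(), which is exactly the point where a green-thread runtime would schedule another user thread instead of blocking the whole process.

```c
/* Sketch: non-blocking read with readiness polling (POSIX). A green-thread runtime
 * would run other user threads wherever this example merely waits in poll(). */
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = STDIN_FILENO;
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);   /* never block in read() */

    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    char buf[256];

    for (;;) {
        int ready = poll(&pfd, 1, 1000);        /* wait up to 1 s for readability */
        if (ready > 0 && (pfd.revents & POLLIN)) {
            ssize_t n = read(fd, buf, sizeof buf);
            if (n <= 0) break;                  /* EOF or error */
            write(STDOUT_FILENO, buf, (size_t)n);
        } else if (ready == 0) {
            /* timeout: a user-level scheduler would switch to another fiber here */
        } else {
            break;                              /* poll error */
        }
    }
    return 0;
}
```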

The context of this program was stored away, and the second program in memory was given a chance to run. The process continued until all programs finished running. The use of multiprogramming was enhanced by the arrival of virtual memory and virtual machine technology, which enabled individual programs to make use of memory and operating system resources as if other concurrently running programs were, for all practical purposes, nonexistent. Multiprogramming gives no guarantee that

The early days of computing, CPU time was expensive, and peripherals were very slow. When the computer ran a program that needed access to a peripheral, the central processing unit (CPU) would have to stop executing program instructions while the peripheral processed the data. This was usually very inefficient. Multiprogramming is a computing technique that enables multiple programs to be concurrently loaded and executed into

The entire environment unacceptably fragile. Preemptive multitasking allows the computer system to more reliably guarantee to each process a regular "slice" of operating time. It also allows the system to deal rapidly with important external events like incoming data, which might require the immediate attention of one or another process. Operating systems were developed to take advantage of these hardware capabilities and run multiple processes preemptively. Preemptive multitasking

The idea that the most efficient way for cooperating processes to exchange data would be to share their entire memory space. Thus, threads are effectively processes that run in the same memory context and share other resources with their parent processes, such as open files. Threads are described as lightweight processes because switching between threads does not involve changing the memory context. While threads are scheduled preemptively, some operating systems provide

The kernel has no knowledge of the application threads. With this approach, context switching can be done very quickly and, in addition, it can be implemented even on simple kernels which do not support threading. One of the major drawbacks, however, is that it cannot benefit from the hardware acceleration on multithreaded processors or multi-processor computers: there is never more than one thread being scheduled at

The kernel scheduler. SunOS 4.x implemented light-weight processes or LWPs. NetBSD 2.x+ and DragonFly BSD implement LWPs as kernel threads (1:1 model). SunOS 5.2 through SunOS 5.8, as well as NetBSD 2 to NetBSD 4, implemented a two-level model, multiplexing one or more user-level threads on each kernel thread (M:N model). SunOS 5.9 and later, as well as NetBSD 5, eliminated user threads support, returning to

The multiple cores. Scheduling can be done at the kernel level or user level, and multitasking can be done preemptively or cooperatively. This yields a variety of related concepts. At the kernel level, a process contains one or more kernel threads, which share the process's resources, such as memory and file handles; a process is a unit of resources, while a thread is a unit of scheduling and execution. Kernel scheduling

The operating system kernel, in combination with hardware mechanisms that provide supporting functionalities, such as a memory management unit (MMU). If a process attempts to access a memory location outside its memory space, the MMU denies the request and signals the kernel to take appropriate actions; this usually results in forcibly terminating the offending process. Depending on the software and kernel design and

The overall program. A processor intended for use with multitasking operating systems may include special hardware to securely support multiple tasks, such as memory protection and protection rings that ensure the supervisory software cannot be damaged or subverted by user-mode program errors. The term "multitasking" has become an international term, as the same word is used in many other languages such as German, Italian, Dutch, Romanian, Czech, Danish and Norwegian. In

The product in 1987. It won a PC Magazine award for technical excellence in 1988. The company also introduced a multi-user version, which allowed a number of serial terminals and even graphical systems to be connected to a single 386 computer. Current versions of the software have built on the multi-user support, and can handle tens of users in a networked environment with Windows 3.11 support, access controls, virtual memory and device sharing, among other features. A version of

The requirements of the program's workload. However, the use of blocking system calls in user threads (as opposed to kernel threads) can be problematic. If a user thread or a fiber performs a system call that blocks, the other user threads and fibers in the process are unable to run until the system call returns. A typical example of this problem is when performing I/O: most programs are written to perform I/O synchronously. When an I/O operation

The running process hits a point where it has to wait for some portion of memory to be reloaded from secondary storage. Processes that are entirely independent are not much trouble to program in a multitasking environment. Most of the complexity in multitasking systems comes from the need to share computer resources between tasks and to synchronize the operation of co-operating tasks. Various concurrent computing techniques are used to avoid potential problems caused by multiple tasks attempting to access

The same file in a shared way (see interprocess communication). Creating or destroying a process is relatively expensive, as resources must be acquired or released. Processes are typically preemptively multitasked, and process switching is relatively expensive, beyond the basic cost of context switching, due to issues such as cache flushing (in particular, process switching changes virtual memory addressing, causing invalidation and thus flushing of an untagged translation lookaside buffer (TLB), notably on x86). A kernel thread

The same process is extremely efficient because it does not require any interaction with the kernel at all: a context switch can be performed by locally saving the CPU registers used by the currently executing user thread or fiber and then loading the registers required by the user thread or fiber to be executed. Since scheduling occurs in userspace, the scheduling policy can be more easily tailored to

The same process share the same address space. This allows concurrently running code to couple tightly and conveniently exchange data without the overhead or complexity of an IPC. When shared between threads, however, even simple data structures become prone to race conditions if they require more than one CPU instruction to update: two threads may end up attempting to update the data structure at

The same resource. Bigger systems were sometimes built with one or more central processors and some number of I/O processors, a kind of asymmetric multiprocessing. Over the years, multitasking systems have been refined. Modern operating systems generally include detailed mechanisms for prioritizing processes, while symmetric multiprocessing has introduced new complexities and capabilities. Thread (computing) In computer science,

The same time and find it unexpectedly changing underfoot. Bugs caused by race conditions can be very difficult to reproduce and isolate. To prevent this, threading application programming interfaces (APIs) offer synchronization primitives such as mutexes to lock data structures against concurrent access. On uniprocessor systems, a thread running into a locked mutex must sleep and hence trigger
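A minimal sketch of such a race and its mutex fix, assuming POSIX threads: because counter++ compiles to more than one CPU instruction, removing the lock usually yields a final count below the expected value.

```c
/* Sketch: two threads increment a shared counter. The unprotected version races
 * because "counter++" is more than one CPU instruction; the mutex serializes it. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);     /* remove the lock/unlock pair to see the race */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}
```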

The same time. For example: if one of the threads needs to execute an I/O request, the whole process is blocked and the threading advantage cannot be used. GNU Portable Threads uses user-level threading, as does State Threads. An M:N model maps some number M of application threads onto some number N of kernel entities, or "virtual processors". This is a compromise between kernel-level ("1:1") and user-level ("N:1") threading. In general, "M:N" threading systems are more complex to implement than either kernel or user threads, because changes to both kernel and user-space code are required. In

The software designed to cooperate with Unix was bundled with Everex Systems workstations. The system now sees use mainly in vertical applications like point-of-sale systems, where its ability to run reliably on cheap hardware outweighs any gains from newer operating systems that are more complex and less reliable. Early competition included DESQview 386, Sunny Hill Software's Omniview, StarPath Systems' Vmos/3, and Windows/386 2.01. As

The specific error in question, the user may receive an access violation error message such as "segmentation fault". In a well-designed and correctly implemented multitasking system, a given process can never directly access memory that belongs to another process. An exception to this rule is in the case of shared memory; for example, in the System V inter-process communication mechanism the kernel allocates memory to be mutually shared by multiple processes. Such features are often used by database management software such as PostgreSQL. Inadequate memory protection mechanisms, either due to flaws in their design or poor implementations, allow for security vulnerabilities that may be potentially exploited by malicious software. Use of
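A small sketch of that System V mechanism (System V IPC assumed): the parent creates a segment with shmget(), parent and forked child both attach it with shmat(), and a write by the child is visible to the parent.

```c
/* Sketch: kernel-mediated shared memory between two processes (System V IPC).
 * Error handling omitted for brevity. */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Create a private 4 KiB segment that is shared across fork(). */
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    char *mem = shmat(shmid, NULL, 0);      /* attach it into this address space */

    if (fork() == 0) {                      /* child: write into the shared segment */
        strcpy(mem, "hello from the child process");
        shmdt(mem);
        _exit(0);
    }

    wait(NULL);                             /* parent: the child's write is visible */
    printf("parent reads: %s\n", mem);
    shmdt(mem);
    shmctl(shmid, IPC_RMID, NULL);          /* mark the segment for removal */
    return 0;
}
```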

The target market shifted away from single-user systems to multiple-user setups with many serial terminals, it began to compete more directly with the likes of Multiuser DOS and PC-MOS/386. Computer multitasking Multitasking does not require parallel execution of multiple tasks at exactly the same time; instead, it allows more than one task to advance over a given period of time. Even on multiprocessor computers, multitasking allows many more tasks to be run than there are CPUs. Multitasking

The term single-threading can be used differently to mean "backtracking within a single thread", which is common in the functional programming community. Multithreading is mainly found in multitasking operating systems. Multithreading is a widespread programming and execution model that allows multiple threads to exist within the context of one process. These threads share the process's resources, but are able to execute independently. The threaded programming model provides developers with

The threads of a process share its executable code and the values of its dynamically allocated variables and non-thread-local global variables at any given time. The implementation of threads and processes differs between operating systems. Threads made an early appearance under the name of "tasks" in IBM's batch processing operating system, OS/360, in 1967. It provided users with three available configurations of

The threads or tasks as running in parallel (for popular server/desktop operating systems, the maximum time slice of a thread, when other threads are waiting, is often limited to 100–200 ms). On a multiprocessor or multi-core system, multiple threads can execute in parallel, with every processor or core executing a separate thread simultaneously; on a processor or core with hardware threads, separate software threads can also be executed concurrently by separate hardware threads. Threads created by

The user in a 1:1 correspondence with schedulable entities in the kernel are the simplest possible threading implementation. OS/2 and Win32 used this approach from the start, while on Linux the GNU C Library implements this approach (via the NPTL or older LinuxThreads). This approach is also used by Solaris, NetBSD, FreeBSD, macOS, and iOS. An M:1 model implies that all application-level threads map to one kernel-level scheduled entity;

Was dedicated to their use, while behind the scenes the computer is serving many users by multitasking their individual programs. In multiprogramming systems, a task runs until it must wait for an external event or until the operating system's scheduler forcibly swaps the running task out of the CPU. Real-time systems, such as those designed to control industrial robots, require timely processing;

Was eventually supported by many computer operating systems, is known today as cooperative multitasking. Although it is now rarely used in larger systems except for specific applications such as CICS or the JES2 subsystem, cooperative multitasking was once the only scheduling scheme employed by Microsoft Windows and classic Mac OS to enable multiple applications to run simultaneously. Cooperative multitasking

Was implemented in the PDP-6 Monitor and Multics in 1964, in OS/360 MFT in 1967, and in Unix in 1969, and was available in some operating systems for computers as small as DEC's PDP-8; it is a core feature of all Unix-like operating systems, such as Linux, Solaris and BSD with its derivatives, as well as modern versions of Windows. At any specific time, processes can be grouped into two categories: those that are waiting for input or output (called "I/O bound"), and those that are fully utilizing

Was in the design of real-time computing systems, where there are a number of possibly unrelated external activities that need to be controlled by a single processor system. In such systems a hierarchical interrupt system is coupled with process prioritization to ensure that key activities are given a greater share of available process time. As multitasking greatly improved the throughput of computers, programmers started to implement applications as sets of cooperating processes (e.g., one process gathering input data, one process processing input data, one process writing out results on disk). This, however, required some tools to allow processes to efficiently exchange data. Threads were born from
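A minimal sketch of such a cooperating-process design, assuming POSIX: one process produces data, another consumes it, and a pipe carries the data between their separate address spaces.

```c
/* Sketch: two cooperating processes exchanging data through a pipe.
 * The child produces a line; the parent consumes and prints it. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    pipe(fds);                                  /* fds[0] = read end, fds[1] = write end */

    if (fork() == 0) {                          /* producer process */
        close(fds[0]);
        const char *msg = "data gathered by the producer\n";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        _exit(0);
    }

    close(fds[1]);                              /* consumer process */
    char buf[128];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    close(fds[0]);
    wait(NULL);
    return 0;
}
```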
