The kernel generally provides shared access to hardware, making use of various device drivers to present a consistent abstract interface. For example, memory is managed by the kernel, and applications ask the kernel for memory as needed. The kernel may provide sophisticated memory management, memory protection, and separate address spaces for different tasks, and it reclaims memory when a task ends. Tasks and threads are managed by the kernel as well: it shares the CPU through scheduling of tasks, and may time-slice CPU access to provide multi-tasking. Access to mass storage is usually abstracted by providing one or more file systems (although not always; some systems use a database metaphor or some other means). The kernel also allows communication between tasks by means of InterProcessCommunication?, which is often extended to networking between machines as well. Multi-user features are provided by some operating systems via security measures that let the system track which user is running which task.
Many operating systems provide a GraphicalUserInterface?. This abstracts the display screen, usually allowing multiple applications to share the display through overlapping windows. The GraphicalUserInterface? also provides drivers for a pointing device and some sort of widget system for presenting user-interface objects such as buttons, icons, and scrollbars. Graphical operating systems usually provide a graphical application-launching shell, or desktop, for program execution.
The core of a computer's software. Its job is starting processes, managing hardware, and keeping the system going. The current standard is multi-process, multitasking, and multi-user, but PalmOS does neither multi-user nor multitasking and works very well.
As time goes by, more things that go with a computing system come to be considered part of the OperatingSystem. Apple put a WindowingSystem? into the OperatingSystem with the original MacOs, and MicroSoft pretty much followed suit with Windows, then raised the stakes by adding InternetExplorer to the OperatingSystem with Windows95. Linux countered by putting a web server into the kernel for testing, but that is generally considered a silly and insecure thing to do. Apple countered by splitting MacOsx into a separate UnixOs? (one with a microkernel) and WindowingSystem?.
People add things to the kernel of the operating system until it gets ridiculous, then they recognize the value of MicroKernel? architecture. ;->
The Linux kernel isn't particularly bloated; not many kernels are, actually. Don't mistake an operating system for a huge kernel. Not a whole lot is in the Linux kernel that it doesn't need. However, for speed reasons, a lot of system services and device drivers are in the kernel; a microkernel runs these outside the kernel. Considering that you need these services anyway, putting them inside the kernel doesn't cause any additional bloat: you just get a quicker interface to the structures these portions of the OS need to access. It is not universally accepted that a microkernel is as efficient as a monolithic kernel, nor that it is necessarily smaller, although most microkernels are smaller. Technically, Mac OS X contains a microkernel (Mach), but Mach can do nothing without Darwin (the BSD layer), so it's not much smaller than a regular BSD. In total, OS X may be one of the most resource-intensive operating systems around, and not particularly fast or smooth. So having a microkernel didn't help it much, if at all.
See QNX [1] for an example of an operating system that goes really minimal. I would argue that QNX has shown that an operating system really need only perform the following tasks:
In this scheme all other tasks, including hardware support, are in the realm of processes. -- anon.
The above makes it sound as though a microkernel architecture is universally heralded as better. It isn't. Quite a few people think it costs too much in terms of performance (remember: Moore's law translates to "you can take twice as long to do something, but only if you don't want to do anything more than you were doing 2 years ago with the old version").
More interesting to me than the operating system kernel is the set of APIs that developers of "normal" applications can use. This gets muddied by the addition of shared libraries - but even then, someone eventually chooses a set of "standard" libraries. Microsoft has theirs; traditional UNIX had libc (and possibly Motif and xlib?). Linux seems to be getting its libraries through a very interesting Darwinian evolution (e.g. I hear tell that KDE will switch to Orbit, and Gnome to libarts instead of esd). Because many different libraries that do the same thing can co-exist easily on a given computer, it's quite possible to end up with 2 or 3 libraries that all do the same thing - and only when one offers clear advantages over another will developers switch from the one they are already familiar with. An "advantage" can be anything from one library shipping by default with a popular distribution, to superior source code or better documentation. --ErikDeBill
An OS that fits on a single floppy: http://www.menuetos.org/ -- now there's something you've not seen in a while!