1
Operating System
  • Allen C.-H. Wu
  • Department of Computer Science
  • Tsing Hua University

2
Part I: Overview. Ch. 1 Introduction
  • Operating system: a program that acts as an
    intermediary between a user and the computer
    hardware. Its goals are to make the computer
    system convenient to use and to run it in an
    efficient manner.
  • Why, what and how?
  • DOS, Windows, UNIX, Linux
  • Single-user, multi-user

3
1.1 What Is an Operating System
  • OS as a government/resource allocator => CPU,
    memory, I/O, storage
  • OS as a control program: controls the execution of
    user programs to prevent errors and improper use
    of the computer.
  • Convenience for the user and efficient operation
    of the computer system

4
1.2 Mainframe Systems
  • Batch systems
  • Multiprogrammed systems
  • Time-sharing systems

5
Batch Systems
  • In the early days (before the PC era), computers
    were extremely expensive; only a few institutions
    could afford one.
  • The common I/O devices included card readers, tape
    drives, and line printers.
  • To speed up processing, operators batched
    together jobs with similar needs and ran them
    through the computer as a group.
  • The OS was simple: it only needed to automatically
    transfer control from one job to the next.

6
Batch Systems
  • Speed(CPU) >> speed(I/O, card readers) => the CPU
    is frequently idle.
  • With the introduction of disk technology, the OS
    can keep all jobs on a disk instead of a serial
    card reader. The OS can then perform job
    scheduling (Ch. 6) to execute tasks more
    efficiently.

7
Multiprogrammed Systems
  • Multiprogramming: the OS keeps several jobs in
    memory simultaneously, interleaving CPU and I/O
    operations among different jobs to maximize
    CPU utilization.
  • Real-life example: a lawyer handles multiple cases
    for many clients.
  • Multiprogramming is the first instance where the
    OS must make decisions for the users: job
    scheduling and CPU scheduling.

8
Time-Sharing Systems
  • Time sharing (multitasking): the CPU executes
    multiple jobs by switching among them, but the
    switches are so quick and so frequent that the
    users can interact with each program while it is
    running (each user thinks that he/she is the only
    user).
  • A time-sharing OS uses CPU scheduling and
    multiprogramming to provide each user with a
    small portion of a time-shared computer.
  • Process: a program that is loaded into memory and
    executed.

9
Time-Sharing Systems
  • Need memory management and protection methods
    (Ch. 9)
  • Virtual memory (Ch. 10)
  • File systems (Ch. 11)
  • Disk management (Ch. 13)
  • CPU scheduling (Ch. 6)
  • Synchronization and communication (Ch. 7)

10
1.3 Desktop Systems
  • MS-DOS, Microsoft Windows, Linux, IBM OS/2,
    Macintosh OS
  • Mainframes (MULTICS, MIT) => minicomputers
    (DEC VMS, Bell Labs UNIX) => microcomputers =>
    network computers
  • Personal workstation: a large PC (Sun, HP, IBM;
    Windows NT, UNIX)
  • PCs are mainly single-user systems; resource
    sharing is not needed, but with Internet access,
    security and protection are needed
  • Worms and viruses

11
1.4 Multiprocessor Systems
  • Multiprocessor systems: tightly coupled systems
  • Why? 1) improved throughput, 2) cost savings due
    to resource sharing (peripherals, storage, and
    power), and 3) increased reliability (graceful
    degradation, fault tolerance)
  • Symmetric multiprocessing: each processor runs an
    identical copy of the OS; the processors need to
    communicate with each other
  • Asymmetric multiprocessing: one master control
    processor (master-slave)

12
Multiprocessor Systems
  • Back-ends
  • => microprocessors have become inexpensive
  • => use additional microprocessors to off-load
    some OS functions (e.g., using a microprocessor
    system to control disk management)
  • a kind of master-slave multiprocessing

13
1.5 Distributed Systems
  • Network, TCP/IP, ATM protocols
  • Local-area network (LAN)
  • Wide-area network (WAN)
  • Metropolitan-area network (MAN)
  • Client-server systems (compute-server, file
    server)
  • Peer-to-peer systems (WWW)
  • Network operating systems

14
1.6 Clustered Systems
  • High availability: one node can monitor one or
    more of the others (over the LAN). If the
    monitored node fails, the monitoring machine takes
    ownership of its storage and restarts the
    applications that were running on the failed
    machine.
  • Asymmetric and symmetric modes

15
1.7 Real-Time Systems
  • There are rigid time requirements on the
    operation of a processor or the control/data flow
  • Hard real-time systems: critical tasks must be
    guaranteed to complete on time
  • Soft real-time systems: a critical real-time task
    gets priority over other tasks

16
1.8 Handheld Systems
  • PDAs (personal digital assistants): Palm Pilots
    and cellular phones.
  • Considerations: small memory size, slow processor
    speed, and low power consumption.
  • Web clipping

17
1.9 Feature Migration
  • MULTICS (MULTiplexed Information and Computing
    Services) operating system: MIT -> GE 645
  • UNIX: Bell Labs -> PDP-11
  • Microsoft Windows NT, IBM OS/2, Macintosh OS

18
1.10 Computing Environments
  • Traditional computing: networks, firewalls
  • Web-based computing
  • Embedded computing

19
Ch. 2 Computer-System Structures
(Figure: a typical computer system. The CPU, the
disk/printer/tape-drive controllers, and the memory
controller are connected by a common system bus to
disks, printers, tape drives, and memory.)
20
2.1 Computer-System Operation
  • Bootstrap program
  • Modern OSs are interrupt driven
  • Interrupt vector: the interrupting device's
    address, the interrupt request, and other info
  • System call (e.g., performing an I/O operation)
  • Trap

21
2.2 I/O Structure
  • SCSI (small computer-systems interface): can
    attach up to seven devices
  • Synchronous I/O: I/O requested -> I/O started ->
    I/O completed -> control returned to the user
    program
  • Asynchronous I/O: I/O requested -> I/O started ->
    control returned to the user program without
    waiting for the completion of the I/O operation
  • Device-status table: indicates the device's type,
    address, and state (busy, idle, not functioning)

22
I/O Structure
  • DMA (Direct Memory Access)
  • Data transfer between high-speed I/O devices and
    main memory
  • One interrupt per block transferred (rather than
    CPU intervention of 1 byte/word at a time)
  • Cycle stealing
  • A back-end microprocessor?

23
2.3 Storage Structure
  • Main memory: RAM (SRAM and DRAM)
  • von Neumann architecture: instruction register
  • Memory-mapped I/O, programmed I/O (PIO)
  • Secondary memory
  • Magnetic disks, floppy disks
  • Magnetic tapes

24
2.4 Storage Hierarchy
(Figure: the storage hierarchy)
  • Bridging speed gaps
  • registers > cache > main memory > electronic
    disk > magnetic disk > optical disk > magnetic
    tapes
  • Volatile storage: data are lost when power is off
  • Nonvolatile storage: storage systems below
    electronic disk are nonvolatile
  • Cache: small but fast (cache management: hit
    and miss)
  • Coherency and consistency

25
2.5 Hardware Protection
  • Resource sharing (multiprogramming) improves
    utilization but also increases problems
  • Many programming errors are detected by the
    hardware and reported to the OS (e.g., memory
    fault)
  • Dual-mode operation: user mode and monitor mode
    (also called supervisor, system, or privileged
    mode; privileged instructions), indicated by a
    mode bit.
  • Whenever a trap occurs, the hardware switches
    from user mode to monitor mode

26
Hardware Protection
  • I/O protection: all I/O instructions are
    privileged instructions. The user can only
    perform I/O operations through the OS.
  • Memory protection: protect the OS from access by
    user programs, and protect user programs from
    each other: base and limit registers.
  • CPU protection: a timer prevents a user program
    from getting stuck in an infinite loop.

27
2.6 Network Structure
  • LAN: covers a small geographical area, twisted
    pair and fiber optic cabling, high speed,
    Ethernet.
  • WAN: ARPAnet (academic research), routers,
    modems.

28
Ch. 3 OS Structure
  • Examining the services that an OS provides
  • Examining the interface between the OS and users
  • Disassembling the system into components and
    their interconnections
  • OS components
  • > Process management
  • > Main-memory management
  • > File management
  • > I/O-system management
  • > Secondary-storage management
  • > Networking
  • > Protection system
  • > Command interpreter

29
3.1 System Components: Process Management
  • Process: a program in execution (e.g., a
    compiler, a word-processing program)
  • A process needs certain resources (e.g., CPU,
    memory, files and I/O devices) to complete its
    task. When the process terminates, the OS
    reclaims any reusable resources.
  • OS processes and user processes: the execution of
    each process must be sequential. All the
    processes can potentially execute concurrently,
    by multiplexing the CPU among them.

30
Process Management
  • The OS should perform the following tasks:
  • Creating and deleting processes
  • Suspending and resuming processes
  • Providing mechanisms for process synchronization
  • Providing mechanisms for process communication
  • Providing mechanisms for deadlock handling
  • => Ch. 4 - Ch. 7

31
Main-Memory Management
  • Main memory is a repository of quickly accessible
    data shared by the CPU and I/O devices (it stores
    programs as well as data)
  • Absolute addresses are used to access data in
    main memory
  • Each memory-management scheme requires its own
    hardware support
  • The OS should be responsible for the following
    tasks:
  • > Tracking which parts of memory are currently
    used and by whom
  • > Deciding which processes should be loaded
    into memory
  • > Allocating and deallocating memory as needed

32
File Management
  • Different I/O devices have different
    characteristics (e.g., access speed, capacity,
    access method) - physical properties
  • A file is a collection of related information
    defined by its creator. The OS provides a logical
    view of information storage (the FILE) regardless
    of its physical properties
  • Directories => files (organizer) => access rights
    for multiple users

33
File Management
  • The OS should be responsible for:
  • Creating and deleting files
  • Creating and deleting directories
  • Supporting primitives for manipulating files and
    directories
  • Mapping files onto secondary storage
  • Backing up files on nonvolatile storage
  • => Ch. 11

34
I/O-System Management
  • An OS should hide the peculiarities of specific
    hardware devices from the user
  • The I/O subsystem consists of
  • A memory-management component including
    buffering, caching, and spooling
  • A general device-driver interface
  • Drivers for specific hardware devices

35
Secondary-Storage Management
  • Most modern computer systems use disks as the
    principal on-line storage medium, for both
    programs and data
  • Most programs are stored on a disk and loaded
    into main memory whenever they are needed
  • The OS should be responsible for:
  • > Free-space management
  • > Storage allocation
  • > Disk scheduling
  • => Ch. 13

36
Networking
  • Distributed system: a collection of independent
    processors that are connected through a
    communication network
  • FTP: file transfer protocol
  • WWW: NFS (network file system protocol)
  • http
  • => Ch. 14 - Ch. 17

37
Protection System
  • For a multi-user/multi-process system, process
    executions need to be protected
  • Any mechanisms for controlling the access of
    programs, data, and resources
  • Authorized and unauthorized access and usage

38
Command-Interpreter System
  • OS (kernel) <-> command interpreter (shell) <->
    user
  • Control statements
  • A mouse-based windowing OS
  • Clicking an icon: depending on the mouse pointer's
    location, the OS can invoke a program, select a
    file or a directory (folder).

39
3.2 OS Services
  • Program execution
  • I/O operation
  • File-system manipulation
  • Communications
  • Error detection
  • Resource allocation
  • Accounting
  • Protection

40
3.3 System Calls
  • System calls: the interface between a process and
    the OS
  • Mainly in assembly-language instructions.
  • May also be invoked from higher-level language
    programs (e.g., C and C++ for UNIX)
  • Ex. Copy one file to another: how do we use system
    calls to perform this task? (a sketch follows)
  • Three common ways to pass parameters to the OS:
    registers, a memory block, or the stack
    (push/pop).
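A minimal sketch of the copy example using POSIX
system calls (open, read, write, close); the file
names in.txt and out.txt are hypothetical:

#include <fcntl.h>
#include <unistd.h>

int main(void) {
    char buf[4096];
    ssize_t n;
    int in  = open("in.txt", O_RDONLY);    /* system call: open the source */
    int out = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644); /* create destination */
    if (in < 0 || out < 0)
        return 1;                          /* abnormal termination */
    while ((n = read(in, buf, sizeof buf)) > 0)  /* read a block from the source */
        write(out, buf, n);                /* write the block to the destination */
    close(in);                             /* release both files */
    close(out);
    return 0;                              /* normal termination */
}

Each call above crosses the user/kernel boundary
once; this is exactly the sequence of system calls
the copy example asks about.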

41
System Calls
  • Five major categories
  • Process control
  • File manipulation
  • Device manipulation
  • Information maintenance
  • Communications

42
Process Control
  • End, about
  • gtHalt the execution normally (end) or abnormally
    (abort)
  • gt Core dump file debugger
  • gtError level and possible recovery
  • Load, execute
  • gt When to load/execute? Where to return the
    control after its done?
  • Create/terminate process
  • gt When? (wait time/event)

43
Process Control
  • Get/set process attributes
  • > Core dump file for debugging
  • > A time profile of a program
  • Wait for time, event; signal event
  • Allocate and free memory
  • MS-DOS: a single-tasking system
  • Berkeley UNIX: a multitasking system (using fork
    to start a new process)

44
File Management
  • Create/delete file
  • Open, close
  • Read, write, reposition (e.g., to the end of the
    file)
  • Get/set file attributes

45
Device Management
  • Request/release device
  • Read, write, reposition
  • Get/set device attributes
  • Logically attach and detach devices

46
Information Maintenance
  • Get/set time or date
  • Get/set system data (e.g., OS version, free
    memory space)
  • Get/set process, file, or device attributes
    (e.g., current users and processes)

47
Communications
  • Create, delete communication connection:
    message-passing and shared-memory models
  • Send, receive messages: host name (IP name),
    process name
  • Daemons: source (client) <-> connection <-> the
    receiving daemon (server)
  • Transfer status information
  • Attach or detach remote devices

48
3.4 System Programs
  • OS: a collection of system programs, including
    file management, status information, file
    modification, programming-language support,
    program loading and execution, and
    communications.
  • The OS is supplied with system utilities or
    application programs (e.g., web browsers,
    compilers, word processors)
  • Command interpreter: the most important system
    program
  • > may contain the code to execute the command, or
  • > UNIX: the command names a file; load the file
    into memory and execute it
  • rm G => search for the file rm => load the file =>
    execute it with the parameter G

49
3.5 System Structure(Simple Structure)
FIG3.6
  • MS-DOS: application programs are able to directly
    access the basic I/O routines (the 8088 has no
    dual mode and no hardware protection) => errant
    programs may crash the entire system
  • UNIX: the kernel and the system programs.
  • System calls define the application programmer
    interface (API) to UNIX

FIG3.7
50
Layered Approach
  • Layer 0 (the bottom): the hardware; layer N
    (the top): the user interface
  • The main advantage of the layered approach:
    modularity
  • Pro: simplifies design and implementation
  • Con: not easy to appropriately define the layers;
    less efficient
  • Windows NT: a highly layer-oriented organization
    => lower performance compared to Windows 95 =>
    Windows NT 4.0 moved layers from user space
    to kernel space to improve performance

51
Microkernels
  • Carnegie Mellon Univ. (1980s): Mach
  • Idea: remove all nonessential components from
    the kernel and implement them as system- and
    user-level programs.
  • Main function: the microkernel provides a
    communication facility (message passing) between
    the client program and various services (running
    in user space)
  • Ease of extending the OS: new services are added
    in user space, with no change to the kernel

52
Microkernels
  • Easy to port; more security and reliability (most
    services run as user processes; if a service
    fails, the rest of the OS remains intact)
  • Digital UNIX
  • Apple MacOS Server OS
  • Windows NT: a hybrid structure

FIG 3.10
53
Virtual Machines
  • VM: IBM
  • Each process is provided with a (virtual) copy of
    the underlying computer
  • Major difficulty: disk systems => minidisks
  • Implementation
  • Difficult to implement: must switch between a
    virtual user mode and a virtual monitor mode
  • Less efficient at run time

FIG 3.11
54
Virtual Machines
  • Benefits
  • The environment provides complete protection of
    the various system resources (but no direct
    sharing of resources)
  • A perfect vehicle for OS research and development
  • No system-development time is needed: system
    programmers can work on their own virtual
    machines to develop their systems
  • MS-DOS (Intel) <-> UNIX (Sun)
  • Apple Macintosh (68000) <-> Mac (old 68000)
  • Java

55
Java
  • Java: a technology rather than just a programming
    language (Sun, late 1995)
  • Three essential components:
  • > Programming-language specification
  • > Application-programming interface (API)
  • > Virtual-machine specification

56
Java
  • Programming language
  • An object-oriented, architecture-neutral,
    distributed and multithreaded programming
    language
  • Applets: programs with limited resource access
    that run within a web browser
  • A secure language (running on distributed
    networks)
  • Performs automatic garbage collection

57
Java
  • API
  • Basic language support for graphics, I/O,
    utilities and networking
  • Extended language support for enterprise,
    commerce, security and media
  • Virtual machine
  • JVM: a class loader and a Java interpreter
  • Just-in-time compiler turns the
    architecture-neutral bytecodes into native
    machine language for the host computer

58
Java
  • The Java platform (JVM and Java API) makes it
    possible to develop programs that are
    architecture neutral and portable
  • Java development environment: a compile-time and
    a run-time environment

59
3.8 System Design and Implementation
  • Define the goals and specifications
  • User goals (wish list) and system goals
    (implementation concerns)
  • The separation of policy (what should be done)
    and mechanism (how to do it)
  • Microkernel: implements a basic set of
    policy-free primitive building blocks
  • Traditionally, the OS was implemented in assembly
    language (better performance, but portability is
    a problem)

60
System Design and Implementation
  • High-level language implementation
  • Easier porting, but slower speed and more storage
  • Needs better data structures and algorithms
  • MULTICS (ALGOL); UNIX, OS/2, Windows (C)
  • Non-critical parts (HLL), critical parts
    (assembly language)
  • System generation (SYSGEN): creating an OS for a
    particular machine configuration (e.g., CPU?
    memory? devices? options?)

61
Part II: Process Management. Ch. 4 Processes.
4.1 Process Concept
  • Process (job): a program in execution
  • Ex. On a single-user system (PC), the user can
    run multiple processes (jobs), such as a web
    browser, a word processor, and a CD player,
    simultaneously
  • Two processes may be associated with the same
    program. Ex. You can invoke an editor twice to
    edit two files (two processes) simultaneously

62
Process Concept
  • Process state
  • Each process is in one of 5 states: new,
    running, waiting, ready, and terminated

(Figure: the 5-state process diagram; transitions:
admitted, interrupt, exit, scheduler dispatch, I/O or
event wait, I/O or event completion)
63
Process Concept
FIG 4.2
  • Process Control Block (PCB): represents a process
  • Process state: new, ready, running, waiting or
    exit
  • Program counter: points to the next instruction
    to be executed by the process
  • CPU registers: when an interrupt occurs, this
    data must be saved to allow the process to be
    continued correctly
  • CPU-scheduling information: process priority
    (Ch. 6)
  • Memory-management information: the values of the
    base and limit registers, the page tables...

64
Process Concept
  • Accounting information: account number, process
    number, time limits
  • I/O status information: a list of I/O devices
    allocated to the process, a list of open files.
  • Threads
  • Single thread: a process executes with one
    control/data flow
  • Multi-thread: a process executes with multiple
    control/data flows (e.g., while running an
    editor, a process can execute typing and spelling
    check at the same time)

FIG 4.3
65
4.2 Process Scheduling
  • The objective of multiprogramming: maximize
    CPU utilization (keep the CPU running all the
    time)
  • Scheduling queues
  • Ready queue (usually a linked list): the
    processes that are in main memory and ready
    to be executed
  • Device queue: the list of processes waiting for a
    particular I/O device

FIG 4.4
66
Process Scheduling
  • Queuing diagram

(Figure: queueing diagram; processes cycle between
the ready queue and the CPU, moving to an I/O queue
on an I/O request; other transitions: time slice
expired, fork a child, wait for an interrupt,
interrupt occurs)
67
Process Scheduling
  • Scheduler
  • Long-term scheduler (job scheduler): selects
    processes from a pool and loads them into main
    memory for execution (runs less frequently and
    has more time to make a careful selection
    decision)
  • Short-term scheduler (CPU scheduler): selects
    among ready processes for execution (runs more
    frequently and must be fast)
  • The long-term scheduler controls the degree of
    multiprogramming (the number of processes in
    memory)

68
Process Scheduling
  • I/O-bound process
  • CPU-bound process
  • If all processes are I/O-bound => the ready queue
    is almost always empty => the short-term
    scheduler has nothing to do
  • If all processes are CPU-bound => the I/O-waiting
    queues are almost always empty => devices go
    unused
  • Balanced system performance: a good mix of
    I/O-bound and CPU-bound processes

69
Process Scheduling
FIG 4.6
  • The medium-term scheduler: uses swapping to
    improve the process mix
  • Context switch: switching the CPU to a new
    process => saving the state of the suspended
    process AND loading the saved state of the new
    process
  • Context-switch time is pure overhead and depends
    heavily on hardware support

70
4.3 Operations on Processes
  • Process creation
  • A process may create several new processes:
    parent process => children processes (a tree)
  • Subprocesses may obtain resources from their
    parent (which may overload the parent) or from
    the OS
  • When a process creates a new one, the execution
    options are:
  • 1. The parent and the child run concurrently
  • 2. The parent waits until all of its children
    have terminated

71
4.3 Operations on Processes
  • In terms of the address space of the new process:
  • 1. The child process is a duplicate of the parent
    process
  • 2. The child process has a program loaded into it
  • In UNIX, each process has a process identifier.
    The fork system call creates a new process (which
    consists of a copy of the address space of the
    original process). Advantage? Easy communication
    between the parent and child processes.

72
4.3 Operations on Processes
  • execlp system call (after fork): replaces the
    process's memory space with a new program

pid = fork();
if (pid < 0) {          /* fork failed */
    exit(1);
} else if (pid == 0) {  /* child: overlay with UNIX ls */
    execlp("/bin/ls", "ls", NULL);
} else {                /* parent */
    wait(NULL);         /* wait for the child to complete */
    printf("Child Complete");
    exit(0);
}
73
4.3 Operations on Processes
  • Process termination
  • exit system call: terminates a process
  • Cascading termination: when a process terminates,
    all its children must also be terminated

74
4.4 Cooperating Processes
  • Independent and cooperating processes
  • Any process that shares data with other processes
    is a cooperating process
  • WHY do we need process cooperation?
  • Information sharing
  • Computation speedup (e.g., parallel execution of
    CPU and I/O)
  • Modularity: dividing the system functions into
    separate processes

75
4.4 Cooperating Processes
  • Convenience: for a single user, many tasks can be
    executed at the same time
  • Producer-consumer
  • Unbounded/bounded buffer
  • The shared buffer is implemented as a circular
    array (see the sketch below)
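A minimal sketch of the circular-array buffer,
assuming int items and a single producer and a
single consumer (the usual scheme that keeps one
slot empty to distinguish full from empty):

#define BUFFER_SIZE 10

int buffer[BUFFER_SIZE];   /* the shared circular array */
int in = 0;                /* next free slot (producer) */
int out = 0;               /* next full slot (consumer) */

void produce(int item) {
    while ((in + 1) % BUFFER_SIZE == out)
        ;                  /* buffer full: busy-wait */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
}

int consume(void) {
    while (in == out)
        ;                  /* buffer empty: busy-wait */
    int item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return item;
}

With this convention the buffer holds at most
BUFFER_SIZE - 1 items; Ch. 7 replaces the busy
waiting with proper synchronization.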

76
4.5 Interprocess Communication (IPC)
  • Message-passing system
  • send and receive
  • Fixed or variable size of messages
  • Communication link
  • Direct/indirect communication
  • Symmetric/asymmetric communication
  • Automatic or explicit buffering
  • Send by copy or by reference
  • Fixed or variable-sized messages

77
4.5 Interprocess Communication (IPC)
  • Naming
  • Direct communication (a link between two
    processes)
  • Symmetric addressing: send(P, message),
    receive(Q, message); the names of the recipient
    and sender are explicit
  • Asymmetric addressing: send(P, message),
    receive(id, message); the variable id is set to
    the name of the sender
  • Disadvantage: limited modularity of process
    definitions (all occurrences of the old name must
    be found before it can be changed; not suitable
    for separate compilation)

78
4.5 Interprocess Communication (IPC)
  • Indirect communication
  • Using mailboxes or ports
  • Supports links among multiple processes
  • A mailbox may be owned by a process (when the
    process terminates, the mailbox disappears), or
  • the mailbox is owned by the OS, which must allow
    a process to create a new mailbox, send/receive
    messages via the mailbox, and delete the mailbox

79
4.5 Interprocess Communication (IPC)
  • Synchronization
  • Blocking/nonblocking send and receive
  • Blocking (synchronous), nonblocking (asynchronous)
  • A rendezvous between the sender and receiver when
    both send and receive are blocking
  • Buffering
  • Zero/bounded/unbounded capacity

80
Mach
  • Message based, using ports
  • When a task is created, two mailboxes, the Kernel
    port (kernel communication) and the Notify port
    (notification of event occurrences), are created
  • Three system calls are needed for message
    transfer: msg_send, msg_receive, and msg_rpc
    (Remote Procedure Call)
  • Mailbox: initially an empty queue; FIFO order
  • Message: fixed-length header, variable-length
    data

81
Mach
  • If the mailbox is full, the sender has 4 options:
  • 1. Wait indefinitely until there is free room
  • 2. Wait at most N ms
  • 3. Do not wait; return immediately
  • 4. Temporarily cache the message
  • The receiver must specify the mailbox or the
    mailbox set
  • Mach was designed for distributed systems

82
Windows NT
  • Employs modularity to increase functionality and
    decrease the implementation time for adding new
    features
  • NT supports multiple OS subsystems: message
    passing (called the local procedure-call facility
    (LPC))
  • Uses ports for communication: connection ports
    (by the client) and communication ports (by the
    server)
  • 3 types of message-passing techniques:
  • 1. A 256-byte queue
  • 2. Large messages via shared memory
  • 3. Quick LPC (64K)

83
4.6 Communication in Client-Server Systems
  • Socket: made up of an IP address concatenated
    with a port number
  • Remote procedure calls (RPC)

84
Ch. 5 Threads. 5.1 Overview
FIG 5.1
  • A lightweight process: a basic unit of CPU
    utilization
  • A heavyweight process: a single thread of control
  • Multithreading is common practice: e.g., a web
    browser has one thread displaying text/images and
    another retrieving data from the network
  • When a single application must perform several
    similar tasks (e.g., a web server accepting many
    clients' requests), using threads is more
    efficient than using processes.

85
Benefits
  • 4 main benefits
  • Responsiveness: allows a program to continue
    running even if part of it is blocked or running
    a lengthy operation
  • Resource sharing: memory and code
  • Economy: allocating memory and resources for a
    process is more expensive (in Solaris, creating a
    process is about 30 times slower; context
    switching is about 5 times slower)
  • Utilization of multiprocessor architectures (on
    a single processor, threads run one at a time)

86
User and Kernel Threads
  • User threads
  • Implemented by a thread library at the user level
    that supports thread creation, scheduling and
    management with no kernel support
  • Advantage: fast
  • Disadvantage: if the kernel is single-threaded, a
    blocking system call by any user-level thread
    blocks the entire process
  • POSIX Pthreads, Mach C-threads, Solaris threads

87
User and Kernel Threads
  • Kernel threads
  • Supported by the OS
  • Slower to create and manage than user threads
  • If a thread performs a blocking system call, the
    kernel can schedule another thread in the
    application for execution
  • Windows NT, Solaris, Digital UNIX

88
5.2 Multithreading Models
  • Many-to-one model: many user-level threads to one
    kernel thread
  • Only one user thread can access the kernel at a
    time => threads can't run in parallel on
    multiprocessors
  • One-to-one model
  • More concurrency (allows parallel execution)
  • Overhead: one kernel thread for each user thread
  • Many-to-many
  • The number of kernel threads may be specific to a
    particular application or machine
  • It doesn't suffer the drawbacks of the other two
    models

89
5.3 Threading Issues
  • The fork and exec system calls
  • Cancellation: asynchronous and deferred
  • Signal handling: default and user-defined
  • Thread pools
  • Thread-specific data

90
5.4 Pthreads
  • POSIX standard (IEEE 1003.1c): an API for thread
    creation and synchronization
  • A specification for thread behavior, not an
    implementation (see the sketch below)
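A minimal Pthreads sketch (creation and join only;
the worker function and its message are
illustrative):

#include <pthread.h>
#include <stdio.h>

void *worker(void *arg) {            /* thread start routine */
    printf("hello from thread %d\n", *(int *)arg);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int id = 1;
    pthread_create(&tid, NULL, worker, &id);  /* NULL: default attributes */
    pthread_join(tid, NULL);                  /* wait for the thread to finish */
    return 0;
}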

91
5.5 Solaris Threads
FIG 5.6
  • Until 1992, Solaris supported only a single
    thread of control
  • Now it supports kernel- and user-level threads,
    symmetric multiprocessing, and real-time
    scheduling
  • An intermediate level of threads: user-level
    <-> lightweight processes (LWPs) <-> kernel-level
  • Many-to-many model
  • User-level threads: bound (permanently attached
    to an LWP) or unbound (multiplexed onto the pool
    of available LWPs)

92
Solaris Threads
  • Each LWP is connected to one kernel-level thread,
    whereas each user-level thread is independent of
    the kernel

93
5.6-8 Other Threads
  • Windows 2000
  • Linux
  • Java

94
Ch. 6 CPU Scheduling. 6.1 Basic Concepts
  • The objective of multiprogramming: maximize
    CPU utilization
  • Scheduling: the center of the OS
  • CPU-I/O burst cycle: an I/O-bound program has
    many short CPU bursts; a CPU-bound program has a
    few very long CPU bursts
  • CPU scheduler: the short-term scheduler
  • Queue: FIFO, priority, tree, or a linked list
  • Preemptive scheduling
  • CPU scheduling decisions depend on:

95
Basic Concepts
  • 1. A process switches from running to waiting
  • 2. A process switches from running to ready
  • 3. A process switches from waiting to ready
  • 4. A process terminates
  • When 1 or 4 occurs, a new process must be
    selected for execution, but not necessarily for
    2 and 3
  • A scheduling scheme acting only on 1 and 4 is
    called nonpreemptive or cooperative (once the CPU
    is allocated to a process, the process keeps the
    CPU until it terminates or moves to the waiting
    state)

96
Basic Concepts
  • A preemptive scheduling scheme must consider how
    to swap process execution while maintaining
    correct execution (context switching)
  • Dispatcher: gives control of the CPU to the newly
    selected process
  • Switching context
  • Switching to user mode
  • Jumping to the proper location in the user
    program to restart it
  • Dispatch latency: the time between stopping the
    old process and starting the new one

97
6.2 Scheduling Criteria
  • CPU utilization
  • Throughput: the number of processes completed per
    unit time
  • Turnaround time: from submission of a process to
    its completion
  • Waiting time: the sum of the periods spent
    waiting in the ready queue
  • Response time: for interactive systems
    (minimizing the variance of the response time is
    more important than minimizing the average
    response time)

98
6.3 Scheduling Algorithms
  • Comparison criterion: the average waiting time
    (a worked example follows)
  • FCFS (first-come, first-served)
  • Convoy effect: all other processes wait for one
    big process to get off the CPU
  • The FCFS scheduling algorithm is nonpreemptive
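A worked example with illustrative burst times (not
from the slides): three processes arrive in the
order P1 = 24 ms, P2 = 3 ms, P3 = 3 ms. Under FCFS
the waiting times are 0, 24, and 27 ms, an average
of 17 ms; had the order been P2, P3, P1, the waiting
times would be 0, 3, and 6 ms, an average of 3 ms.
The gap is the convoy effect in numbers.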

99
Scheduling Algorithms
  • SJF (shortest-job-first) scheduling
  • Provably optimal (minimum average waiting time)
  • Difficulty: how to know the length of the next
    CPU burst???
  • Used frequently in long-term scheduling

100
Scheduling Algorithms
  • Predict the next burst with an exponential
    average (formula below)
  • Preemptive SJF: shortest-remaining-time-first
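For completeness, the standard exponential-average
estimate is

    tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n)

where t(n) is the length of the most recent CPU
burst, tau(n) is the previous prediction, and
0 <= alpha <= 1. For example, with alpha = 1/2,
tau(n) = 10 ms, and an observed burst t(n) = 6 ms,
the next prediction is 0.5*6 + 0.5*10 = 8 ms.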

101
Scheduling Algorithms
  • Priority scheduling
  • Priorities can be defined internally (some
    measure such as time limits or memory size) or
    externally (specified by the users)
  • Either preemptive or nonpreemptive
  • Problem: starvation (a low-priority process may
    never be executed)
  • Solution: aging (increase priority over time)

102
Scheduling Algorithms
  • Round-robin (RR) scheduling
  • Suitable for time-sharing systems
  • Time quantum; the ready queue is treated as a
    circular queue of processes
  • The average waiting time is often long
  • The RR scheduling algorithm is preemptive

103
Scheduling Algorithms
  • Performance depends on the size of the time
    quantum: extremely large => FCFS; extremely
    small => processor sharing
  • Rule of thumb: 80% of CPU bursts should be
    shorter than the time quantum
  • Performance also depends on the context-switch
    effect: the time quantum should be large relative
    to the context-switching time
  • Turnaround time also depends on the size of the
    time quantum

104
Scheduling Algorithms
FIG 6.6
  • Multilevel queue scheduling
  • Priority: foreground (interactive) processes >
    background (batch) processes
  • Partitions the ready queue into several separate
    queues
  • Processes are permanently assigned to a queue
    based on some property of the process (e.g.,
    process type, memory size)
  • Each queue has its own scheduling algorithm
  • Scheduling between queues: 1) fixed-priority
    preemptive scheduling, 2) time slicing between
    queues

105
Scheduling Algorithms
  • Multilevel feedback-queue scheduling
  • Allows a process to move between queues
  • The idea is to separate processes with different
    CPU-burst characteristics (e.g., move a process
    using too much CPU time to a lower-priority
    queue)
  • What are the considerations for such decisions?

106
6.4 Multiple-Processor Scheduling
  • Homogeneous: all processors are identical
  • Load sharing among processors
  • Symmetric multiprocessing (SMP): each processor
    is self-scheduling; it examines a common ready
    queue and selects a process to execute (what are
    the main concerns?)
  • Asymmetric multiprocessing: a master server
    handles all scheduling decisions

107
6.5 Real-Time Scheduling
  • Hard real-time: resource reservation (impossible
    with secondary storage or virtual memory)
  • Requires special-purpose software running on
    hardware dedicated to the critical process to
    satisfy the hard real-time constraints
  • Soft real-time: guarantees critical processes
    higher priorities
  • The system must have priority scheduling, and
    real-time processes must have the highest
    priority, which must not degrade over time
  • The dispatch latency must be short. HOW?

108
Real-Time Scheduling
  • Insert preemption points in long-duration system
    calls
  • Make the entire kernel preemptible
  • What if a high-priority process needs to
    read/modify kernel data currently being used by a
    low-priority process? (Priority inversion)
  • Priority-inheritance protocol: the processes
    accessing resources that the high-priority
    process needs inherit the high priority and
    continue running until they complete

109
6.6 Algorithm Evaluation
  • Deterministic modeling: analytic evaluation
    (given predetermined workloads, define the
    performance of each algorithm on them)
  • Queueing models: limited theoretical analysis
  • Simulations: random-number generators may be
    inaccurate due to the assumed distributions
    (defined empirically or mathematically).
    Solution: trace tapes (monitoring the real
    system)
  • Implementation: most accurate, but at a high
    cost.

110
Ch. 7 Process Synchronization. 7.1 Background
  • Why?
  • Threads share a logical address space
  • Processes share data and code
  • They have to wait in line for their turns
  • Race conditions

111
7.2 Critical-Section Problem
  • Critical section: a segment of code in which a
    thread may change the common data
  • A solution to the critical-section problem must
    satisfy:
  • Mutual exclusion
  • Progress
  • Bounded waiting

112
Two-Tasks Solutions
Alg 1: using a turn variable
What's the problem? What if turn == 0 and T0 is in
its non-critical section, and T1 needs to enter the
critical section?
Is the progress requirement met?
113
Two-Tasks Solutions
Alg 1: using a turn and yield()
What's the problem? It does not retain
sufficient information about the state of each
thread (it records only which thread is allowed to
enter the CS). How can this problem be solved?
114
Two-Tasks Solutions
Alg 2: using an array to replace turn
a[0] or a[1] set to 1 indicates that T0 or T1 is
ready to enter the CS
Is mutual exclusion satisfied? Yes. Is progress
satisfied? No. What if both T0 and T1 set
their flags a[0] and a[1] to 1 at the
same time? Both loop forever!!!
115
Two-Tasks Solutions
Alg 3: satisfying all three requirements (a sketch
follows)
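A minimal sketch of the classic two-thread solution
(Peterson's algorithm) in C; on modern hardware it
additionally needs memory barriers, omitted here:

/* i is this thread's id (0 or 1); j = 1 - i is the other thread */
int turn;               /* whose turn it is to enter */
int flag[2] = {0, 0};   /* flag[i] == 1: Ti is ready to enter its CS */

void enter_cs(int i) {
    int j = 1 - i;
    flag[i] = 1;        /* I am ready */
    turn = j;           /* politely let the other go first */
    while (flag[j] && turn == j)
        ;               /* busy-wait while the other is ready and has the turn */
}

void leave_cs(int i) {
    flag[i] = 0;        /* I am no longer in the CS */
}

Mutual exclusion, progress, and bounded waiting all
follow from combining the flag array (Alg 2) with
the turn variable (Alg 1).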
116
7.3 Synchronization Hardware
  • Test-and-set: an indivisible instruction. If two
    test-and-set instructions are executed
    simultaneously, they will be executed
    sequentially in some arbitrary order (flag and
    turn)
  • Swap instruction (see the spinlock sketch below)
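A minimal spinlock sketch built on an indivisible
test-and-set; C11's atomic_flag_test_and_set is used
here as one concrete form of the instruction:

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;    /* clear == unlocked */

void acquire(void) {
    while (atomic_flag_test_and_set(&lock))
        ;                               /* spin until the old value was clear */
}

void release(void) {
    atomic_flag_clear(&lock);           /* unlock */
}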

117
7.4 Semaphores
  • A general method to handle binary or
    multi-party synchronization
  • Two operations, P (test) and V (increment), must
    be executed indivisibly
  • P(S): while S <= 0 do no-op; S--
  • V(S): S++
  • Binary semaphore: 0 and 1
  • Counting semaphore: resource allocation

118
Semaphores
  • Busy waiting wastes CPU resources
  • Spinlock (semaphore): no context switch is
    required while a process waits on the lock
  • One solution: a process that executes a P
    operation and finds the semaphore value below 0
    blocks itself rather than busy-waiting
  • Wakeup operation: wait state => ready state
  • P(S): value--; if (value < 0) { add this process
    to the list; block(); }
  • V(S): value++; if (value <= 0) { remove a process
    P from the list; wakeup(P); } (a POSIX usage
    sketch follows)
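A minimal usage sketch with POSIX semaphores
(sem_wait and sem_post correspond to P and V above;
an initial value of 1 makes it a binary semaphore):

#include <semaphore.h>
#include <stdio.h>

int main(void) {
    sem_t s;
    sem_init(&s, 0, 1);   /* 0: shared among threads; initial value 1 */
    sem_wait(&s);         /* P: decrement, blocking while the value is 0 */
    printf("in the critical section\n");
    sem_post(&s);         /* V: increment, waking a waiter if any */
    sem_destroy(&s);
    return 0;
}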

119
Semaphores
  • If the semaphore value is negative, its magnitude
    is the number of processes waiting on the
    semaphore
  • The waiting list can be implemented as a linked
    list, a FIFO queue (ensuring bounded waiting),
    or???
  • The semaphore operations themselves must be
    treated as a critical section:
  • 1. Uniprocessor: inhibit interrupts
  • 2. Multiprocessor: Alg 3 (SW) or hardware
    instructions

120
Semaphores
  • Deadlock
  • Indefinite blocking or starvation

P0: P(S); P(Q); ... V(S); V(Q);
P1: P(Q); P(S); ... V(Q); V(S);
P1 waits for V(S) from P0; P0 waits for V(Q) from P1
=> deadlock
121
7.5 Classical Synchronization Problems
  • The bounded-buffer problem
  • The readers-writers problem: read-write conflicts
    in a database
  • The dining-philosophers problem

Homework exercises!!!
122
7.6 Critical Regions
  • signal(mutex) .. CS .. wait(mutex)?
  • wait(mutex) .. CS .. wait(mutex)?
  • V: shared T
  • region V when B do S => while statement S is
    being executed, no other process can access the
    variable V (B: a boolean guard; S: a statement)

123
7.7 Monitors
  • Programming mistakes will cause a semaphore to
    malfunction:
  • mutex.V(); criticalsection(); mutex.P()
    => several processes may be executing in their
    CS simultaneously!
  • mutex.P(); CS(); mutex.P()
    => a deadlock will occur
  • If a process misses P(), V(), or both, mutual
    exclusion is violated or a deadlock will occur

124
Monitors
  • A monitor: a set of programmer-defined operations
    with mutual exclusion provided within the
    monitor (the monitor construct prohibits
    concurrent access to the procedures defined
    within it)
  • Condition type: x.wait and x.signal
  • Signal-and-wait: P waits until Q leaves the
    monitor, or waits for another condition
  • Signal-and-continue: Q waits until P leaves the
    monitor, or waits for another condition

125
Ch. 8 Deadlocks. 8.1 System Model
  • Resources: types (e.g., printers, memory) and
    instances (e.g., 5 printers)
  • A process must request a resource before using
    it and must release it afterwards (i.e.,
    request => use => release)
  • request/release device, open/close file,
    allocate/free memory
  • What causes deadlock?

126
8.2 Deadlock Characterization
  • Necessary conditions
  • 1. Mutual exclusion
  • 2. Hold-and-wait
  • 3. No preemption
  • 4. Circular wait
  • Resource-allocation graph
  • Request edge: P -> R
  • Assignment edge: R -> P

127
Deadlock Characterization
  • If each resource has only one instance, then a
    cycle implies that a deadlock has occurred
  • If each resource has several instances, a cycle
    may not imply a deadlock (a cycle is a necessary
    but not a sufficient condition)

P1 -> R1 -> P3 -> R2 -> P1
No deadlock. Why?
P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1
P1, P2, P3 deadlock
128
8.3 Methods for Handling Deadlocks
  • Deadlock prevention
  • Deadlock avoidance (deadlock detection)
  • Deadlock recovery
  • Do nothing: UNIX, JVM (left to the programmer)
  • Deadlocks occur very infrequently (once a year?).
    It's cheaper to do nothing than to implement
    deadlock prevention, avoidance, or recovery

129
8.4 Deadlock Prevention
  • Make sure the four conditions cannot hold
    simultaneously
  • Mutual exclusion: must hold for nonsharable
    resources
  • Hold-and-wait: guarantee that when a process
    requests a resource, it holds no other resources
    (low resource utilization; starvation is
    possible)
  • No preemption: preempt the resources of a process
    that requests a resource it cannot immediately
    get
  • Circular wait: impose a total ordering on all
    resource types, and require processes to request
    resources in increasing order. WHY??? (see the
    sketch below)
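A minimal sketch of why the ordering works, using
two pthread mutexes as the resources (the numbering
is this sketch's assumption): every thread locks the
lower-numbered mutex first, so no thread can hold a
higher-numbered resource while waiting for a
lower-numbered one, and no cycle can form:

#include <pthread.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;   /* resource #1 */
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;   /* resource #2 */

void use_both(void) {
    pthread_mutex_lock(&r1);     /* always #1 before #2, in every thread */
    pthread_mutex_lock(&r2);
    /* ... use both resources ... */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
}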

130
8.5 Deadlock Avoidance
  • Claim edge: a process declares the number of
    resources it may need before requesting them
  • The OS grants the resources to a requesting
    process only IF there is no potential deadlock
    (a safe state)

Unsafe: if R2 is assigned to P2, a cycle forms
131
8.6 Deadlock Detection
  • Wait-for graph
  • Detecting a cycle: O(n^2) => expensive

(Figure: a resource-allocation graph over P1, P2, P3
and R1, R2, R3, and the corresponding wait-for graph
over P1, P2, P3)
132
8.7 Recovery from Deadlock
  • Process termination
  • Abort all deadlocked processes (a great expense)
  • Abort one process at a time until the deadlock
    cycle is eliminated
  • Resource preemption
  • Selection of a victim
  • Rollback
  • Starvation

133
Ch. 9 Memory Management. 9.1 Background
  • Address binding: maps a logical address to a
    physical address
  • Compile time
  • Load time
  • Execution time

FIG 9.1
134
Background
  • Virtual address: logical address space
  • Memory-management unit (MMU): a hardware unit
    that performs run-time mapping from virtual to
    physical addresses
  • Relocation register -- FIG 9.2
  • Dynamic loading: a routine is not loaded until it
    is called (efficient memory usage)
  • Static linking and dynamic linking (shared
    libraries)

135
Background
  • Overlays: keep in memory only the instructions
    and data that are needed at any given time
  • Assume 1) only 150K of memory, and 2) pass 1 and
    pass 2 don't need to be in memory at the same
    time
  • 1. Pass 1: 70K
  • 2. Pass 2: 80K
  • 3. Symbol table: 20K
  • 4. Common routines: 30K
  • 5. Overlay driver: 10K
  • 1+2+3+4+5 = 210K > 150K
  • Overlay 1: 1+3+4+5 = 130K; overlay 2: 2+3+4+5 =
    140K < 150K
  • (FIG9.3)

136
9.2 Swapping
  • Swapping: memory <-> backing store (fast disks)
    (FIG9.4)
  • The main part of swap time is transfer time,
    which is proportional to the amount of memory
    swapped (roughly 1 MB per 200 ms)
  • Constraint on swapping: the process must be
    completely idle, especially with no pending I/O
  • Swap times are too long, so standard swapping is
    used in few systems

137
9.3 Contiguous Memory Allocation
  • Memory: 2 partitions: the system (OS) and users'
    processes
  • Memory protection: OS vs. processes, and user
    processes from one another (FIG9.5)
  • Simplest method: divide memory into a number of
    fixed-sized partitions. The OS keeps a table
    indicating which parts of memory are available
    and which are occupied
  • Dynamic storage allocation: first fit (generally
    fast), best fit, and worst fit

138
Contiguous Memory Allocation
  • External fragmentation: statistical analysis of
    first fit shows that, given N allocated blocks,
    another 0.5N blocks are lost to fragmentation
    (the 50-percent rule)
  • Internal fragmentation: unused space within a
    partition
  • Compaction: one way to solve external
    fragmentation, but only possible if relocation is
    dynamic (WHY?)
  • Other methods: paging and segmentation
  • Other methods paging and segmentation

139
9.4 Paging
  • Paging permits a noncontiguous logical address
    space for a process
  • Frames: the physical memory divided into
    fixed-sized blocks
  • Pages: the logical memory divided into
    fixed-sized blocks
  • Address = (page number, page offset); the page
    number is an index into the page table
  • The page and frame sizes are determined by the
    hardware.
  • FIG9.6, FIG9.7, FIG9.8 (an address-translation
    sketch follows)
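A minimal address-translation sketch, assuming a
32-bit logical address and a 4 KB page size (12-bit
offset); page_table is a hypothetical in-memory
array:

unsigned translate(unsigned addr, const unsigned page_table[]) {
    unsigned page   = addr >> 12;        /* page number indexes the page table */
    unsigned offset = addr & 0xFFF;      /* low 12 bits: offset within the page */
    unsigned frame  = page_table[page];  /* frame number found in the table */
    return (frame << 12) | offset;       /* physical address */
}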

140
Paging
  • No external fragmentation, but internal
    fragmentation still exists
  • To reduce internal fragmentation: smaller pages,
    but this increases the page-table overhead
  • What about on-the-fly page-size support?
  • With paging: user <-> address-translation
    hardware <-> actual physical memory
  • Frame table: the OS needs to know the allocation
    details of physical memory (FIG9.9)

141
Paging
  • Structure of the page table
  • Registers: fast but expensive; suitable for small
    tables (256 entries)
  • Page-table base register (PTBR): points to the
    page table (which resides in main memory);
    suitable for large tables (1M entries), but two
    memory accesses are needed to access one byte
  • Use associative registers, or translation
    look-aside buffers (TLBs), to speed this up
  • Hit ratio => effective memory-access time
    (FIG9.10; a worked example follows)
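A worked example with illustrative numbers: with a
20 ns TLB lookup, a 100 ns memory access, and an 80%
hit ratio,
EAT = 0.80 x (20 + 100) + 0.20 x (20 + 100 + 100)
    = 140 ns,
a 40% slowdown over a bare memory access; raising
the hit ratio to 98% gives 0.98 x 120 + 0.02 x 220 =
122 ns.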

142
Paging
  • Protection
  • Protection bits: one bit to indicate whether a
    page is read-write or read-only
  • Valid-invalid bit: indicates whether the page is
    in the process's logical address space (FIG9.11)
  • Page-table length register (PTLR): indicates the
    size of the page table; a process usually uses
    only a small fraction of the address space
    available to it

143
Paging
  • Multilevel paging
  • Supports large logical address spaces
  • The page table itself may be extremely large:
    with 32-bit addresses and a 4K page size (2^12
    offset), the table has 1M (2^20) entries; at
    4 bytes per entry => a 4 MB page table
  • FIG9.12, FIG9.13
  • How does multilevel paging affect system
    performance? 4-level paging requires 4 page-table
    memory accesses per reference
144
Paging
  • Hashed page tables
  • A common approach for handling large address
    spaces (Fig. 9.14)
  • Clustered page tables: useful for sparse address
    spaces

145
Paging
  • Inverted page table
  • One page-table entry per virtual page => millions
    of entries => consumes a large amount of physical
    memory
  • Inverted page table: a fixed link between each
    entry (index) of the page table and a frame of
    physical memory
  • May need to search the whole page table
    sequentially
  • Use a hash table to speed up this search
  • FIG9.15

146
Paging
  • Shared pages
  • Reentrant code (non-self-modifying code) never
    changes during execution
  • If the code is reentrant, it can be shared
  • FIG9.16
  • Inverted page tables have difficulty implementing
    shared memory. WHY?
  • Sharing needs two virtual addresses mapped to one
    physical address, but an inverted table has only
    one entry per frame

147
9.5 Segmentation
  • Segments are variable-sized (pages are
    fixed-sized)
  • Each segment has a name and a length
  • Segment table: base (starting physical address)
    and limit (length of the segment)
  • FIG9.18, 9.19
  • Advantages:
  • 1. Association with protection (HOW?): the
    memory-mapping hardware checks the protection
    bits associated with each segment-table entry
  • 2. Permits the sharing of code or data (FIG9.19);
    need to search for the shared segment's number
148
Segmentation
  • Fragmentation
  • May cause external fragmentation when all blocks
    of free memory are too small to accommodate a
    segment
  • What's a suitable segment size?
  • One segment per process <-> one segment per byte

149
9.6 Segmentation with Paging
  • Local descriptor table (LDT): private to the
    process
  • Global descriptor table (GDT): shared among all
    processes
  • Linear address
  • FIG9.21

150
Ch. 10 Virtual Memory. 10.1 Background
  • Virtual memory: execution of processes that may
    not be completely in memory
  • Program size > physical memory size
  • Virtual space: programmers can assume they have
    unlimited memory for their programs
  • Increases memory utilization and throughput:
    many programs can reside in memory and run at
    the same time
  • Less I/O is needed to swap user programs into
    memory => they run faster
  • Demand paging and demand segmentation (more
    complex due to varied segment sizes)

FIG10.1
151
10.2 Demand Paging
  • Lazy swapper: never swaps a page into memory
    unless it is needed
  • Valid-invalid bit: indicates whether the page is
    in memory or not (FIG10.3)
  • Handling a page fault (FIG10.4).
  • Pure demand paging: never bring a page into
    memory until it is required (execution proceeds
    one page at a time)
  • One instruction may cause multiple page faults (1
    page for the instruction and several for data);
    not so bad in practice because of locality of
    reference!

152
Demand Paging
  • Ex. A three-address instruction C = A + B: 1)
    fetch the instruction, 2) fetch A, 3) fetch B, 4)
    add A and B, and 5) store the sum in C. The worst
    case: 4 page faults
  • The hardware for supporting demand paging: the
    page table and secondary memory (disks)
  • Page-fault service: 1) service the interrupt, 2)
    read in the page, and 3) restart the process
  • Effective access time (EAT)
  • ma: memory-access time (10-200 ns)
  • p: the probability of a page fault (0 <= p <= 1)
  • EAT = (1 - p) x ma + p x page-fault time
  •     = 100 + 24,999,900p ns (ma = 100 ns,
    page-fault time = 25 ms)
  • For under 10% degradation: 100 + 24,999,900p <
    110 => p < 0.0000004, i.e., at most 1 fault per
    2,500,000 memory accesses

Disk: 8 ms average latency, 15 ms seek, 1 ms
transfer
153
10.3 Page Replacement
  • Over-allocating: increases the degree of
    multiprogramming
  • Page replacement: 1) find the desired page on
    disk; 2) find a free frame - if there is one, use
    it; otherwise, select a victim via a
    page-replacement algorithm, write the victim page
    to disk, and update the page/frame tables; 3)
    read the desired page into the free frame; 4)
    restart the process
  • Modify (dirty) bit: reduces the overhead; only if
    the page is dirty (meaning it has been changed)
    must it be written back to the disk.

154
Page Replacement
  • Need a frame-allocation and a page-replacement
    algorithm: lowest page-fault rate
  • Reference string: page faults vs. number of
    frames analysis
  • FIFO page replacement
  • Simple but not always good (FIG10.8)
  • Belady's anomaly: page faults may increase as the
    number of frames increases!!! (FIG10.9)

155
Page Replacement
  • Optimal page replacement
  • Replace the page that will not be used for the
    longest period of time (FIG10.10)
  • Has the lowest page-fault rate for a fixed number
    of frames (the optimum solution)
  • Difficult to implement. WHY? => it needs to
    predict the future usage of the pages!
  • Can be used as a reference point!

156
Page Replacement
  • LRU page replacement
  • Replace the page that has not been used for the
    longest period of time (FIG10.11)
  • The results are usually good
  • How to implement it? 1) counters or 2) a stack
    (FIG10.12)
  • Stack algorithms (such as LRU) do not suffer from
    Belady's anomaly

157
Page Replacement
  • LRU-approximation page replacement
  • Reference bit: set by hardware; indicates whether
    the page has been referenced
  • Additional-reference-bits algorithm: at regular
    intervals, the OS shifts the reference bit into
    the MSB of an 8-bit byte (11000000 has been used
    more recently than 01011111)
  • Second-chance algorithm: if ref-bit = 1, give the
    page a second chance and reset the ref-bit; uses
    a circular queue to implement it (FIG10.13; see
    the sketch below)
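A minimal sketch of second-chance (clock) victim
selection; frames are plain indices and ref[] stands
for the hardware reference bits (both are this
sketch's assumptions):

int choose_victim(int ref[], int nframes) {
    static int hand = 0;             /* the clock hand over the circular queue */
    for (;;) {
        if (ref[hand] == 0) {        /* no second chance left: evict this frame */
            int victim = hand;
            hand = (hand + 1) % nframes;
            return victim;
        }
        ref[hand] = 0;               /* give a second chance, clear the bit */
        hand = (hand + 1) % nframes;
    }
}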

158
Page Replacement
  • Enhanced second-chance algorithm: (reference,
    modify) pairs:
  • (0,0): neither recently used nor modified - the
    best page to replace
  • (0,1): not recently used but modified - needs a
    write-back
  • (1,0): recently used but clean - probably will be
    used again
  • (1,1): recently used and modified
  • We may have to scan the circular queue several
    times before we find a page to replace

159
Page Replacement
  • Counting-based page replacement
  • The least frequently used (LFU) page-replacement
    algorithm
  • The most frequently used (MFU) page-replacement
    algorithm
  • Page-buffering algorithm
  • Keep a pool of free frames: the desired page can
    be read into a free frame before the victim is
    written out

160
10.4 Allocation of Frames
  • How many free frames should each process get?
  • Minimum number of frames
  • It depends on the instruction-set architecture:
    we must have enough frames to hold all the pages
    that any single instruction can reference
  • It also depends on the computer architecture;
    e.g., on the PDP-11 some instructions are longer
    than 1 word (an instruction may straddle 2
    pages), and its 2 operands may be indirect
    references (4 more pages) => 6 frames are needed
  • Indirect addressing may cause problems (we can
    limit the levels of indirection, e.g., to 16)

161
Allocation of Frames
  • Allocation algorithms
  • Equal allocation
  • Proportional allocation: allocating memory to
    each process according to its size
  • Global allocation: allows high-priority processes
    to select frames from low-priority processes
    (problem? a process cannot control its own
    page-fault rate)
  • Local allocation: each process selects from its
    own set of frames
  • Which one is better? Global allocation: higher
    throughput

162
10.5 Thrashing
  • Thrashing: high paging activity (a severe
    performance problem)
  • A process is thrashing if it is spending more
    time paging than executing
  • The CPU scheduler: decreasing CPU utilization =>
    increases the degree of multiprogramming => more
    page faults => getting worse and worse (FIG10.14)
  • Preventing thrashing: we must provide a process
    with as many frames as it needs
  • Locality: a process executes from locality to
    locality

163
Trashing
  • Suppose we allocate enough frames to a process to
    accommodate its current locality. It will not
    fault until it changes its localities
  • Working-set model gt locality
  • Working-set the most active-used pages within
    the working-set window (period) (FIG10.16)
  • The accuracy of the working set depends on the
    selection of the working-s