operating system: controls and coordinates use of hardware among various applications and users
Application programs: define the ways in which the system resources are used to solve the computing problems of the users
User: people, machines, other computers
User view
cares about:
convenience
ease of use
good performance
doesn't care about:
resource utilization
System view
OS is a resource allocator
Manages all resources
decides between conflicting requests for efficient and fair resource use
OS is a control program
Controls execution of programs to prevent errors and improper use of the computer
Computer-System organization
Von Neumann Model
an electronic digital computer with parts consisting of :
a processing unit containing an arithmetic logic unit and processor registers
a control unit containing an instruction register and program counter
a memory to store both data and instructions
external mass storage
input and output mechanisms
Common Functions of Interrupts
an interrupt is an input signal to the processor indicating an event that needs immediate attention
an interrupt transfers control to the interrupt service routine, generally through the interrupt vector, which contains the addresses of all the service routines
Interrupt architecture must save the address of the interrupted instruction
A trap or exception is a software-generated interrupt caused either by an error or a user request
An operating system is interrupt driven
Interrupt handling
the operating system preserves the state of the CPU by storing registers and the program counter
determines which type of interrupt has occurred:
polling
vectored interrupt system
separate segments of code determine what action should be taken for each type of interrupt
Storage structure
Main memory: only large storage media that the CPU can access directly
random access
typically volatile
Secondary storage: extension of main memory that provides large nonvolatile storage capacity
hard disks: rigid metal or glass platter covered with magnetic recording material
disk surface is logically divided into tracks, which are subdivided into sectors
the disk controller determines the logical interaction between the device and the computer
Solid state disk: faster than hard disks, nonvolatile
various technologies
becoming more popular
IO structure
No interrupts: after IO starts, control returns to user program only upon IO completion
wait instruction idles the CPU
Wait loop
At most one IO request is outstanding at a time, no simultaneous IO processing
Interrupt Driven: after IO starts, control return to user program without waiting for IO completion
system call: request to the OS to allow user to wait for IO completion
Device status table contains entry for each IO device indicating its type, address, and state
OS indexes into the IO device table to determine device status and to modify the table entry to include the interrupt
Device Driver for each device controller to manage IO. Provides uniform interface between controller and kernel
Direct Memory Access (DMA)
Used for high speed IO devices able to transmit information at close to memory speed
Device controller transfers blocks of data from buffer storage directly to main memory without CPU intervention
Only one interrupt is generated per block, rather than one interrupt per byte
Computer System Architecture
Multiprocessor systems growing in use and importance
two types
Asymmetric multiprocessing: each processor is assigned a specific task
symmetric multiprocessing: each processor performs all tasks
Dual core design
Cluster systems
like multiprocessor system, but multiple systems working together
Loosely coupled systems:
usually sharing storage via a storage area network (SAN)
provides a high availability service which survives failures
Asymmetric clustering has one machine in hot standby mode
Symmetric clustering has multiple nodes running applications, monitoring each other
Some clusters are for high performance computing
Applications must be written to use parallelization
Operating System Structure
multiprogramming
single user cannot keep CPU and IO devices busy at all times
Multiprogramming organizes jobs so CPU always has one to execute
A subset of total jobs in system is kept in memory
One job is selected and run via job scheduling (long-term scheduling)
when it has to wait, OS switches to another job
Timesharing: is logical extension in which CPU switches jobs so frequently that users can interact with each job while it is running, creating interactive computing
Interrupt driven
hardware interrupt by one of the devices
software interrupt
Software error
request for operating system service
other process problems include infinite loop, processes modifying each other or the operating system
Dual mode
operation allows OS to protect itself and other system components
user mode and kernel mode
Mode bit provided by hardware
provides ability to distinguish when system is running user code or kernel code
some instructions designated as privileged, only executable in kernel mode
system call changes mode to kernel; return from the call resets it to user
Increasingly, CPUs support multi-mode operation
Virtual machine manager(VMM) mode for guest VMs
Timer
Timer to prevent infinite loop / process hogging resources
Timer is set to interrupt the computer after some time period
keep a counter that is decremented by the physical clock
the operating system sets the counter
when the counter reaches zero, an interrupt is generated
set up before scheduling a process, to regain control or terminate a program that exceeds its allowed time
System boot
System call
system call
programming interface to the services provided by the OS
mostly accessed by programs via a high-level API rather than direct system call use
Process management
Process
A process is a program in execution. It is a unit of work within the system. Program is a passive entity, process is an active entity
Process needs resources to accomplish its task
CPU, memory, IO, files
initialization data
Process termination requires reclaim of any reusable resources
single threaded process has one program counter specifying location of next instruction to execute
Multi-threaded process has one program counter per thread
Typically system has many processes, some user, some operating system running concurrently on one or more CPUs
Process Management Activities
scheduling processes and threads on the CPUs
Creating and deleting both user and system processes
Suspending and resuming processes
Providing mechanisms for process synchronization
Providing mechanisms for process communication
Memory Management
to execute a program, all of the instructions must be in memory
All of the data that is needed by the program must be in memory
memory management determines what is in memory and when
Optimizing CPU utilization and computer response to users
Memory management activities:
Keeping track of which parts of memory are currently being used and by whom
Deciding which processes and data to move into and out of memory
allocating and deallocating memory space as needed
Storage Management
OS provides a uniform, logical view of information storage
Abstracts physical properties to logical storage unit: file
each medium is controlled by a device
varying properties include access speed, capacity, data-transfer rate, access method
File system management
Files usually organized into directories
access control on most systems to determine who can access what
OS activities include
Creating and deleting files and directories
primitives to manipulate files and directories
mapping files onto secondary storage
backup files onto stable storage media
Mass storage management
Usually disks are used to store data that does not fit in main memory or data that must be kept for a “long” period of time
proper management is of central importance
entire speed of computer operation hinges on disk subsystem and its algorithms
OS activities
Free space management
storage allocation
disk scheduling
Some storage need not be fast
Tertiary storage includes optical storage, magnetic tape
still must be managed by OS or applications
Varies between WORM (write-once, read-many-times) and RW (read-write)
Caching
important principle, performed at many levels in a computer
Information in use copied from slower to faster storage temporarily
faster storage checked first to determine if information is here
if cache hit: information used directly from cache
if cache miss: data copied to cache and used there
Cache smaller than storage being cached
cache management important design problem
cache size and replacement policy
Multitasking issue
Multitasking environments must be careful to use most recent value, no matter where it is stored in the storage hierarchy
multiprocessor environment must provide cache coherency in hardware such that all CPUs have the most recent value in their cache
Distributed environment situation even more complex
IO subsystem
One purpose of OS is to hide peculiarities of hardware devices from the user
IO subsystem consists of
Memory management of IO, including buffering (storing data temporarily while it is being transferred), caching (storing parts of data in faster storage for performance), and spooling (overlapping the output of one job with the input of other jobs)
general device driver interface
drivers for specific hardware devices
Conclusion
key ideas:
Virtualization: the OS takes a physical resource and transforms it into a more general, powerful and easy-to-use virtual form of itself
Concurrency: many problems arise, and must be addressed when working on many things at once in the same program
Persistence: the OS makes information persist, despite computer crashes, disk failures, or power outages
Protection: any mechanism for controlling access of processes or users to resources defined by the OS
security: defense of the system against internal and external attacks
Systems generally first distinguish among users, to determine who can do what
User identities include name and associated number, one per user
User ID then associated with all files, processes of that user to determine access control
Group identifier allows set of users to be defined and controls managed, then also associated with each process, file
privilege escalation allows user to change effective ID with more rights
Processes
a program is a passive entity stored on disk, but a process is active
One program can be many processes // consider multiuser program
transfer from program to process: loading, paging and swapping, allocation of stack and heap, IO initialization
context switch
background
OS switching processes via timer interrupt
Context switch: saving and restoring context
process state transition
new: the process is being created
running: instructions are being executed
Waiting: the process is waiting for some event to occur
ready: the process is waiting to be assigned to a processor
Terminated: the process has finished execution
Process Control Block (PCB)
a data structure that records the machine state of the running program
operation on process
fork(): system call that creates a new process
exec(): system call used after fork() to replace the process's memory space with a new program
interprocess communication
Shared memory
an area of memory shared among the processes that wish to communicate
communication is controlled by the user processes, not the operating system
Message passing
synchronous: blocking send/receive
asynchronous: non-blocking send/receive
Communication in client-server system
Pipes
use the pipe() system call
ordinary pipes: cannot be accessed from outside the creating process; usually used between a parent process and its child
Named pipes: can be accessed without a parent-child relationship
Threads
overview
threads
definition
a fundamental unit of CPU utilization that forms the basis of a multithreaded computer system
multithreading models
Kernel/user threads
user thread
management done by a user-level thread library
kernel thread
supported by the kernel
diagram
Motivating kernel/user thread
advantage of user thread (disadvantage of kernel thread)
user threads are more efficient (operations require no kernel intervention); kernel threads are less efficient
disadvantage of user thread (advantage of kernel thread)
with only user threads, if one thread blocks, all other threads of the same process are also blocked
the kernel can schedule thread on different processors
multithreading models
Many to one
one blocking thread causes all threads to block
multiple threads may not run in parallel on a multicore system
used only on systems that don't support kernel threads
one to one
more concurrent than many to one
number of threads per process sometimes restricted due to overhead
many to many
allows the operating system to create a sufficient number of kernel threads
two level model
similar to many-to-many
additionally allows a user thread to be bound to a kernel thread
Thread library
Pthread APIs
Create thread API
int pthread_create(pthread_t *thread, const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg);
// thread: receives the new thread's handle
// attr: thread attributes (NULL for defaults)
// start_routine: the function the new thread runs
// arg: the argument passed to start_routine
thread completion
int pthread_join(pthread_t thread, void **value_ptr);
// waits for the given thread to complete
// the thread's return value is stored in *value_ptr
CPU scheduling
Basic concepts
Scheduling timing
CPU scheduling decisions may take place when a process:
switches from the running to the waiting state
switches from the running to the ready state
switches from the waiting to the ready state
terminates
Schedule criteria
schedule criteria
CPU utilization: keep the CPU as busy as possible
throughput: how many processes complete their execution per time unit
waiting time: amount of time a process has been waiting in the ready queue
response time: amount of time from when a request is submitted until the first response is produced
Fairness
Scheduling algorithm
1. First Come First Serve (FCFS)
first come first serve
Shortest Job First (SJF)
the shortest job first then the next shortest
Highest Response Ratio Next (HRRN)
the next job is not the one with the shortest estimated run time, but the one with the highest response ratio
Response\ Ratio = 1 + \frac{waiting\ time}{estimated\ run\ time}
Preemptive SJF
compared with SJF: if A's burst is shorter than B's remaining time, A preempts B even though A arrived later
Priority
basic idea
the higher the priority, the earlier the process is executed
shortcomings and solutions
starvation
definition: low-priority processes may never execute
Solution: aging (as time progresses, increase the priority of waiting processes)
Round Robin (RR)
runs a job for a time slice and then switches to the next job in the ready queue
Multilevel queue
ready queue is partitioned into separate queues
each process is permanently assigned to a given queue
each queue has its own scheduling algorithm
scheduling must also be done between queues
Multilevel feedback queue
if priority(A) > priority(B), A runs and B doesn't
if priority(A) = priority(B), A and B run in RR
When a job enters the system, it is placed at the highest priority
If a job uses up an entire time slice, its priority is reduced
if a job gives up the CPU before the time slice is up, it stays at the same priority (a refined version drops this rule so the scheduler can't be gamed)
After some time period S, move all the jobs in the system to the topmost queue
Gantt chart
be able to draw one for a given schedule
Process synchronization
The critical section problem
critical section
a process may be changing common variables, updating a table, writing files, etc.
when one process in critical section, no other may be in critical section
solution to the critical-section problem (three requirements)
Mutual Exclusion: if process P_i is executing in its critical section, then no other process can be executing in its critical section
Progress: If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely
Bounded waiting: a bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted
Assume that each process executes at a nonzero speed
no assumption concerning relative speed of the n processes
Peterson’s solution
Peterson’s solution
Synchronization hardware
test and set
instruction
example
compare and swap
definition
implementation
solution using test and set
Semaphore
semaphore
basic idea
Classical problems of synchronization
the bounded-buffer problem
problem
Solution
the reader-writer problem
problem
a number of processes perform reads and writes
no reader should be kept waiting unless a writer has already obtained permission to use the shared object
solution1 (which might cause the writer starve)
solution: write first
the dining-philosopher problem
problem
solution 1, 2
solution 3
a philosopher may pick up chopsticks only if both are available, acquiring both at once
monitors
condition variables
problem
spin lock
condition variables
monitors
the monitor class guarantees that only one thread can be active within the monitor at a time
Deadlock
system model
System consists of resource types R_1, R_2, R_3 \dots
Each resource type R_i has W_i instances
Deadlock characterization
Four necessary condition
Mutual exclusion: only one process at a time can use a resource
Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes
No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task
Circular wait: there exists a set {P_0, P_1 \dots P_{n-1}} of waiting processes such that P_0 is waiting for a resource held by P_1, P_1 is waiting for a resource held by P_2, \dots, and P_{n-1} is waiting for a resource held by P_0
Resource-allocation graph
use vertices to represent processes and resources; an edge from a process to a resource represents a request
an edge from a resource to a process represents an assignment
use cycles to determine whether there is a deadlock:
if the graph contains no cycle: no deadlock
if it contains a cycle and each resource type has only one instance: deadlock
if it contains a cycle and some resource types have multiple instances: possibility of deadlock
require a process to request and be allocated all its resources before it begins execution
no preemption
lock(L1);
if (trylock(L2) == FAIL) {   // L2 is held: preempt ourselves
    unlock(L1);               // release L1 and retry later
}
Mutual exclusion
for resources accessed concurrently, mutual exclusion is generally unavoidable
deadlock avoidance
safe state
When a process requests an available resource, system must decide if immediate allocation leaves the system in safe state
a system is in a safe state if there exists a sequence <P_1, P_2 \dots P_n> of all the processes in the system such that for each P_i, the resources that P_i can still request can be satisfied by the currently available resources plus the resources held by all P_j with j < i
Resource-allocation graph alg.
a claim edge converts to a request edge when a process requests the resource
request edge converted to an assignment edge when the resource is allocated to the process
when a resource is released by a process assignment edge reconverts to a claim edge
resources must be claimed a priori in the system
the request can be granted only if converting the request edge to an assignment edge does not result in the formation of a cycle in the resource allocation graph
the banker alg.
assumption
each process must claim its maximum resource use a priori
when a process requests a resource it may have to wait
when a process gets all its resources it must return them in finite amount of time
safety alg.
find a safe sequence
Resource-request alg.
if the resulting state is safe, the request is granted; otherwise the process must wait
Deadlock detection
Wait-for graph alg.
search for a cycle in the wait-for graph; if it contains a cycle, there is a deadlock, else no deadlock
detection alg.
if all edges can be reduced, then there is no deadlock
Deadlock recovery
select a victim
rollback
starvation
Main memory
Background
Program loading and linking
program must be brought into memory and placed within a process for it to be run
Main memory and registers are the only storage the CPU can access directly
Memory only sees address + read(data) requests
Register access takes one CPU clock
Main memory access can take many cycles, causing a stall
Cache sits between main memory and CPU register
address binding
Source code addresses are usually symbolic
Compiled code binds to relocatable addresses
A linker or loader binds relocatable addresses to absolute addresses
each binding maps one address space to another
Logical/physical address space
logical address
generated by CPU
also referred to as virtual address
physical address
address seen by the memory unit
relationship
logical and physical addresses are identical under compile-time and load-time address-binding schemes
logical and physical addresses differ under the execution-time address-binding scheme
swapping
swapping
page tables may still be too big
some systems place such page tables in kernel virtual memory, allowing the system to swap parts of the page table to disk
contiguous allocation
Base/limit
physical address = virtual address + base address
address translation
free space management
best fit
worst fit
first fit
next fit
Segmentation
segmentation
have a base and bounds pair per logical segment of the address space, instead of having just one base and bounds pair in the MMU
paging
paging
segmentation
chop up the address space into variable-sized chunks
the space itself can become fragmented
paging
chop up the space into fixed-sized pieces
the physical memory can be viewed as an array of fixed-sized slots called page frames; each frame can contain a single virtual-memory page
Translation Lookaside Buffer (TLB)
part of the chip's memory-management unit (MMU)
simply a hardware cache of popular virtual-to-physical address translations
Page size issues
1. consideration
fragmentation
table size/ page fault
I/O over head
locality
Structure page table
paging and segments
one page table per logical segment
hierarchical page tables
basic idea
chop up the page table into page-size units
If an entire page of page-table entries is invalid, don’t allocate that page of the page table at all
a page directory is used to track whether a page of the page table is valid; it tells where a page of the page table is, or that the entire page of the page table contains no valid entries
page directory table
Virtual Memory
Background
Virtual memory
shared library using virtual memory
Demand paging
bring a page into memory only when it is needed
similar to paging system with swapping
lazy swapper: never swaps a page into memory unless page will be needed
Features
instead of bringing the entire process into memory at load time,
bring a page into memory only when it is needed
less I/O needed
less memory needed
faster response
more users
basic concepts
With demand paging, the pager guesses which pages will be used before the process is swapped out again
the pager brings only those pages into memory
if the pages needed are already memory resident:
no different from non-demand paging
if a page is needed and not memory resident:
Valid-invalid bit
indicates whether the page is in memory (v) or not (i)
initially set to invalid on all entries
during MMU translation, if the valid-invalid bit in the page table entry is i → page fault
Page fault
if there is a reference to a page, first reference to that page will trap to operating system
operating system looks at an internal table to decide:
invalid reference: abort
just not in memory:
find a free frame
swap the page into the frame via a scheduled disk operation
reset the table to indicate the page is now in memory (set valid bit = v)
restart the instruction that caused the page fault
Handling
effective access time
p is the page fault rate, 0 \le p \le 1
EAT\text{ (effective access time)} = (1-p)\,T_{memory\ access} + p \times (T_{page\ fault\ overhead} + T_{swap\ page\ out} + T_{swap\ page\ in})
aspect of demand paging
Extreme cases
start a process with no pages in memory
OS sets the instruction pointer to the first instruction of the process, which is non-memory-resident → page fault
and likewise for every other page of the process on first access
pure demand paging
actually
a given instruction could access multiple pages ⇒ multiple page faults
consider fetch and decode of an instruction which adds 2 numbers from memory and stores the result back to memory
pain is decreased because of locality of reference
Hardware support
page table with valid-invalid bit
secondary memory(swap device with swap space)
instruction restart
Performance of demand page
stages in demand paging
Trap to the operating system
save the user registers and process state
determine that the interrupt was a page fault
Check that the page reference was legal and determine the location of the page on the disk
issue a disk read from the disk to a free frame
wait in a queue for this device until the read request is serviced
wait for the device seek and/or latency time
begin the transfer of the page to a free frame
while waiting, allocate the CPU to some other user
Receive an interrupt from the disk
save the registers and process state for the other user
Determine that the interrupt was from the disk
correct the page table and other tables to show the page is now in memory
wait for the CPU to be allocated to this process again
restore the user registers, process state, and new page table, and then resume the interrupted instruction
Besides the context switch
three major activities
service the interrupt
read the page
restart the process
Page fault rate
0 \le p \le 1
if p = 0, no page fault
if p = 1, every reference is fault
3. Effective Access time
demand paging optimization
swap I/O is faster than file system I/O
Copy the entire process image to swap space at process load time
demand-page in from the program binary on disk, but discard rather than page out when freeing a frame
Mobile systems: typically don't support swapping; instead they reclaim read-only pages
Inverted page table
one page-table entry per physical frame, rather than per virtual page
Copy on write
Allows both parent and child processes to initially share the same pages in memory
COW allows more efficient process creation, as only modified pages are copied
In general, free pages are allocated from a pool of zero-fill-on-demand pages
vfork() lets the child share the parent's address space without copying, unlike fork(); it is intended for an immediate exec()
Diagram
page replacement
1. what happens when there is no free frame?
memory is used up by process pages
also in demand from the kernel, I/O buffers, etc.
Page replacement: find some page in memory that is not really in use and page it out
prevent over-allocation of memory by modifying the page-fault service routine to include page replacement
Use a modify (dirty) bit to reduce the overhead of page transfers: only modified pages are written back to disk
page replacement completes the separation between logical memory and physical memory
need for page replacement
3. Basic page replacement
find the location of the desired page on disk
find a free frame
if there is a free frame, use it
if there is no free frame, use a page replacement algorithm to select a victim frame
write the victim frame to disk if dirty
bring the desired page into the free frame; update the page and frame tables
continue the process by restarting the instruction that caused the trap
page and frame replacement algorithm
frame allocation algorithm determines
how many frames to give each process
which frames to replace
page replacement algorithm
Want lowest page-fault rate on both first access and re-access
Evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults
the string is just page numbers, not full addresses
repeated access to the same page doesn't cause a page fault
results depend on the number of frames available
optimal algorithm
basic idea
replace page that will not be used for longest period of time
First in first out
How to track
use a FIFO queue
Least recently used(LRU) algorithm
Basic idea
use past history
replace the page that has not been used for the longest amount of time
Implementation
every page entry has a counter
when a page needs to be replaced, look at the counters to find the least recently used page
only write back the dirty ones
possibly keep some free frames
Advantage and disadvantage
It needs special hardware
Optimize
add additional reference bits
LRU approximation algorithm
basic idea
generally FIFO, plus a hardware-provided reference bit
if the reference bit = 0, replace the page; otherwise clear the bit to 0 and check the next frame
basic idea of reference bit
with each page associate a reference bit
when page is referenced, set the bit to 1
replace a page whose reference bit = 0, if one exists
basic idea of additional reference bit
use 8 bits per page instead of 1 bit, recording reference history over time
replace the page with the lowest value, interpreted as an unsigned integer
basic idea of second chance algorithm
generally FIFO, plus hardware-provided reference bit
if reference bit = 0, then replace it
if reference bit = 1, then set the reference bit to 0 and subject the same rule to the next page
Enhanced second-chance algorithm
Basic idea
Improve algorithm by using reference bit and modify bit
Most frequently used(MFU)
compare to LRU
Page buffering algorithm
Keep a pool of free frames
keep a list of modified frames
Advantage
a free frame is always available, so the faulting process need not wait for a victim to be written out
Random algorithm
simply pick a random page to replace
workload example
no locality workload
80-20 locality workload: 80% of references go to 20% of the pages
loop workload
allocation of frames
equal allocation
if there are 100 frames and 5 processes, each process gets 20 frames
proportional allocation
allocation according to the size of the process:
m=64,\ s_1=10,\ s_2=20,\ m_1=\frac{10}{30}\times64,\ m_2=\frac{20}{30}\times64
priority allocation
use a proportional scheme based on priority rather than size
global/local replacement
global replacement: a process selects a replacement frame from the set of all frames; one process can take a frame from another
local replacement: each process selects from only its own set of allocated frames
Thrashing
thrashing
a process is busy swapping pages in and out
Working-set model
If the total working-set size exceeds the available frames, then thrashing
Policy: swap out or suspend one of the processes
Page fault frequency
page fault frequency is more direct
if the rate is too low, the process loses frames // it holds more than it needs, which is bad for multiprogramming
if the rate is too high, the process gains frames // it is thrashing
Mass storage system
Objectives
Disk structure
Disk interface
the atomic unit is a 512-byte sector
most programs read 4 KB or more at once
disk io
T_{I/O}=T_{seek}+T_{rotation}+T_{transfer}
R_{I/O}=\frac{Size}{T_{I/O}}
Seek time
time of arm move to the correct track
\bar{T}_{seek}\approx\frac{1}{3}\bar{T}_{full\ seek}
Track skew
to ensure sequential reads can continue across track boundaries despite head-switch time
some other detail
Multi zone
Cache
disk scheduling
first come first serve
first come first serve
shortest seek time first
shortest seek time first
SCAN algorithm
the head starts at one end of the disk, and move towards the other end
C-SCAN
the head moves from one end of the disk to the other, serving requests as it goes; when it reaches the other end, it immediately returns to the beginning of the disk without serving any requests on the return trip
LOOK
like SCAN, but the head reverses direction when it reaches the last request in the current direction
C-LOOK
like LOOK in that the head only travels as far as the last request in each direction, but like C-SCAN in that it serves no requests on the return trip
File System
File-system Interface
inode number
the unique number (index) of an inode
File-system implementation
File organization
FCB/inode
file control block: contains some information of the file
index node which contains some information of the file
allocation method
contiguous
each file occupies a set of contiguous blocks on the disk
linked
each file is a linked list of disk blocks
indexed allocation
brings all pointers together into an index block
typical file system
1. overall organization
Divide the disk into blocks, commonly 4 KB
reserve a fixed portion of the disk for the data region
to track information about files, inodes are stored in an inode (index node) table
to track whether an inode or data block is free or allocated, an inode bitmap and a data bitmap are required
the remaining block is the superblock, which contains things like how many data blocks are in the file system
an inode in a Unix FS is a file control block
2. fast file system
1. File Allocation Table (FAT)
2. New Technology File System (NTFS)
3. Allocation method // very very important /smirk