Thoughts on Unix - PowerPoint PPT Presentation
Provided by: michae64
Transcript and Presenter's Notes

Title: Thoughts on Unix


1
Thoughts on Unix
  • Comes in various flavours, e.g. Solaris, Linux,
    etc.
  • Designed from the start to be multitasking and
    multiuser: it is very natural to start another
    process, and all resources have owners and access
    permissions.
  • Input-output is treated as a stream of bytes: it
    doesn't matter where it comes from or is going to,
    which facilitates using the output of one program
    as input to another program.
  • Think of the system as providing a set of tools,
    which can be easily used together to get
    something done. Very useful tools come with the
    system, e.g. grep, awk, etc.
  • The command line interface is called the shell;
    various versions are available, e.g. Bourne
    shell, C shell, Korn shell, Bourne again shell.
    Commands are case sensitive, which can come as a
    surprise.
  • All shells provide much the same general
    functions, plus some quirks of their own. Cf.
    http://rhols66.adsl.netsonic.fi/era/unix/shell.html
    for a good tutorial written by Bourne himself.
    Script files containing shell commands are easy
    to set up and run.
  • Various Graphical User Interfaces are available,
    e.g. CDE, GNOME, but it's well worth being
    familiar with using a shell.
  • The command manual is available on-line; use the
    man command.
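The tools-together philosophy can be sketched in one line; grep and wc are standard Unix filters, and the sample text here is made up:

```shell
# Each tool reads a byte stream and writes one; the pipe hooks the
# output of one to the input of the next.  Count lines containing "ma":
printf 'alpha\nbeta\ngamma\n' | grep 'ma' | wc -l
```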

2
Unix users and access permissions (1)
  • All resources (e.g. files, processes) have an
    owner, and an owner group.
  • At least one account on a UNIX system is a
    superuser or system manager, able to do
    anything with any resource; it overrules the
    permissions.
  • There are three other types of beings who can
    access a resource: the owner of the resource;
    others in the same group as the owner (does not
    include the owner); others in the world (does
    not include the group or owner).
  • Each resource has 12 bits specifying access
    permissions. Nine of these are used to determine
    read, write and execute access for each of the
    three types of beings above.
  • E.g. 110 100 100 would allow the owner to read
    and write a file, and everybody else to read it
    but not change it. These permissions can be set
    on a file using chmod 644 filename (note the use
    of one digit in the range 0 to 7 for each of the
    blocks of three bits, rather than binary). The
    result can be checked using ls -l filename, which
    should give something like -rw-r--r-- for the
    permission on this file.
  • Of course, to access a file, it must be possible
    to access its parent directory, and any other
    directories in the path to it.
  • The chmod command has various user-friendly
    options; these can be checked using man chmod.
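The chmod 644 example can be tried directly; the file name below is just a throwaway for illustration:

```shell
#!/bin/sh
# Set owner rw, group r, world r (binary 110 100 100 = octal 644)
# and confirm the result with ls -l.
f=/tmp/perm_demo_$$          # scratch file name
touch "$f"
chmod 644 "$f"
ls -l "$f"                   # mode column reads -rw-r--r--
rm "$f"
```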

3
Unix users and access permissions (2)
  • The access permissions for owner, group and world
    use up nine of the twelve access permission bits.
    So what about the other three?
  • Consider this problem: a program belongs to Tom,
    and uses a data file that also belongs to Tom.
    Tom wants everyone to be able to use the program,
    but not to have direct access to the data file.
  • He sets the owner, group and world execute access
    on the program to 1, so everyone can execute the
    program. But how should he set the access on the
    data file?
  • If he gives group or world read access to the
    file, they can e.g. copy it. Worse again if the
    program makes changes in the file: if given
    write access, users will be able to directly
    change the file.
  • If he doesn't, when you go to use the program,
    and it goes to use the file, your process (which
    has your user id, and is running the program)
    will be unable to access the file.
  • The solution is to change the user id of your
    process while Tom's program is running so that it
    is the same as the owner id of Tom's program.
    Your process can access Tom's data file, but only
    when using Tom's program.
  • The first of the missing bits, SUID (set user
    id), causes this to happen.
  • The second, SGID (set group id), changes the
    process group id in the same way.

4
Unix users and access permissions (3)
  • The sticky bit is the last of the three
    missing bits.
  • Consider this problem: a group of people are
    adding files to a directory, which naturally has
    write access for all the members of the group.
    But this write access to the directory means that
    anyone in the group can change the directory
    contents, and so can e.g. delete a reference to a
    file from the directory, even though the file may
    belong to someone else.
  • Setting the sticky bit prevents this: now any
    attempt to change the directory which involves a
    file belonging to someone else will be blocked.
  • The three missing bits can be set using chmod as
    before, e.g. chmod 4711 filename sets the SUID
    bit, so that when filename is executed, the user
    id of the process is temporarily changed to the
    owner id of filename.
  • It's worth looking at how ls -l shows that the
    SUID bit is set.
  • (Try some experiments with a temporary file; also
    use man ls.)
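The suggested experiment might look like this; the file and directory names are throwaway:

```shell
#!/bin/sh
# SUID shows up as an s, the sticky bit as a t, in the ls -l mode column.
f=/tmp/suid_demo_$$
touch "$f"
chmod 4711 "$f"              # SUID plus rwx--x--x
ls -l "$f"                   # mode reads -rws--x--x

d=/tmp/sticky_demo_$$
mkdir "$d"
chmod 1777 "$d"              # sticky plus rwxrwxrwx (like /tmp itself)
ls -ld "$d"                  # mode reads drwxrwxrwt
rm "$f"; rmdir "$d"
```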

5
exec and fork
  • These are calls to the Unix kernel that programs
    in a Unix environment can use to start new
    processes.
  • exec(program_name) sets up a process to run
    program_name, in effect killing the old process
    running the program that made the call. In
    practice what tends to be killed is the old
    program, which is replaced by the new one,
    complete with new context, but the current
    process control block is reused and the process
    id is not changed.
  • child_id = fork() creates a new copy of the
    existing process, running the same program.
    However, the value of child_id will be
    different: for the original process it will be
    the id of the new process that has just been
    created, whereas for the new process it will be
    zero. This allows the two branches of the fork to
    behave differently, even though they are both
    running the same program initially.
  • When a program wants to have another program
    running in parallel, it will typically first use
    fork, then the child process will use exec to
    actually run the desired program.
  • A very good description of fork and exec can be
    found at
    http://www.cse.ucsc.edu/sbrandt/courses/Spring00/111/slides/fork.pdf
  • Note these are system calls, for use when
    programming in e.g. C. Some shells have an exec
    command, but this can kill the shell, so take
    care.
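The shell itself follows the fork-then-exec pattern for every external command, and the pattern can be sketched in the shell itself (POSIX sh assumed): ( ... ) & forks a child subshell, and the shell's exec then replaces that subshell with a new program without changing its process id.

```shell
#!/bin/sh
# Fork-then-exec, as a shell sketch.  The & forks a child subshell;
# exec then replaces the child's program with echo, reusing its pid.
( exec echo "child speaking" ) &
child=$!                     # pid of the forked child
wait "$child"                # parent waits for the child to finish
echo "parent $$ survives; child was $child"
```

Note the warning above: running exec without the surrounding ( ) would replace the current shell itself.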

6
When Unix starts
  • At startup, after installing device drivers etc.,
    Unix sets up a process running the init program.
  • init checks the connected terminals, and for each
    of them uses fork and exec to spawn a process
    running getty. These put up a message on each
    screen, and wait for some action.
  • When the user goes to log in, getty uses exec to
    start a process running login. Using exec means
    that at this point the getty process dies.
  • login checks the password, and if everything is
    ok, uses exec to start a process running whatever
    shell is specified for use at login. Again, using
    exec means that the login process dies. The
    user is now interacting with the process running
    the login shell; this will be an ancestor of all
    further processes started by the user.
  • The login shell starts by reading various
    environment settings and user profiles and
    setting up the environment.
  • The shell has some built-in commands, e.g. cd. It
    handles others by using fork to start a process
    running a copy of itself, which then uses exec to
    run the command; the copy process dies, leaving
    a process running the command. The shell itself
    typically goes into the background, returning to
    the foreground when the new process dies on
    command completion.
  • The command can be executed in the background by
    appending & to it.

7
Unix Job Control
  • Job control: managing the various processes you
    have running.
  • These can be useful:
  • ps, ps -A show user processes, show all
    processes
  • jobs lists jobs and their job numbers
  • any_shell_command & runs the command in the
    background
  • ctrl_c kills the current foreground job
  • ctrl_z suspends the current foreground job
  • stop %job_number suspends that job
  • kill %job_number kills that job
  • bg %job_number resumes a stopped job, but in the
    background
  • fg %job_number puts a background job into the
    foreground
  • If you start a job in the foreground, and now
    want it to run in the background, you can use
    ctrl_z to suspend it, then bg to resume it in
    the background.
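Interactive job control (fg, bg, ctrl_z) needs a terminal, but backgrounding and kill work in a script too; a sketch, with sleep standing in for a long-running job:

```shell
#!/bin/sh
# Start a long job in the background, record its pid, then kill it.
sleep 60 &
pid=$!                       # pid of the most recent background job
echo "background job: $pid"
kill "$pid"                  # like kill %1 in an interactive shell
wait "$pid" 2>/dev/null      # reap it; ignore the termination status
echo "job $pid gone"
```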

8
Unix pipes, filters and re-direction
  • Because input-output is always regarded as a
    stream of bytes, it is easy to hook the output of
    one program to the input of another.
  • This can be done using a pipe: notionally the
    first program's output goes in one end of the
    pipe, and the second program takes its input
    from the other end. The operating system sets up
    the pipe, and synchronises the two programs. The
    pipe is indicated by |, e.g. ls | grep a shows
    all the directory entries that contain a.
    Instead we could have used ls > temp followed by
    grep a temp, but the pipe does away with the need
    for the temporary file, and allows execution of
    the two programs to be interleaved.
  • Programs used with pipes typically get all their
    input from the pipe, process it, and pass it on;
    they don't need to stop for user input, for
    example. Such programs are called filters.
  • Unix provides a lot of filters, which can be
    piped together to provide quite complex
    processing.
  • Above, we used > to direct the output of ls to
    the file temp, instead of to the standard output.
    This overwrites the file; if we want to append
    to the end of an existing file, we can use >>
    instead.
  • We can also get input from a file instead of from
    the keyboard by using <.
  • The use of re-direction in this way can be
    particularly useful in shell scripts.
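The three re-direction operators can be tried together; the temp file name is just for illustration:

```shell
#!/bin/sh
# > overwrites, >> appends, < feeds a file to a command's standard input.
t=/tmp/redir_demo_$$
echo "first"  >  "$t"        # create (or overwrite) the file
echo "second" >> "$t"        # append a second line
wc -l < "$t"                 # read from the file, not the keyboard: 2
rm "$t"
```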

9
Unix shell scripts (1)
  • A shell script is just a text file containing
    shell commands, and comments (which typically
    start with #).
  • The script can be fed as input to a new instance
    of the shell, e.g. bash myscript
  • Or it can be executed directly as if it were a
    command, e.g. ./myscript
  • But watch that the access permissions are o.k.
  • Parameters can be fed into the script, e.g.
    ./myscript tom dick harry
  • Inside the script, these can be referenced by
    $1, $2 and $3, with $# giving the count of
    parameters input.
  • The usual control structures can be used in the
    script, such as while and for loops, switch, if
    then else and so on.
  • Check the Bourne tutorial referenced earlier,
    also the tutorial at
    http://www.tldp.org/LDP/abs/html/
  • So shell scripting is quite like programming in
    any high level language.
  • Well, not quite...
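A minimal script using the parameter variables above (the file name greet.sh is made up):

```shell
#!/bin/sh
# greet.sh -- echo each positional parameter in turn.
echo "got $# parameters"
for name in "$@"; do         # "$@" expands to all the parameters
    echo "hello, $name"
done
```

Running sh greet.sh tom dick harry prints "got 3 parameters" followed by a greeting per name.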

10
Unix Shell scripts (2)
  • Some differences between shell scripts and high
    level languages:
  • Shell scripts, like shell commands, are
    interpreted, not compiled. Each command in a
    script is translated, then executed, and if the
    command is in a loop, this translation is redone
    each time around the loop. A high level language
    is typically translated just once, i.e. compiled,
    and a separate file is used to hold the
    executable code. Translation is completed before
    any instruction is carried out, and is not
    repeated when e.g. an instruction occurs in a
    loop. The compiled code can also be optimised on
    a global basis, again making the compiled
    approach better for compute intensive tasks.
  • In high level languages, parameters are
    typically passed by value or by reference. With
    scripts, they are passed by name: the actual
    parameter text is inserted into the script. This
    can make a difference in some cases.
  • Because shell scripts are interpreted, it is easy
    to check the effects of modifications; they can
    be tried out immediately, without compiling and
    linking.
  • In script programming, there is great
    flexibility in the use of built-in filters,
    pipes, and in spawning new processes, easier than
    in a high level language.
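One case where pass-by-name makes a difference (a sketch): a parameter containing a space is re-split into words when substituted unquoted, which has no analogue in pass-by-value languages.

```shell
#!/bin/sh
# The parameter's text is substituted, then word-split by the shell.
set -- "tom dick"            # one positional parameter with a space in it
count() { echo $#; }         # helper: how many arguments did we get?
count $1                     # unquoted substitution splits: prints 2
count "$1"                   # quoting keeps it one word: prints 1
```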