1
PostKEK: A new mail system using DCE/DFS
  • Akihiro Shibata
  • Akihiro.Shibata@kek.jp
  • Computing Research Center,
  • High Energy Accelerator Research Organization
    (KEK)

2
Contents
  • PostKEK system
  • Requirements
  • System Design
  • Status
  • Summary and Discussion
  • High Availability File service using DFS

3
System requirements
  • More than 1,000 users
  • Non-stop service throughout the year
  • Security
  • Services
  • POP (IMAP in the future)
  • Mail exchanger (outgoing mail gateway)
  • Remote login
  • Home directory
  • Mailing lists

4
System Design
  • Design based on a distributed system using
  • DCE (Distributed Computing Environment)
  • DFS (Distributed File System)
  • High availability
  • Duplication of servers (SMTP, POP, telnetd, ...)
  • Highly available file service via DFS
  • Application fail-over (sendmail mail spooling)

5
System components
  • 4 workstations
  • HITACHI 3500 (160 MB memory)
  • OS: HI-UX/WE2
  • RAID disk
  • HITACHI A-6531
  • 2-port controller
  • Duplicated power supply units
  • 32 GB (2 arrays)
  • For spools and home directories (file service by DFS)
  • DCE
  • HI-DCE Executive (based on OSF DCE ver 1.1)

6
PostKEK
7
Why DCE/DFS?
  • Security
  • Integrated login
  • No encrypted password data is kept on the DCE client
  • Long passwords are possible (<128 characters)
  • Access Control Lists (ACLs)
  • More flexible access control than UNIX permissions (a sketch follows this list)
  • Even root on a DCE/DFS client has no privilege for cell administration
  • No plaintext password is sent within the DCE cell
  • Availability
  • DCE server replication
  • Load balancing among DFS replicas
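As a hypothetical illustration of this finer granularity (the path, principal name, and permission string below are illustrative, not taken from the actual KEK configuration; acl_edit is the standard DCE ACL editing tool, and /: abbreviates the local cell's DFS filespace):

    # grant one extra user read access to a single directory, without widening the UNIX group
    acl_edit /:/users/shibata/report -m user:tanaka:rx
    # list the resulting ACL entries to verify
    acl_edit /:/users/shibata/report -l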

8
Why DCE/DFS ? (2)
  • Scalability
  • The resources are distributed and shared among many hosts
  • Up to hundreds of hosts
  • Multiple platforms are supported
  • AIX, Solaris, HP-UX, Digital UNIX, Irix
  • Uniform file access
  • DFS backup (local file system)
  • A snapshot of the home directory is kept (a sketch follows this list)
  • Files and directories can be restored to their state at the time the snapshot was taken
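A minimal sketch of how such a snapshot is commonly made and exposed with the standard DFS fileset commands (fileset and mount-point names are illustrative, and exact option spellings may differ between DFS versions):

    # create (or refresh) the read-only .backup clone of a user fileset
    fts clone -fileset user.shibata
    # expose the snapshot inside the user's home directory
    fts crmount -dir /:/users/shibata/OldFiles -fileset user.shibata.backup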

9
DCE/DFS servers
Mlserva: dced (M), cdsd (M), secd (M), fl server <FLDB>, fxd (M), file exporter (primary)
Mlservb: dced (S), cdsd (S), secd (S), fl server <FLDB>, fxd (S), file exporter (backup)
mail1: fl server <FLDB>
mail2
10
Mail servers
Mlserva: DNS (M), SMTP (M)
Mlservb: DNS (S), SMTP (S)
(mail spooling on the SMTP master)
mail1: SMTP gateway, POP3
mail2: SMTP gateway, POP3
MX record for post.kek.jp: mail1, mail2 (see the sketch below)
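A minimal sketch of the corresponding DNS zone entries, assuming BIND-style syntax; the preference values and fully qualified host names are illustrative. Equal preferences let sending hosts pick either gateway, and if one gateway is down the other still accepts mail:

    post.kek.jp.    IN  MX  10  mail1.post.kek.jp.
    post.kek.jp.    IN  MX  10  mail2.post.kek.jp.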
11
Modification from UNIX system
  • Authentication
  • To access DFS, a user must be authenticated to DCE
  • Integrated login with DCE and UNIX
  • About 20-30 lines of modification per source program
  • POP server
  • ftp server
  • login
  • qpopper (POP server)
  • Adding a file-lock function to the source code
  • DCE login

12
High Availability services
  • DCE server replication (security server, CDS server)
  • Highly available file service via DFS
  • Home directories and mail spools
  • Will be explained in detail later
  • Spooling mails
  • Swapping the SMTP servers (application fail-over)
  • Synchronized with swapping the DFS server
  • POP server, telnet server, SMTP server
  • More than 2 servers

13
Highly available service: server fail-over
Before the swap:
  Mlserva: DFS file exporter, SMTP (M), spool
  Mlservb: DFS stand-by, SMTP stand-by
Both the DFS and SMTP servers are swapped.
After the swap:
  Mlservb: DFS file exporter, SMTP (M), spool
  Mlserva: DFS stand-by, SMTP stand-by
(A sketch of the manual procedure follows.)
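The swap is performed by hand; the sketch below outlines the overall procedure, assuming the aggregate names from the later Commands slide and an illustrative sendmail pid-file path (the DFS-side fts commands are shown in detail on slide 24):

    # on Mlserva (leaving service)
    kill `head -1 /etc/sendmail.pid`        # stop the master sendmail daemon (pid-file path illustrative)
    dfsexport -agg agg1_A -detach -force    # release the shared RAID aggregate
    # on Mlservb (taking over)
    # ... re-point the FLDB server entries with fts eds (see slide 24) ...
    dfsexport -agg agg1_B                   # export the same filesets from Mlservb
    /usr/lib/sendmail -bd -q30m             # start sendmail as the new master; the spool is on the filesets just taken over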
14
Status of service
  • Service started on May 5th, 1998
  • Users
  • 360 users as of April 1st, 1999
  • Mailing lists
  • Since December 1998
  • 20 lists
  • POP (2 servers)
  • 10,000 accesses per day
  • 500 accesses per hour (peak)
  • Login (2 servers)
  • 200 accesses per day
  • 30 simultaneous users
  • 60 users per day
  • Mails
  • 2,000 mails per day

15
Status of system running
  • Swapping servers
  • Server swapping takes at most 15 minutes
  • 32 GB disk (15 partitions)
  • About 800 filesets (R/W, backup)
  • Plus a further few minutes for propagation to the clients
  • Depending on DFS cache parameters and so on
  • Running status
  • Clients: 200 days without stopping
  • Servers: 100 days without stopping

16
Summary and Discussion
  • Stability and continuous service are required
  • A mail system using DCE meets these requirements
  • Security
  • Highly available system
  • Secure file sharing
  • Possible to add a large number of clients
  • High-availability file service using DFS
  • Continuous service is possible even during maintenance
  • Currently operated by hand
  • It would be helpful if hot (automatic) fail-over were possible

17
Discussion(2)
  • IMAP
  • Japanese-language support came later
  • Only about three clients were available at the beginning of 1998
  • Netscape ver 4.0, Airmail, pine
  • Now several mailers support Japanese
  • IMAP will be supported in the near future

18
Higher Availability (HA) File System with DFS
19
DFS fail-over limitations
  • Read-only replicas
  • One Read/Write (R/W) server and many Read-Only (RO) servers
  • Load sharing among RO servers
  • Very useful for application service or read-mostly file service such as web pages, but useless for home directory service
  • If the R/W server fails, one of the backup RO replicas could become the new R/W server, but data consistency between the old and new R/W servers is not assured
  • R/W replica functionality is not yet implemented in DFS

20
HA File Service
  • Make DFS file server downtime shorter during trouble or maintenance
  • Total fault tolerance or dynamic fail-over
  • Prevent program failure or file damage on an unexpected system down
  • HA products using 2-port disks exist for NFS, but not for DFS

21
HA file service: how
  • Adopt a two-port RAID
  • Multi-port RAID has become popular recently
  • No data copy is needed between the active and stand-by servers (no data consistency or synchronization problem)
  • Relation between the fx (file exporter) server and the filesets it serves
  • It appears to be defined by the server entry in the FLDB
  • To change the file server assignment for a fileset, only the server name in the entry needs to be replaced (a sketch of inspecting the entries follows)
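A sketch of how this assignment can be inspected, assuming the standard DFS fts administration commands (host and fileset names are illustrative; exact option spellings may vary by vendor):

    # show the FLDB server entry (name, address, principal) registered for hostA
    fts lsserverentry -server hostA
    # show the FLDB record for one fileset, including the site (server/aggregate) it is assigned to
    fts lsfldb -fileset user.shibata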

22
DFS file access mechanism
  • Two DFS server provides.
  • File server
  • To store and export LFS and on-LFS data as
    filesets.
  • File location server
  • Stores FLDB (fileset location database) which
    contain the data of location of fileset.
  • FLDB contains information about fileset (name, ID
    number., physical location).
  • FILESET
  • Sub-tree related files and directories.
  • The mount point
  • the place where fileset is attached to DFS global
    filesystem
  • the name of fileset in which data resides, ( not
    physical location)
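A minimal sketch of creating and inspecting a mount point with the standard fts commands (names are illustrative; /: abbreviates the local cell's DFS filespace; the physical location is looked up separately in the FLDB):

    # attach the fileset "mail.spool" into the global namespace at /:/mail/spool
    fts crmount -dir /:/mail/spool -fileset mail.spool
    # the mount point records only the fileset name, not where it is stored
    fts lsmount -dir /:/mail/spool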

23
DFS file access mechanism
24
Commands
  • Server A side (original file server):
  • become cell_admin
  • dfsexport -agg <agg1_A> -detach -force
    (force-detach the shared aggregate from server A)
  • Server B side (new file server):
  • become cell_admin
  • fts eds <hostA> -ch <hostDummy>
  • fts eds <hostB> -ch <hostA>
  • fts eds <hostDummy> -ch <hostB>
    (rotate the FLDB server entries through a dummy name so that the entries for hostA and hostB end up swapped)
  • fts eds <hostA> -prin hosts/<hostB>
  • fts eds <hostB> -prin hosts/<hostA>
    (set the server principal on each entry to match the new assignment)
  • dfsexport -agg <agg1_B>
    (export the aggregate, and the filesets on it, from server B)

(Diagram: fileset 1 resides on the shared two-port RAID; hostA exports it as aggregate agg1_A, hostB as agg1_B.)
25
FLDB entry
26
Usual Status
(Diagram: normal operation, with DFS server B as the stand-by.)
27
Server A maintenance
28
HA - result (1)
  • Works well
  • Better than NFS HA
  • In usual NFS HA, an IP-swap trick is used; caching of the ARP table (IP-MAC) in clients, bridges, or routers sometimes causes problems
  • Without dfsexport -detach when the servers are swapped:
  • File inconsistency between the two servers occurred
  • File damage may happen when a server is in trouble

29
HA - discussion
  • This mechanism cannot offer load balancing among servers, but it is useful for high availability
  • Moderately useful in the present state
  • It would be very helpful if dynamic fail-over became possible (even though it cannot prevent application aborts or damage to files that are open at the moment the server goes down)
  • Dynamic fail-over is our dream (then we would not be called in the early morning when a server goes down)