Multi-tier Architectures - PowerPoint PPT Presentation

About This Presentation
Title:

Multi-tier Architectures

Description:

The SAN controller along with its switches is known as "the Fabric" (you'll see why) ... Together with its servers, a SAN is more of a "Fabric" than a network. ... – PowerPoint PPT presentation

Slides: 48
Provided by: darylenie

Transcript and Presenter's Notes

Title: Multi-tier Architectures


1
Multi-tier Architectures: Distributed Databases
  • CP3410
  • Daryle Niedermayer, I.S.P., PMP

2
Topics
  • A history of database processing
  • Dumb Terminals and Mainframes
  • Client-Server
  • Multi-tier Configurations
  • The Need for Reliability
  • New Hardware Configurations
  • E-commerce Considerations
  • Distributed Systems

3
A Brief History of Database Processing
  • Computers as a tool of modern business only took
    off in the late 1950s/early 1960s.
  • For the first 20 years (1960-1980) databases sat
    on a large mainframe computer. Users connected
    directly to the mainframe using dumb terminals.

4
(No Transcript)
5
What are Dumb Terminals?
  • They are a monitor and a keyboard and a network
    connection
  • There is no hard-drive, no CPU
  • They can't do work on their own
  • They know enough to connect to the mainframe
  • Data entered by a user is sent to the mainframe
    for processing
  • The mainframe sends the results back to the
    terminal to draw on the screen

6
  • They are a way for users to work on the mainframe
    while sitting in their own offices
  • All processing was done by the mainframe. The
    terminal was just an input/output device.

7
What were Dumb Terminals like?
  • Pros
  • Very fast (for their day)
  • Easy
  • Good enough for the amount of data required
    (which wasn't much)
  • Cons
  • Reports were simple and not well formatted
  • Everyone got to watch a black screen with green
    printing all day.

8
Client-Server Architectures (aka 2-Tier
Architectures)
  • 1980-present
  • With the introduction of smart workstations and
    PCs, processing could be shared between the
    mainframe and the local terminal
  • Early workstations included SUN, PDP-11s (and
    other DEC PDP minicomputers)
  • Eventually, IBM-compatible PCs: 386s, 486s,
    Pentiums

9
(No Transcript)
10
Server Roles
  • Store the data
  • Organize, index and manipulate data
  • Manage contention and data concurrency
  • Receive and process queries and other operations

11
Client Roles
  • Decide what operation to ask the server to
    perform
  • Display and format data

12
Some notes on Client-Server
  • The client has some smarts (unlike the dumb
    terminals)
  • Using software, it decides what data it needs
    from the server.
  • It asks for that data and receives the results.
  • It formats or uses the results for further
    processing.
  • Examples: MySQL Query Browser, MS-SQL
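The division of labour above can be sketched in a few lines of Python, using an in-memory sqlite3 database as a stand-in for a networked database server (the table and column names are invented for illustration):

```python
import sqlite3

# -- Server side: stores, indexes, and queries the data --
server = sqlite3.connect(":memory:")
server.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
server.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                   [("Acme", 120.0), ("Globex", 75.5), ("Acme", 40.0)])

# -- Client side: decides what to ask for, then formats the result --
query = "SELECT customer, SUM(total) FROM orders GROUP BY customer ORDER BY customer"
rows = server.execute(query).fetchall()  # only the needed data crosses the "network"
for customer, total in rows:
    print(f"{customer:10s} {total:8.2f}")
```

The point of the sketch is the split: the server does the aggregation and returns only two summary rows, while the client decides what to ask for and handles the formatting.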

13
  • Pros
  • Shares the processing between the server and
    client
  • Both sides can play to their strengths
  • Only the data that is needed goes over the
    network
  • Cons
  • Requires more expensive hardware at the client end
  • Software can be more expensive (a copy for every
    workstation)

14
A note on MS Access
  • MS Access can look like a Client-Server, but it
    usually isn't.
  • Most of the time, the database sits on a
    fileserver, not a database server. This means
    that the entire file must be downloaded to the
    local machine before Access can use any of it.
    This is not the way for a client-server to behave!

15
However, it is possible
  • MS Access can be used as a client front-end with
    a full Database Server handling the server side.
  • MS-SQL or MySQL can serve as the back-end
    server.
  • MS Access on the client then connects to the
    server using an ODBC connection.
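An ODBC link of this kind is usually configured with a DSN or a DSN-less connection string. The sketch below builds a DSN-less string in Python; the driver name, host, and credentials are hypothetical placeholders, and a real client would pass the string to a connector such as pyodbc.connect:

```python
# Hypothetical ODBC connection parameters (placeholders, not a real server)
params = {
    "DRIVER": "{MySQL ODBC 8.0 Driver}",
    "SERVER": "db.example.com",
    "DATABASE": "sales",
    "UID": "app_user",
    "PWD": "secret",
}
# ODBC connection strings are semicolon-separated KEY=value pairs
conn_str = ";".join(f"{k}={v}" for k, v in params.items())
print(conn_str)
```

MS Access would hold the same information in a linked-table definition rather than in code, but the underlying connection string has this shape.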

16
Multi-Tier Architectures (aka n-Tier
Architectures)
  • 1990-Present
  • Came with the birth of the Internet and TCP/IP
  • TCP/IP gives us a way for machines to communicate
    regardless of what application they are using
  • N-Tier means more than 2-Tier

17
Internet Applications
  • Internet Applications are almost always N-Tier
  • Need to be very scalable (quickly grow capacity)
  • Need to have high availability (it's always
    business hours somewhere around the world)
  • Need to have strong security

18
(No Transcript)
19
The Need for Reliability
  • In the previous slide, there were multiple
    Application Servers
  • This allows for
  • The system to respond to huge differences in
    traffic volumes
  • The system to still be available even if one
    server crashes
  • More servers can be added to meet demand

20
Other Redundancies
  • Although the diagram does not show it, additional
    firewalls and Proxy Servers can be added for
    redundancies as well.

21
High Availability Configurations
  • Hardware can be configured to have High
    Availability
  • HA means that the hardware itself will recover
    from a system problem without having to wait for
    human intervention.
  • Recovery typically takes under 15 seconds.

22
High Availability Appliances
  • Firewall Appliances and Proxy Servers usually
    have static configurations
  • Their content and configurations do not change
    often
  • Their content and configuration only change as a
    result of operator input

23
HA Firewalls
  • Both firewalls are powered on with identical
    configurations
  • A heartbeat signal is shared between them every
    few seconds
  • If the Standby Firewall does not get a heartbeat
    when expected, it takes over the IP address and
    traffic of the Active Firewall until an operator
    fixes the problem
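The heartbeat-and-takeover behaviour on this slide can be sketched as a small loop; the intervals, thresholds, and function names below are illustrative, not taken from any real firewall product:

```python
HEARTBEAT_INTERVAL = 2.0   # seconds between expected heartbeats (illustrative)
MISSED_LIMIT = 3           # heartbeats missed before failover (illustrative)

def standby_loop(receive_heartbeat, take_over_ip):
    """Monitor the active peer; seize its IP after too many missed beats."""
    missed = 0
    while missed < MISSED_LIMIT:
        if receive_heartbeat(timeout=HEARTBEAT_INTERVAL):
            missed = 0       # peer is alive, reset the counter
        else:
            missed += 1      # one more silent interval
    take_over_ip()           # assume the active unit's address and traffic

# Simulated run: the active peer answers twice, then goes silent.
beats = iter([True, True, False, False, False])
events = []
standby_loop(lambda timeout: next(beats, False),
             lambda: events.append("failover"))
print(events)  # → ['failover']
```

Real HA pairs exchange the heartbeat over a dedicated link and take over the IP with gratuitous ARP, but the decision logic is essentially this counter.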

24
HA Databases
  • HA Databases are much more difficult
  • How do you take over the data when it changes all
    the time?
  • How do you take over in the middle of a
    transaction?
  • How do you take over the data if it is on a hard
    drive inside a disabled server?

25
SAN to the Rescue
  • Storage Area Networks (SANs) store data outside
    of a server.
  • They are huge racks of disk drives that are
    connected to a SAN controller.
  • The SAN controller along with its switches is
    known as "the Fabric" (you'll see why).
  • The SAN controller itself is also mirrored in
    an HA configuration.

26
  • Together with its servers, a SAN is more of a
    "Fabric" than a network.
  • Any failure is immediately recoverable through
    other connections

27
(No Transcript)
28
Capacity
  • A SAN can hold terabytes (1,000 GB) or even
    petabytes (1,000,000 GB) of data for dozens or
    even hundreds of servers at the same time.
  • SAN disks are usually configured in a RAID array
    so that disks are mirrored. This way, if one disk
    fails, the data is still on at least one other
    disk.
  • Connections are usually fibre-optic rather than
    copper wires to ensure high bandwidth and
    transmission speeds.
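The RAID-1 mirroring described above can be sketched as a toy pair of dictionaries standing in for disks (purely illustrative, not how a real controller is implemented):

```python
class MirroredPair:
    """Toy RAID-1: every block is written to two disks."""
    def __init__(self):
        self.disks = [{}, {}]          # two disks, block number -> data
        self.failed = [False, False]

    def write(self, block, data):
        for disk in self.disks:        # the write goes to BOTH disks
            disk[block] = data

    def read(self, block):
        for i, disk in enumerate(self.disks):
            if not self.failed[i]:
                return disk[block]     # any surviving mirror can serve the read
        raise IOError("both mirrors failed")

pair = MirroredPair()
pair.write(7, b"customer-record")
pair.failed[0] = True                  # simulate losing one disk
print(pair.read(7))                    # data survives on the other mirror
```

This is exactly the property the slide relies on: losing one disk costs redundancy, not data.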

29
Back to HA Databases
  • If we put our data on a SAN rather than on a hard
    drive inside the DB server, we can still access
    the data even if the DB server itself fails.
  • A Stand-by server then just takes over the Fabric
    connections of the sick server as well as its IP
    Address and Network connections.

30
HA Clusters
  • Because we're not failing over everything (since
    the data is on the SAN), the DB servers only need
    enough disk space to boot themselves up.
  • We call this configuration a Cluster and each
    physical server is a Node in the Cluster

31
(No Transcript)
32
Other Advantages of Clusters
  • Multiple Database servers can provide load
    balancing for each other
  • We can even have 3 or more nodes with 2 or more
    active and the last one serving as a spare for
    any of the others
  • By manually switching in the Standby server, the
    Active Server can be upgraded without taking a
    system outage

33
E-Commerce Considerations
  • In planning an E-commerce system, you need to
    consider the following
  • If your customers are all over the world, you
    can never unplug your system for maintenance
    without losing customers.
  • You need to manage transactional integrity across
    multiple Application Servers.

34
  • Transactions need to be managed across multiple
    web pages
  • During the first dozen pages, the user puts
    together their shopping cart
  • Then the user goes to check-out. This involves
    a few more pages as they input their identity,
    their shipping information, and their payment
    details.
  • What if they abandon the transaction? When do you
    rollback?
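One common way to handle an abandoned check-out (an illustration, not claimed from the slides) is to keep the cart in the session and open a single database transaction only at check-out, rolling it back if the shopper disappears. A toy sketch with sqlite3 and hypothetical table names:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE stock (item TEXT PRIMARY KEY, qty INTEGER)")
db.execute("INSERT INTO stock VALUES ('widget', 10)")
db.commit()

def check_out(cart, abandoned=False):
    try:
        with db:  # one transaction for the whole check-out
            for item, qty in cart.items():
                db.execute("UPDATE stock SET qty = qty - ? WHERE item = ?",
                           (qty, item))
            if abandoned:
                raise RuntimeError("shopper closed the browser")
    except RuntimeError:
        pass  # the `with db` block already rolled back the partial updates

check_out({"widget": 3}, abandoned=True)   # rolled back: stock unchanged
check_out({"widget": 3})                   # committed: stock reduced
print(db.execute("SELECT qty FROM stock").fetchone()[0])  # → 7
```

The design point: nothing touches the database during the dozen cart pages, so "when do you rollback?" reduces to "when does the one check-out transaction fail?".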

35
  • How do you protect customers' data?
  • What personal information do you store about your
    customers in your database?
  • Do you store this information in the clear
    (plaintext) or encrypted so that no one else can
    make use of it if your system is cracked?
  • How do you protect your customers' information
    from your own employees?
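The plaintext-versus-encrypted question above can be illustrated with salted password hashing, one standard way to avoid storing a sensitive value in the clear; the parameters and names below are illustrative:

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Return (salt, digest); the password itself is never stored."""
    salt = salt or os.urandom(16)               # per-user random salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    """Recompute the hash with the stored salt and compare."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == digest

salt, digest = hash_password("s3cret")
print(verify("s3cret", salt, digest), verify("guess", salt, digest))  # → True False
```

An attacker (or a curious employee) who reads the table gets only salts and digests, not the passwords; reversible data such as card numbers would instead need proper encryption with separately managed keys.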

36
Distributed Systems
  • Imagine a Database Cluster that spans the globe
  • One node is in London
  • One node is in Tokyo
  • One node is in Toronto
  • One node is in Doha
  • This is a Distributed Database Management System
    or DDBMS

37
Why DDBMS?
  • Communications used to be expensive
  • Rather than have 1000 employees all over the
    world connect over a 56K modem to a DBMS in
    London, we would pay for high speed connections
    between each DBMS node and then have users
    connect to their local node (at cheaper rates)

38
For Example
  • A modem call over a telephone line from Doha to a
    non-GCC country costs about 0.90/minute.
  • If there are 100 users in Doha, this would cost
    90/minute or 5400/hour for these users to
    connect.
  • It may be cheaper to put a database server in
    Doha and then synchronize the data over a
    high-speed line.

39
Why This Doesn't Work
  • Outside of Qatar, international telephone line
    charges are now about 0.02/minute. For 100
    users, this works out to 120/hour which is
    certainly affordable if users need dial-up.

40
Why This Doesn't Work (2)
  • As well, High Speed Internet costs have also
    dropped
  • 512 Kbps (effectively about 380 Kbps) costs about
    60 USD/month in Qatar.
  • In Canada, 6,500 Kbps costs about 45 USD.
  • So, it's not a problem for everyone to connect to
    the database in London.

41
Why This Doesn't Work (3)
  • DDBMS also have a great deal of difficulty with:
  • Synchronizing data: How do you manage concurrency
    across thousands of miles and different networks
    and telephone companies? (You thought database
    locking on a local machine was hard!)
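One common technique for the concurrency problem above (an illustration of the general idea, not what any particular DDBMS does) is optimistic, version-based concurrency control: instead of holding a lock across the network, each node checks that the record is unchanged before writing.

```python
# Each record carries a version number that every successful write bumps.
record = {"balance": 100, "version": 1}

def update(rec, new_balance, expected_version):
    """Apply the write only if nobody else changed the record meanwhile."""
    if rec["version"] != expected_version:
        return False                 # conflict: caller must re-read and retry
    rec["balance"] = new_balance
    rec["version"] += 1
    return True

# Nodes A and B both read version 1, then both try to write:
ok_a = update(record, 120, expected_version=1)  # succeeds, bumps to version 2
ok_b = update(record, 90, expected_version=1)   # stale version: rejected
print(ok_a, ok_b, record)  # → True False {'balance': 120, 'version': 2}
```

The losing node detects the conflict and retries locally, which is far cheaper than holding a lock open across continents.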

42
  • Networking
  • DDBMS require very high speed networks. There is
    a lot of data to be synchronized constantly
  • DDBMS need very fault tolerant networks. Network
    paths between nodes need to be redundant and
    reliable
  • These networks are very, very expensive
  • Security: How do you make sure the data is being
    transmitted between nodes securely?

43
  • Increased Storage: You are copying data in every
    location. This requires duplicate hardware (SANs
    are not cheap) and a lot of extra disk space.
  • Increased demand for very specialized expertise.
    The knowledge of how to look after a DDBMS is not
    easy to come by. These people are in demand.

44
Where a DDBMS Makes Sense
  • When you can copy the same metadata across all
    systems but the actual data is geographically
    specific.
  • E.g., the customer and employee data for Qatar is
    stored in Doha and nowhere else; the customer
    and employee data for Europe is stored in London
    and nowhere else. If an employee transfers from
    London, his record is physically moved from the
    London to the Doha database server.
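The partitioning described above can be sketched as a routing table shared by every node (the common metadata), with each record stored on exactly one regional node; the node names and country codes below are invented for illustration:

```python
# Shared metadata: the same routing table exists on every node.
ROUTING = {"QA": "doha", "GB": "london", "JP": "tokyo", "CA": "toronto"}
nodes = {name: {} for name in ROUTING.values()}  # each node's local records

def store(employee_id, record):
    """Place the record on the node responsible for its country."""
    nodes[ROUTING[record["country"]]][employee_id] = record

def transfer(employee_id, old_country, new_country):
    """Physically move the record from one regional node to another."""
    record = nodes[ROUTING[old_country]].pop(employee_id)
    record["country"] = new_country
    nodes[ROUTING[new_country]][employee_id] = record

store(42, {"name": "A. Smith", "country": "GB"})
transfer(42, "GB", "QA")       # the London-to-Doha move from the slide
print([n for n, data in nodes.items() if 42 in data])  # → ['doha']
```

Because every record lives in exactly one place, there is nothing to synchronize between nodes; only the transfer itself crosses the network.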

45
Other uses of DDBMS
  • Disaster Recovery planning for some HA Financial
    Systems as well as public health and safety
    systems
  • Credit Card authorizations (Visa, MasterCard)
  • Banking Systems (ATMs)
  • Public Utilities (999 service and Telephone
    companies)
  • Air Traffic Control Systems

46
Assessment of DDBMS
  • There are very few reasons to have a DDBMS
  • They are expensive to set up and run
  • They have problems in managing data
    synchronization (making sure that all the data is
    up to date in all nodes)
  • There are usually better, cheaper options to
    share the data across a large geographic area.

47
Acknowledgements
  • SAN photograph: www.nasi.com/images/IBM_SAN256M.jpg
  • SAN Configuration: http://www.microsoft.com/library/media/1033/technet/images/itsolutions/wssra/raguide/storagedevices/igsdpg03_big.gif