Title: Hidden Markov Model based Hierarchical Clustering for Autonomous Diagnostics


1
Hidden Markov Model based Hierarchical Clustering
for Autonomous Diagnostics
  • Akhilesh Kumar, Doctoral Candidate
  • Dr. R. B. Chinnam, Associate Professor
  • Department of Industrial Engineering
  • Wayne State University
  • Detroit, MI 48202, USA

2
Outline
  • Maintenance Techniques
  • Research Motivation
  • Research Objectives
  • Modeling Framework
  • Diagnostic hierarchical clustering with Hidden
    Markov Model likelihood as the similarity measure
  • Research Contribution and Future Research

3
Maintenance Techniques
  • Corrective Maintenance (CM): action after failure
  • Preventive Maintenance (PM): time-based action
  • Condition-Based Maintenance (CBM): action only
    when needed
  • CBM is a philosophy of monitoring the condition
    of machinery and performing maintenance only
    when there is objective evidence of impending
    failure

[Figure: Optimal strategy! Maintenance approach vs. equipment criticality: CBM for the most critical 1-5% of equipment, PM for 15-25%, CM for the remaining 70-80%]
4
Motivation
  • Annual DoD maintenance activities consume over
    $40 billion (DoD Maintenance Policy, Programs &
    Resources Fact Book, 2000)
  • Maintenance costs at medium-sized power utility
    companies exceed operating profit (Geibig, 1999)
  • Annual savings potential from large-scale
    deployment of effective CBM technology in the US:
    $35 billion (Lee, 2003)
  • Technical barrier to CBM's widespread
    implementation (NIST-ATP CBM Workshop Report,
    1998): inability of maintenance systems to learn
    to identify impending failures

5
CBM Architecture
6
Research Objective
  • To develop generic yet robust methods for
    autonomous diagnostics
  • Scope: incipient failures
  • Occur slowly
  • Their development can be tracked
  • Health state estimation

7
Diagnostics
  • Challenges
  • Most real-world data are
  • Non-stationary time series
  • Not labeled (not enough examples to label the
    data)
  • Multidimensional in inputs and outputs
  • Solution
  • Hidden Markov Model based hierarchical clustering

8
Why HMM-based Hierarchical Clustering?
  • State-space models are better than classical
    methods
  • Do not suffer from finite-window effects
  • Handle discrete and multivariate inputs and
    outputs with ease (Murphy, 2002)
  • Data
  • Problem: mostly not labeled
  • Solution: hierarchical clustering
  • (# of clusters not needed as input)
  • Reduces computation of unneeded clusters
  • Weakness of distance-based similarity measures
  • Problem: hard to find clusters with irregular
    shapes (signatures)
  • Problem: time-window lengths must be equal
  • Solution: HMM-based likelihood estimation (see
    the sketch after this list)
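
To make the likelihood-based similarity concrete, here is a minimal sketch in Python, assuming the hmmlearn library and toy Gaussian data (neither is part of the talk): one trained HMM can score sequences of any length, so unequal time windows stop being an obstacle.

```python
import numpy as np
from hmmlearn import hmm  # assumed library choice, not the authors' code

# Two training sequences of unequal length: no fixed time window needed.
rng = np.random.default_rng(0)
seq_a = rng.normal(size=(120, 2))   # 120 time steps, 2 signals
seq_b = rng.normal(size=(87, 2))    # 87 time steps, same 2 signals

model = hmm.GaussianHMM(n_components=6, n_iter=50, random_state=0)
model.fit(np.concatenate([seq_a, seq_b]), lengths=[len(seq_a), len(seq_b)])

# A query sequence of any length is scored against the trained model;
# the length-normalized log-likelihood acts as the similarity measure.
query = rng.normal(size=(203, 2))
similarity = model.score(query) / len(query)
print(similarity)
```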

9
Hierarchical Clustering
  • Uses a distance matrix as the clustering
    criterion
  • Does not require the number of clusters as an
    input

[Figure: hierarchical clustering of objects a-e over steps 0-4, merging (a b), (d e), (c d e), and finally (a b c d e); read in the reverse direction for divisive clustering (DIANA)]
  • Distance measures
  • Minimum distance (single linkage)
  • Maximum distance (complete linkage)
  • Mean distance (centroid linkage)
  • Average distance (average linkage)
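
A minimal sketch of both points in Python with SciPy (an assumed library choice, not the talk's): the method argument of linkage selects among the distance measures listed above, and cutting the tree at a distance threshold makes the number of clusters an outcome rather than an input.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
points = rng.normal(size=(20, 2))  # 20 toy objects in 2-D

# method chooses the distance measure: 'single' (minimum),
# 'complete' (maximum), 'centroid' (mean), or 'average'.
Z = linkage(points, method="average")

# Cut the dendrogram at a distance threshold; the number of
# clusters falls out of the cut rather than being specified.
labels = fcluster(Z, t=1.5, criterion="distance")
print(labels)
```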

10
Hidden Markov Model
  • An HMM is a stochastic finite automaton where
    each state generates (emits) an observation
    (Murphy, 2002)
  • An HMM is parameterized by λ = (π, A, B)
  • Initial state distribution π
  • State transition matrix A
  • Observation model B
  • Models are first-order Markov
  • The goal: computing the belief state
    P(Xt | Y1..Yt)

[Figure: HMM graphical model. Xt: hidden state at time t (X1, X2, X3, X4); Yt: observation at time t (Y1, Y2, Y3); aij: state transition matrix; bij: output model]
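
To make the belief-state computation concrete, here is a minimal NumPy sketch of the scaled forward algorithm for a two-state, two-symbol HMM; all parameter values are illustrative, not from the talk.

```python
import numpy as np

pi = np.array([0.6, 0.4])            # initial state distribution
A = np.array([[0.7, 0.3],            # aij: state transition matrix
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],            # bij: P(observation j | state i)
              [0.3, 0.7]])

def belief_states(obs):
    """Return P(Xt | Y1..Yt) for each t, plus the total log-likelihood."""
    alpha = pi * B[:, obs[0]]
    beliefs, loglik = [], 0.0
    for t, y in enumerate(obs):
        if t > 0:
            alpha = (alpha @ A) * B[:, y]  # predict, then weight by emission
        norm = alpha.sum()
        loglik += np.log(norm)             # scaling factors sum to log P(Y)
        alpha = alpha / norm               # normalized alpha is the belief state
        beliefs.append(alpha)
    return np.array(beliefs), loglik

beliefs, ll = belief_states([0, 1, 1])
print(beliefs, ll)
```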
11
Proposed Algorithm
  • Step 1
  • Initialize
  • Select an initial subsequence as training data
  • Step 2
  • Construct a node with one HMM
  • Train the HMM on the training data
  • Step 3
  • Calculate log-likelihoods of the training data
    under the trained HMM
  • From the distribution of log-likelihoods,
    construct a (mean - k*sigma) threshold
  • Step 4
  • Pass the whole dataset through the trained HMM
    and calculate log-likelihoods
  • Step 5
  • Compare log-likelihoods with the threshold and
    collect the data points cutting the threshold
  • Step 6
  • Set the subsequences cutting the threshold as the
    new training data
  • Repeat steps 2-5 until all data is exhausted
    (a code sketch follows this list)
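
Below is a minimal Python sketch of Steps 1-6, assuming hmmlearn's GaussianHMM, length-normalized log-likelihoods per subsequence, and a stall guard; these names and details are my assumptions, not the authors' implementation.

```python
import numpy as np
from hmmlearn import hmm  # assumed library choice

def hmm_cluster(sequences, seed=0, k=2, n_states=6):
    """sequences: list of (T_i, n_features) arrays; returns a node label per sequence."""
    labels = np.full(len(sequences), -1)
    train = [seed]                       # Step 1: initial training subsequence
    node = 0
    while train:
        # Step 2: construct a node with one HMM and train it
        X = np.concatenate([sequences[i] for i in train])
        model = hmm.GaussianHMM(n_components=n_states, n_iter=50, random_state=0)
        model.fit(X, lengths=[len(sequences[i]) for i in train])

        # Step 3: (mean - k*sigma) threshold from the training log-likelihoods
        ll_train = np.array([model.score(sequences[i]) / len(sequences[i])
                             for i in train])
        threshold = ll_train.mean() - k * ll_train.std()

        # Step 4: pass all still-unlabeled data through the trained HMM
        pending = np.flatnonzero(labels == -1)
        ll = {int(i): model.score(sequences[i]) / len(sequences[i]) for i in pending}

        # Step 5: sequences at or above the threshold belong to this node
        for i, s in ll.items():
            if s >= threshold:
                labels[i] = node

        # Step 6: sequences cutting the threshold seed the next node;
        # repeat Steps 2-5 until all data is exhausted
        train = [i for i, s in ll.items() if s < threshold]
        if train and len(train) == len(pending):  # stall guard (my addition)
            labels[pending] = node
            break
        node += 1
    return labels
```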

12
Experimental Test-bed
  • Experiment setup
  • HAAS VF-1 CNC machining center
  • Kistler 9257B piezo-dynamometer
  • Signals
  • Thrust and torque data (12 drill-bits)
  • 200-260 data points per hole
  • Initialization
  • # of hidden states: 6 (fixed)
  • # of health-state HMMs: 1 (fixed)
  • # of observations: 2 (thrust and torque)
  • Training data: all 2nd holes of the 12 drill-bits
  • State transition matrix: initialized from the 2nd
    hole of the 1st drill-bit
  • K = 2 (Chebyshev's rule: at least 1 - 1/K² of any
    distribution lies within K standard deviations of
    the mean)

13
Intermediary Results: Log-likelihood (LL) Curves
14
Results: 1st Health State Based on LL
[Figure: log-likelihood curves of the 1st and 2nd HMMs plotted against holes]
15
Results: Health States
[Figure: estimated health states plotted against holes]
16
Conclusions and Future Research
  • The proposed algorithm is promising
  • Better computational performance because
  • Fewer parameters need to be initialized
  • Computation of unneeded clusters is avoided
  • Unequal time-window lengths are not a problem

17
Future Work
  • Future research goals
  • Validating the model
  • Prognostics: remaining useful life estimation
    based on log-likelihood

18
Thank You!