Modeling, Clustering, and Segmenting Video with Mixtures of Dynamic Textures

Bibliographic Details
Published in IEEE transactions on pattern analysis and machine intelligence, Vol. 30, no. 5, pp. 909-926
Main Authors Chan, A.B., Vasconcelos, N.
Format Journal Article
Language English
Published Los Alamitos, CA IEEE 01.05.2008
IEEE Computer Society
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects
Online Access Get full text
ISSN 0162-8828
2160-9292
1939-3539
DOI 10.1109/TPAMI.2007.70738


Abstract A dynamic texture is a spatio-temporal generative model for video, which represents video sequences as observations from a linear dynamical system. This work studies the mixture of dynamic textures, a statistical model for an ensemble of video sequences that is sampled from a finite collection of visual processes, each of which is a dynamic texture. An expectation-maximization (EM) algorithm is derived for learning the parameters of the model, and the model is related to previous works in linear systems, machine learning, time-series clustering, control theory, and computer vision. Through experimentation, it is shown that the mixture of dynamic textures is a suitable representation for both the appearance and dynamics of a variety of visual processes that have traditionally been challenging for computer vision (for example, fire, steam, water, vehicle and pedestrian traffic, and so forth). When compared with state-of-the-art methods in motion segmentation, including both temporal texture methods and traditional representations (for example, optical flow or other localized motion representations), the mixture of dynamic textures achieves superior performance in the problems of clustering and segmenting video of such processes.
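To make the generative model concrete, the short Python sketch below samples a synthetic sequence from a single dynamic texture, i.e. a linear dynamical system whose hidden state evolves as x_{t+1} = A x_t + v_t and whose observation y_t = C x_t + w_t is a vectorized video frame. This is an illustrative sketch only, not the authors' implementation; the state dimension, frame size, and noise levels are arbitrary assumptions chosen for the example.

import numpy as np

rng = np.random.default_rng(0)

n_state = 10               # hidden state dimension (assumed for illustration)
frame_shape = (32, 32)     # frame size (assumed)
n_pixels = frame_shape[0] * frame_shape[1]
n_frames = 50

# Random system parameters; A is rescaled so the hidden dynamics stay stable.
A = rng.standard_normal((n_state, n_state))
A *= 0.95 / np.max(np.abs(np.linalg.eigvals(A)))
C = rng.standard_normal((n_pixels, n_state))
state_noise, obs_noise = 0.1, 0.01   # assumed noise standard deviations

x = rng.standard_normal(n_state)     # initial hidden state
frames = np.empty((n_frames,) + frame_shape)
for t in range(n_frames):
    # Observation: project the hidden state into pixel space and add noise.
    y = C @ x + obs_noise * rng.standard_normal(n_pixels)
    frames[t] = y.reshape(frame_shape)
    # State transition: linear dynamics plus process noise.
    x = A @ x + state_noise * rng.standard_normal(n_state)

print(frames.shape)   # (50, 32, 32): one synthetic "video" drawn from the model

A mixture of dynamic textures extends this by placing several such systems under a multinomial prior over components; the EM algorithm described in the paper alternates between soft-assigning observed sequences to components and re-estimating each component's parameters.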
Author Chan, A.B.
Vasconcelos, N.
Author_xml – sequence: 1
  givenname: A.B.
  surname: Chan
  fullname: Chan, A.B.
  organization: Univ. of California at San Diego, La Jolla
– sequence: 2
  givenname: N.
  surname: Vasconcelos
  fullname: Vasconcelos, N.
  organization: Univ. of California at San Diego, La Jolla
BackLink http://pascal-francis.inist.fr/vibad/index.php?action=getRecordDetail&idt=20245844 (View record in Pascal Francis)
https://www.ncbi.nlm.nih.gov/pubmed/18369258 (View this record in MEDLINE/PubMed)
CODEN ITPIDJ
CitedBy_id crossref_primary_10_1016_j_cviu_2012_09_002
crossref_primary_10_1016_j_eswa_2012_12_092
crossref_primary_10_1109_TCSVT_2016_2592322
crossref_primary_10_1109_TCSVT_2018_2885211
crossref_primary_10_17671_gazibtd_419205
crossref_primary_10_1109_TCSVT_2012_2203199
crossref_primary_10_1016_j_jfranklin_2017_04_013
crossref_primary_10_4304_jcp_8_5_1292_1297
crossref_primary_10_1002_aisy_202300706
crossref_primary_10_1007_s10489_023_04639_9
crossref_primary_10_17706_IJCEE_2015_7_5_316_324
crossref_primary_10_3390_rs14164110
crossref_primary_10_1007_s00521_018_3527_9
crossref_primary_10_1109_TIP_2015_2479561
crossref_primary_10_1155_2017_2580860
crossref_primary_10_1109_TCSVT_2014_2308616
crossref_primary_10_1109_TMM_2018_2832601
crossref_primary_10_1002_ima_20184
crossref_primary_10_1016_j_eswa_2019_05_055
crossref_primary_10_1109_ACCESS_2017_2733219
crossref_primary_10_1007_s10044_015_0459_1
crossref_primary_10_1109_TKDE_2021_3094997
crossref_primary_10_1016_j_ijcce_2024_07_006
crossref_primary_10_1016_j_measurement_2024_114459
crossref_primary_10_1016_j_patrec_2024_12_007
crossref_primary_10_1007_s11045_022_00826_y
crossref_primary_10_1007_s11831_018_09305_9
crossref_primary_10_1109_TCSVT_2015_2489418
crossref_primary_10_1016_j_cviu_2014_10_001
crossref_primary_10_1109_ACCESS_2019_2904712
crossref_primary_10_1016_j_neucom_2017_02_058
crossref_primary_10_1109_ACCESS_2024_3488797
crossref_primary_10_1109_ACCESS_2023_3293537
crossref_primary_10_1016_j_cviu_2014_07_008
crossref_primary_10_1145_3092690
crossref_primary_10_1016_j_bica_2016_09_006
crossref_primary_10_1016_j_dsp_2019_03_017
crossref_primary_10_1109_TIP_2016_2598653
crossref_primary_10_1109_TPAMI_2013_111
crossref_primary_10_1145_3481299
crossref_primary_10_1007_s10851_015_0563_2
crossref_primary_10_1109_TNNLS_2020_3027667
crossref_primary_10_1515_amcs_2017_0013
crossref_primary_10_1016_j_patcog_2017_06_020
crossref_primary_10_1117_1_JEI_27_5_053044
crossref_primary_10_1016_j_neucom_2024_128673
crossref_primary_10_1109_ACCESS_2024_3374383
crossref_primary_10_1109_TCSVT_2014_2358029
crossref_primary_10_1109_TMM_2016_2598091
crossref_primary_10_1007_s11554_013_0368_8
crossref_primary_10_1016_j_neucom_2017_02_023
crossref_primary_10_1142_S021800142451011X
crossref_primary_10_1088_1742_6596_1921_1_012074
crossref_primary_10_1109_TCSVT_2009_2026932
crossref_primary_10_1109_LSP_2020_3025688
crossref_primary_10_1109_TITS_2014_2371333
crossref_primary_10_1109_TNNLS_2018_2851979
crossref_primary_10_1016_j_media_2012_02_002
crossref_primary_10_1016_j_patrec_2022_11_008
crossref_primary_10_1080_19479832_2021_1900408
crossref_primary_10_1016_j_cviu_2019_102882
crossref_primary_10_1109_TASL_2013_2279318
crossref_primary_10_3390_agriengineering2040039
crossref_primary_10_1007_s00138_010_0262_3
crossref_primary_10_1016_j_neucom_2016_09_138
crossref_primary_10_1109_TFUZZ_2013_2240689
crossref_primary_10_2139_ssrn_4112841
crossref_primary_10_1109_ACCESS_2017_2708838
crossref_primary_10_1109_TIP_2012_2226899
crossref_primary_10_1109_TCSVT_2016_2539878
crossref_primary_10_1109_TIP_2018_2875353
crossref_primary_10_1109_TNNLS_2020_3011717
crossref_primary_10_1137_120872048
crossref_primary_10_1016_j_ins_2016_04_049
crossref_primary_10_1007_s11263_014_0735_3
crossref_primary_10_1109_TMM_2020_2997202
crossref_primary_10_1016_j_cviu_2008_08_009
crossref_primary_10_3390_s22124441
crossref_primary_10_1007_s11042_015_2548_y
crossref_primary_10_1109_TPAMI_2010_226
crossref_primary_10_1109_TASL_2010_2090148
crossref_primary_10_1109_TIP_2012_2210234
crossref_primary_10_1016_j_patrec_2012_06_011
crossref_primary_10_1007_s00530_019_00629_5
crossref_primary_10_1016_j_autcon_2022_104167
crossref_primary_10_1007_s11042_014_2219_4
crossref_primary_10_1109_TMI_2019_2946059
crossref_primary_10_1016_j_patcog_2016_03_031
crossref_primary_10_1145_3487892
crossref_primary_10_1007_s11760_013_0532_4
crossref_primary_10_1016_j_patcog_2010_10_021
crossref_primary_10_1109_TASL_2009_2036306
crossref_primary_10_1049_iet_ipr_2016_0044
crossref_primary_10_3390_computers2020088
crossref_primary_10_3390_su17010041
crossref_primary_10_1007_s13177_023_00373_1
crossref_primary_10_1007_s11263_016_0918_1
crossref_primary_10_3390_app9132670
crossref_primary_10_1109_TIP_2011_2179055
crossref_primary_10_1016_j_dsp_2021_103030
crossref_primary_10_1109_TPAMI_2014_2359432
crossref_primary_10_1016_j_patrec_2013_10_002
crossref_primary_10_3390_mi13010072
crossref_primary_10_1016_j_cviu_2023_103653
crossref_primary_10_1007_s11263_015_0864_3
crossref_primary_10_1016_j_idm_2016_07_001
crossref_primary_10_1109_TIP_2016_2579307
crossref_primary_10_1162_COMJ_a_00375
crossref_primary_10_3390_math10203856
crossref_primary_10_3724_SP_J_1016_2010_01835
crossref_primary_10_1109_TPAMI_2014_2300484
crossref_primary_10_1016_j_cosrev_2024_100686
crossref_primary_10_1109_ACCESS_2020_3003993
crossref_primary_10_1109_TPAMI_2009_21
crossref_primary_10_1016_j_neucom_2023_126561
crossref_primary_10_1109_TIP_2019_2922818
crossref_primary_10_1109_MCE_2020_3029769
crossref_primary_10_1016_j_heliyon_2024_e25360
crossref_primary_10_2197_ipsjjip_19_190
crossref_primary_10_1109_TCSVT_2013_2276151
crossref_primary_10_1109_TIP_2014_2300811
crossref_primary_10_1007_s13369_022_07092_x
crossref_primary_10_1109_TPAMI_2011_52
crossref_primary_10_32604_cmc_2022_028570
crossref_primary_10_1109_TIP_2022_3204217
crossref_primary_10_1109_TPAMI_2012_236
crossref_primary_10_1016_j_patcog_2023_109335
crossref_primary_10_1109_TITS_2016_2634580
crossref_primary_10_3390_info13010002
crossref_primary_10_1016_j_eswa_2020_113333
crossref_primary_10_1109_TPAMI_2009_110
crossref_primary_10_1007_s00332_018_9470_1
crossref_primary_10_1109_TIP_2011_2172800
crossref_primary_10_1016_j_patcog_2012_03_010
crossref_primary_10_1109_TSMCB_2012_2192267
crossref_primary_10_1109_TCSVT_2011_2159430
crossref_primary_10_1016_j_neucom_2015_12_070
crossref_primary_10_1109_ACCESS_2024_3435144
crossref_primary_10_1016_j_ijdrr_2017_02_021
crossref_primary_10_1016_j_mlwa_2021_100023
crossref_primary_10_1109_LGRS_2021_3072191
crossref_primary_10_1016_j_neucom_2017_05_045
crossref_primary_10_1016_j_automatica_2018_06_022
crossref_primary_10_1109_TNNLS_2023_3274611
crossref_primary_10_1117_1_3518080
crossref_primary_10_1016_j_cviu_2013_04_006
crossref_primary_10_1016_j_patcog_2018_07_006
crossref_primary_10_1109_TIP_2020_2985284
crossref_primary_10_18178_joig_7_4_112_116
crossref_primary_10_1109_TCDS_2022_3230858
crossref_primary_10_1109_TCSVT_2017_2731866
crossref_primary_10_1002_rob_21989
crossref_primary_10_1109_TIP_2015_2462029
crossref_primary_10_1109_TNNLS_2020_2979049
crossref_primary_10_1007_s12065_022_00713_2
crossref_primary_10_1007_s00371_018_1499_5
crossref_primary_10_1016_j_neucom_2018_08_085
crossref_primary_10_1109_TPAMI_2020_3040591
crossref_primary_10_1109_TITS_2016_2516586
crossref_primary_10_1117_1_OE_57_4_043109
crossref_primary_10_1109_JSEN_2014_2381260
crossref_primary_10_3390_s20185073
crossref_primary_10_1016_j_compind_2018_02_007
crossref_primary_10_1016_j_cviu_2016_04_003
crossref_primary_10_1109_TMM_2016_2542585
crossref_primary_10_1049_iet_cvi_2019_0455
crossref_primary_10_1109_TGRS_2019_2903794
crossref_primary_10_1109_TIP_2013_2258353
crossref_primary_10_1016_j_neucom_2014_11_034
crossref_primary_10_1109_TII_2019_2916671
crossref_primary_10_32604_iasc_2022_029535
crossref_primary_10_1016_j_jpdc_2017_03_002
crossref_primary_10_1109_ACCESS_2018_2872733
crossref_primary_10_1109_JIOT_2024_3376457
crossref_primary_10_1109_TAFFC_2015_2404352
crossref_primary_10_3724_SP_J_1004_2012_00742
crossref_primary_10_1121_1_4751535
crossref_primary_10_1080_13682199_2020_1767843
crossref_primary_10_1049_iet_ipr_2016_0994
crossref_primary_10_1109_TIP_2015_2447738
crossref_primary_10_1109_TCSVT_2021_3061153
crossref_primary_10_1016_j_cviu_2020_103040
crossref_primary_10_1016_j_media_2013_05_003
crossref_primary_10_1109_TCSVT_2015_2511479
crossref_primary_10_1109_TCSVT_2017_2713480
crossref_primary_10_1109_TNNLS_2015_2435653
crossref_primary_10_1016_j_procs_2019_08_203
crossref_primary_10_1007_s13369_017_2995_z
crossref_primary_10_1177_0278364915583881
Cites_doi 10.1109/83.334981
10.1016/j.patcog.2005.01.025
10.1109/34.868677
10.1109/ICCV.1998.710707
10.1023/A:1021669406132
10.1109/34.868688
10.1023/A:1008078328650
10.1109/tpami.2009.110
10.1109/TCS.1983.1085288
10.1080/01621459.1991.10475107
10.1109/CVPR.1997.609375
10.1109/ACV.1994.341288
10.1109/PROC.1976.10289
10.1007/3-540-55426-2_33
10.1109/CVPR.1999.786972
10.1109/CVPR.1999.784983
10.1109/ICCV.2001.937584
10.1016/0304-4076(94)90036-1
10.1016/0004-3702(81)90024-2
10.1162/089976699300016674
10.1109/CVPR.2001.990925
10.1109/CVPR.1993.341161
10.1109/34.1000236
10.1109/89.242489
10.1109/TPAMI.2002.1023800
10.1109/CVPR.2005.279
10.1080/01621459.1998.10474114
10.1109/TSP.2004.831125
10.1007/978-3-540-70932-9_11
10.1007/978-1-4615-3236-1_1
10.1109/ICCV.2005.151
10.1109/ICCV.2001.937658
10.1109/ICCV.2005.135
10.1109/CVPR.2005.263
10.1109/TPAMI.2003.1195991
10.1016/0005-1098(94)90230-5
10.1109/9.554398
10.1111/j.1467-9892.2005.00441.x
10.1007/978-1-4757-3502-4
10.1109/ICCV.2003.1238632
10.1007/BF01908075
10.1162/089976600300015619
10.1109/34.908972
10.1007/BF01420984
10.1007/978-3-540-70932-9_10
10.1111/j.1467-9892.1982.tb00349.x
10.1109/TAC.1965.1098191
10.1016/j.patcog.2003.12.018
10.2307/2984875
10.1109/cvpr.2003.1211367
10.1109/34.531801
10.1109/iccv.1998.710861
10.1109/ACC.2002.1024543
ContentType Journal Article
Copyright 2008 INIST-CNRS
Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2008
Copyright_xml – notice: 2008 INIST-CNRS
– notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2008
DBID 97E
RIA
RIE
AAYXX
CITATION
IQODW
CGR
CUY
CVF
ECM
EIF
NPM
7SC
7SP
8FD
JQ2
L7M
L~C
L~D
F28
FR3
7X8
DOI 10.1109/TPAMI.2007.70738
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
Pascal-Francis
Medline
MEDLINE
MEDLINE (Ovid)
MEDLINE
MEDLINE
PubMed
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
ANTE: Abstracts in New Technology & Engineering
Engineering Research Database
MEDLINE - Academic
DatabaseTitle CrossRef
MEDLINE
Medline Complete
MEDLINE with Full Text
PubMed
MEDLINE (Ovid)
Technology Research Database
Computer and Information Systems Abstracts – Academic
Electronics & Communications Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts Professional
Engineering Research Database
ANTE: Abstracts in New Technology & Engineering
MEDLINE - Academic
DatabaseTitleList Technology Research Database
MEDLINE - Academic
Technology Research Database
Technology Research Database

MEDLINE
Technology Research Database
Database_xml – sequence: 1
  dbid: NPM
  name: PubMed
  url: https://proxy.k.utb.cz/login?url=http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 2
  dbid: EIF
  name: MEDLINE
  url: https://proxy.k.utb.cz/login?url=https://www.webofscience.com/wos/medline/basic-search
  sourceTypes: Index Database
– sequence: 3
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://proxy.k.utb.cz/login?url=https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
Computer Science
Applied Sciences
EISSN 2160-9292
1939-3539
EndPage 926
ExternalDocumentID 2322975071
18369258
20245844
10_1109_TPAMI_2007_70738
4359353
Genre orig-research
Evaluation Studies
Research Support, U.S. Gov't, Non-P.H.S
Journal Article
GroupedDBID ---
-DZ
-~X
.DC
0R~
29I
4.4
53G
5GY
5VS
6IK
97E
9M8
AAJGR
AARMG
AASAJ
AAWTH
ABAZT
ABFSI
ABQJQ
ABVLG
ACGFO
ACGFS
ACIWK
ACNCT
ADRHT
AENEX
AETEA
AETIX
AGQYO
AGSQL
AHBIQ
AI.
AIBXA
AKJIK
AKQYR
ALLEH
ALMA_UNASSIGNED_HOLDINGS
ASUFR
ATWAV
BEFXN
BFFAM
BGNUA
BKEBE
BPEOZ
CS3
DU5
E.L
EBS
EJD
F5P
FA8
HZ~
H~9
IBMZZ
ICLAB
IEDLZ
IFIPE
IFJZH
IPLJI
JAVBF
LAI
M43
MS~
O9-
OCL
P2P
PQQKQ
RIA
RIE
RNI
RNS
RXW
RZB
TAE
TN5
UHB
VH1
XJT
~02
AAYXX
CITATION
IQODW
RIG
CGR
CUY
CVF
ECM
EIF
NPM
7SC
7SP
8FD
JQ2
L7M
L~C
L~D
F28
FR3
7X8
IEDL.DBID RIE
ISSN 0162-8828
IsPeerReviewed true
IsScholarly true
Issue 5
Keywords video clustering
temporal textures
video modeling
Kalman filter
Dynamic texture
motion segmentation
time-series clustering
mixture models
expectation-maximization
linear dynamical systems
probabilistic models
Cluster analysis
Parameter estimation
Video signal
Mixture theory
Linear time
Computer control
Dynamical system
Modeling
Texture
Image sequence
Fires
Classification
Dynamic model
Pattern analysis
Computer vision
Statistical analysis
Probabilistic approach
Motion estimation
Computer theory
Time series
Pedestrian traffic
Water vapor
Image segmentation
Road traffic
Control theory
EM algorithm
Artificial intelligence
Optical flow
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
CC BY 4.0
LinkModel DirectLink
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
ObjectType-Article-2
ObjectType-Feature-1
content type line 23
ObjectType-Undefined-1
ObjectType-Feature-3
PMID 18369258
PQID 862350677
PQPubID 23500
PageCount 18
ParticipantIDs crossref_citationtrail_10_1109_TPAMI_2007_70738
crossref_primary_10_1109_TPAMI_2007_70738
pascalfrancis_primary_20245844
proquest_miscellaneous_875065772
proquest_miscellaneous_34437821
pubmed_primary_18369258
proquest_miscellaneous_903620735
proquest_journals_862350677
proquest_miscellaneous_70443955
ieee_primary_4359353
ProviderPackageCode CITATION
AAYXX
PublicationCentury 2000
PublicationDate 2008-05-01
PublicationDateYYYYMMDD 2008-05-01
PublicationDate_xml – month: 05
  year: 2008
  text: 2008-05-01
  day: 01
PublicationDecade 2000
PublicationPlace Los Alamitos, CA
PublicationPlace_xml – name: Los Alamitos, CA
– name: United States
– name: New York
PublicationTitle IEEE transactions on pattern analysis and machine intelligence
PublicationTitleAbbrev TPAMI
PublicationTitleAlternate IEEE Trans Pattern Anal Mach Intell
PublicationYear 2008
Publisher IEEE
IEEE Computer Society
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: IEEE Computer Society
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
References ref13
ref57
ref12
ref56
ref15
ref59
ref14
ref53
ref52
ref11
ref55
ref10
ref54
(ref58) 2008
ref17
ref16
ref19
ref18
Kay (ref23) 1993
ref51
Horn (ref1) 1986
ref46
Young (ref29) 2006
ref45
ref48
ref47
ref42
ref44
ref43
Lucas (ref3)
ref8
ref7
ref9
ref4
ref6
ref5
ref40
ref34
ref37
ref31
ref30
ref33
ref32
ref2
ref39
ref38
(ref60) 2008
Duda (ref50) 2001
Ghahramani (ref36) 1997
ref24
ref26
ref25
Forsyth (ref49) 2002
ref20
ref64
ref63
ref22
ref21
ref28
ref27
Ghahramani (ref35) 1996
Pavlović (ref41) 2000
ref62
ref61
References_xml – ident: ref6
  doi: 10.1109/83.334981
– ident: ref45
  doi: 10.1016/j.patcog.2005.01.025
– start-page: 121
  volume-title: Proc. DARPA Image Understanding Workshop
  ident: ref3
  article-title: An Iterative Image Registration Technique with an Application to Stereo Vision
– ident: ref51
  doi: 10.1109/34.868677
– ident: ref39
  doi: 10.1109/ICCV.1998.710707
– ident: ref13
  doi: 10.1023/A:1021669406132
– volume-title: Fundamentals of Statistical Signal Processing: Estimation Theory
  year: 1993
  ident: ref23
– volume-title: Washington State Dept. of Transportation
  year: 2008
  ident: ref60
– ident: ref61
  doi: 10.1109/34.868688
– ident: ref9
  doi: 10.1023/A:1008078328650
– ident: ref19
  doi: 10.1109/tpami.2009.110
– ident: ref33
  doi: 10.1109/TCS.1983.1085288
– ident: ref37
  doi: 10.1080/01621459.1991.10475107
– volume-title: Pattern Classification
  year: 2001
  ident: ref50
– volume-title: Technical Report CRG-TR-96-1, Dept. of Computer Science, Univ. of Toronto
  year: 1997
  ident: ref36
  article-title: The EM Algorithm for Mixtures of Factor Analyzers
– ident: ref53
  doi: 10.1109/CVPR.1997.609375
– ident: ref8
  doi: 10.1109/ACV.1994.341288
– ident: ref31
  doi: 10.1109/PROC.1976.10289
– volume-title: The HTK Book
  year: 2006
  ident: ref29
– ident: ref10
  doi: 10.1007/3-540-55426-2_33
– ident: ref55
  doi: 10.1109/CVPR.1999.786972
– ident: ref40
  doi: 10.1109/CVPR.1999.784983
– ident: ref14
  doi: 10.1109/ICCV.2001.937584
– ident: ref42
  doi: 10.1016/0304-4076(94)90036-1
– ident: ref2
  doi: 10.1016/0004-3702(81)90024-2
– volume-title: Technical Report CRG-TR-96-2, Dept. of Computer Science, Univ. of Toronto
  year: 1996
  ident: ref35
  article-title: Parameter Estimation for Linear Dynamical Systems
– ident: ref24
  doi: 10.1162/089976699300016674
– ident: ref16
  doi: 10.1109/CVPR.2001.990925
– ident: ref52
  doi: 10.1109/CVPR.1993.341161
– ident: ref63
  doi: 10.1109/34.1000236
– ident: ref34
  doi: 10.1109/89.242489
– ident: ref56
  doi: 10.1109/TPAMI.2002.1023800
– ident: ref17
  doi: 10.1109/CVPR.2005.279
– volume-title: Advances in Neural Information Processing Systems 13
  year: 2000
  ident: ref41
  article-title: Learning Switching Linear Models of Human Motion
– ident: ref46
  doi: 10.1080/01621459.1998.10474114
– ident: ref57
  doi: 10.1109/TSP.2004.831125
– ident: ref21
  doi: 10.1007/978-3-540-70932-9_11
– ident: ref5
  doi: 10.1007/978-1-4615-3236-1_1
– ident: ref20
  doi: 10.1109/ICCV.2005.151
– ident: ref12
  doi: 10.1109/ICCV.2001.937658
– ident: ref43
  doi: 10.1109/ICCV.2005.135
– ident: ref18
  doi: 10.1109/CVPR.2005.263
– ident: ref11
  doi: 10.1109/TPAMI.2003.1195991
– ident: ref26
  doi: 10.1016/0005-1098(94)90230-5
– ident: ref32
  doi: 10.1109/9.554398
– volume-title: Computer Vision: A Modern Approach
  year: 2002
  ident: ref49
– ident: ref64
  doi: 10.1111/j.1467-9892.2005.00441.x
– ident: ref27
  doi: 10.1007/978-1-4757-3502-4
– ident: ref15
  doi: 10.1109/ICCV.2003.1238632
– ident: ref59
  doi: 10.1007/BF01908075
– ident: ref44
  doi: 10.1162/089976600300015619
– ident: ref54
  doi: 10.1109/34.908972
– ident: ref4
  doi: 10.1007/BF01420984
– ident: ref22
  doi: 10.1007/978-3-540-70932-9_10
– year: 2008
  ident: ref58
  article-title: Mixtures of Dynamic Textures
– ident: ref25
  doi: 10.1111/j.1467-9892.1982.tb00349.x
– ident: ref30
  doi: 10.1109/TAC.1965.1098191
– ident: ref48
  doi: 10.1016/j.patcog.2003.12.018
– ident: ref28
  doi: 10.2307/2984875
– ident: ref38
  doi: 10.1109/cvpr.2003.1211367
– ident: ref7
  doi: 10.1109/34.531801
– volume-title: Robot Vision
  year: 1986
  ident: ref1
– ident: ref62
  doi: 10.1109/iccv.1998.710861
– ident: ref47
  doi: 10.1109/ACC.2002.1024543
SSID ssj0014503
SourceID proquest
pubmed
pascalfrancis
crossref
ieee
SourceType Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 909
SubjectTerms Algorithms
Applied sciences
Artificial Intelligence
Cluster Analysis
Clustering
Clustering algorithms
Computer science; control theory; systems
Computer Simulation
Computer vision
Control theory
Dynamic tests
Dynamic texture
Dynamical systems
Dynamics
Exact sciences and technology
expectation-maximization
Fires
Image Enhancement - methods
Image Interpretation, Computer-Assisted - methods
Information Storage and Retrieval - methods
Kalman filter
Likelihood Functions
linear dynamical systems
Linear systems
Machine learning
Machine learning algorithms
Marine vehicles
Mathematical models
mixture models
Models, Statistical
motion segmentation
Pattern Recognition, Automated - methods
Pattern recognition. Digital image processing. Computational geometry
probabilistic models
Representations
Reproducibility of Results
Sensitivity and Specificity
Studies
Surface layer
temporal textures
Texture
time-series clustering
Vehicle dynamics
video clustering
video modeling
Video Recording - methods
Video sequences
Title Modeling, Clustering, and Segmenting Video with Mixtures of Dynamic Textures
URI https://ieeexplore.ieee.org/document/4359353
https://www.ncbi.nlm.nih.gov/pubmed/18369258
https://www.proquest.com/docview/862350677
https://www.proquest.com/docview/34437821
https://www.proquest.com/docview/70443955
https://www.proquest.com/docview/875065772
https://www.proquest.com/docview/903620735
Volume 30
journalDatabaseRights – providerCode: PRVIEE
  databaseName: IEEE Electronic Library (IEL)
  customDbUrl:
  eissn: 2160-9292
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0014503
  issn: 0162-8828
  databaseCode: RIE
  dateStart: 19790101
  isFulltext: true
  titleUrlDefault: https://ieeexplore.ieee.org/
  providerName: IEEE