Multilevel Depth and Image Fusion for Human Activity Detection
Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods rely only on visual features extracted from 2-D images, and therefore often lead to unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features.
| Published in | IEEE transactions on cybernetics Vol. 43; no. 5; pp. 1383 - 1394 |
|---|---|
| Main Authors | Bingbing Ni, Yong Pei, Pierre Moulin, Shuicheng Yan |
| Format | Journal Article |
| Language | English |
| Published | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.10.2013 |
| Subjects | Action recognition and localization; depth sensor; spatial and temporal context |
| ISSN | 2168-2267, 2168-2275 |
| DOI | 10.1109/TCYB.2013.2276433 |
| Abstract | Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods rely only on the visual features extracted from 2-D images, and therefore often lead to unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. At the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. At the next level, interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database that contains complex human-human, human-object, and human-surroundings interactions demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained relative to previous methods. |
|---|---|
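The depth-based filtering step described in the abstract can be illustrated with a minimal sketch: given a 2-D person detection and a registered depth map, the median depth inside the bounding box converts the box's pixel height into an approximate physical height, and implausible detections are rejected. The focal length, height thresholds, and function name below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Assumed camera and size parameters for illustration only.
FOCAL_PX = 525.0              # typical Kinect focal length in pixels (assumed)
MIN_H_M, MAX_H_M = 1.0, 2.2   # plausible standing-person height range in metres (assumed)

def plausible_person(box, depth_map):
    """Return True if a detection box has a physically plausible size.

    box: (x, y, w, h) in pixels; depth_map: metres, 0 = missing depth.
    """
    x, y, w, h = box
    patch = depth_map[y:y + h, x:x + w]
    valid = patch[patch > 0]
    if valid.size == 0:               # no depth support: cannot verify, reject
        return False
    z = np.median(valid)              # robust depth estimate inside the box
    height_m = h * z / FOCAL_PX       # pinhole model: pixel height -> metres
    return bool(MIN_H_M <= height_m <= MAX_H_M)

# A 300-px-tall box at ~2 m depth implies ~1.14 m physical height (accepted);
# a 40-px-tall box at the same depth implies ~0.15 m (rejected).
depth = np.full((480, 640), 2.0)
print(plausible_person((100, 100, 120, 300), depth))  # True
print(plausible_person((100, 100, 120, 40), depth))   # False
```

The same back-projection (pixel coordinates plus depth into 3-D) underlies the 3-D spatial context features the abstract mentions; here it is used only as a plausibility gate on individual detections.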
| Authors | Bingbing Ni; Yong Pei; Pierre Moulin; Shuicheng Yan |
| Affiliations | Bingbing Ni (bingbing.ni@adsc.com.sg) and Yong Pei (pei.yong@adsc.com.sg): Advanced Digital Sciences Center, Singapore. Pierre Moulin (moulin@ifp.uiuc.edu): Dept. of Electrical & Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA. Shuicheng Yan (eleyans@nus.edu.sg): Dept. of Electrical & Computer Engineering, National University of Singapore, Singapore. |
| CODEN | ITCEB8 |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) Oct 2013 |
| Discipline | Sciences (General) |
| EISSN | 2168-2275 |
| EndPage | 1394 |
| Genre | Original Research; Research Support, Non-U.S. Gov't; Journal Article |
| Issue | 5 |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html |
| OpenAccessLink | http://scholarbank.nus.edu.sg/handle/10635/56712 |
| PMID | 23996589 |
| PageCount | 12 |
| PublicationPlace | United States |
| PublicationTitleAbbrev | TCYB |
| PublicationTitleAlternate | IEEE Trans Cybern |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| StartPage | 1383 |
| SubjectTerms | Accuracy; Actigraphy - instrumentation; Actigraphy - methods; Action recognition and localization; Algorithms; Artificial Intelligence; Computer Peripherals; Computer Simulation; Computer Systems; Context modeling; depth sensor; Feature extraction; Gray-scale; Human motion; Humans; Image detection; Image Enhancement - instrumentation; Image Enhancement - methods; Image processing; Image recognition; Imaging, Three-Dimensional - methods; Joints; Pattern Recognition, Automated - methods; Position (location); Recognition; spatial and temporal context; Studies; Subtraction Technique; Transducers; Video; Video Games; Visual; Visualization; Whole Body Imaging - instrumentation; Whole Body Imaging - methods |
| URI | https://ieeexplore.ieee.org/document/6587272 https://www.ncbi.nlm.nih.gov/pubmed/23996589 |
| Volume | 43 |