Anticipating Human Activities Using Object Affordances for Reactive Robotic Response
Published in | IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 38, No. 1, pp. 14-29 |
Main Authors | Koppula, Hema S.; Saxena, Ashutosh |
Format | Journal Article |
Language | English |
Published | United States: IEEE, 01.01.2016 |
ISSN | 0162-8828 1939-3539 2160-9292 |
DOI | 10.1109/TPAMI.2015.2430335 |
Abstract | An important aspect of human perception is anticipation, which we use extensively in our day-to-day activities when interacting with other humans as well as with our surroundings. Anticipating which activities a human will do next (and how) can enable an assistive robot to plan ahead for reactive responses. Furthermore, anticipation can even improve the detection accuracy of past activities. The challenge, however, is two-fold: we need to capture the rich context for modeling the activities and object affordances, and we need to anticipate the distribution over a large space of future human activities. In this work, we represent each possible future using an anticipatory temporal conditional random field (ATCRF) that models the rich spatial-temporal relations through object affordances. We then consider each ATCRF as a particle and represent the distribution over the potential futures using a set of particles. In an extensive evaluation on the CAD-120 human activity RGB-D dataset, we first show that anticipation improves the state-of-the-art detection results. We then show that for new subjects (not seen in the training set), we obtain an activity anticipation accuracy (defined as whether one of the top three predictions actually happened) of 84.1, 74.4, and 62.2 percent for anticipation times of 1, 3, and 10 seconds, respectively. Finally, we also show a robot using our algorithm to perform a few reactive responses. |
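The abstract's central mechanism, approximating the distribution over possible futures with a set of weighted particles where each particle is one sampled future (an ATCRF), can be sketched in a few lines. This is only an illustrative sketch, not the authors' implementation: `sample_future` and `score_future` stand in for the paper's affordance-based future generation and CRF scoring, and `FutureParticle` is a hypothetical container.

```python
# Illustrative sketch (not the authors' code) of the particle-based anticipation
# idea described in the abstract: each particle holds one hypothesized future
# (an ATCRF in the paper), and the weighted set approximates the distribution
# over potential futures. sample_future/score_future are hypothetical stand-ins.
from dataclasses import dataclass
from typing import Any, Callable, List


@dataclass
class FutureParticle:
    future: Any      # one hypothesized future (next activity + object trajectories)
    weight: float    # normalized likelihood of this hypothesis under the model


def anticipate(observation: Any,
               sample_future: Callable[[Any], Any],
               score_future: Callable[[Any, Any], float],
               num_particles: int = 100) -> List[FutureParticle]:
    """Approximate the distribution over possible futures with weighted particles."""
    particles = [FutureParticle(f, score_future(observation, f))
                 for f in (sample_future(observation) for _ in range(num_particles))]
    total = sum(p.weight for p in particles) or 1.0
    for p in particles:               # normalize so the weights form a distribution
        p.weight /= total
    # Highest-weight particles are the most likely anticipated futures.
    return sorted(particles, key=lambda p: p.weight, reverse=True)
```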
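The anticipation accuracy reported in the abstract is defined as whether one of the top three predicted activities actually happened. A hedged illustration of that metric, using made-up activity labels rather than CAD-120 annotations, is given below.

```python
# Hedged illustration of the metric quoted in the abstract: an anticipation is
# counted as correct if the ground-truth next activity appears among the top
# three predictions. The labels below are arbitrary examples, not CAD-120 data.
from typing import Sequence


def top3_anticipation_accuracy(predictions: Sequence[Sequence[str]],
                               ground_truth: Sequence[str]) -> float:
    """Fraction of examples whose true next activity is in the top-3 predictions."""
    if not ground_truth:
        return 0.0
    hits = sum(1 for top3, truth in zip(predictions, ground_truth)
               if truth in top3[:3])
    return hits / len(ground_truth)


# Toy usage with made-up labels:
preds = [["reaching", "moving", "drinking"], ["placing", "opening", "closing"]]
truth = ["moving", "pouring"]
print(top3_anticipation_accuracy(preds, truth))  # 0.5
```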
Author | Koppula, Hema S.; Saxena, Ashutosh |
Author_xml | Koppula, Hema S., hema@cs.cornell.edu, Comput. Sci. Dept., Cornell Univ., New York, NY, USA; Saxena, Ashutosh, asaxena@cs.cornell.edu, Comput. Sci. Dept., Cornell Univ., New York, NY, USA |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/26656575 (View this record in MEDLINE/PubMed) |
CODEN | ITPIDJ |
ContentType | Journal Article |
DOI | 10.1109/TPAMI.2015.2430335 |
Discipline | Engineering; Computer Science |
EISSN | 2160-9292 1939-3539 |
EndPage | 29 |
ExternalDocumentID | 26656575 10_1109_TPAMI_2015_2430335 7102751 |
Genre | orig-research; Research Support, U.S. Gov't, Non-P.H.S.; Research Support, Non-U.S. Gov't; Journal Article |
GrantInformation_xml | US National Science Foundation (NSF), funder ID 10.13039/100000001; ARO, grant W911NF-12-1-0267, funder ID 10.13039/100000183 |
ISSN | 0162-8828 1939-3539 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 1 |
Keywords | human activity anticipation; robotics perception; RGBD Data; machine learning; 3D activity understanding |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html |
PMID | 26656575 |
PQID | 1749618111 |
PQPubID | 23479 |
PageCount | 16 |
PublicationCentury | 2000 |
PublicationDate | 2016-01-01 |
PublicationDateYYYYMMDD | 2016-01-01 |
PublicationDecade | 2010 |
PublicationPlace | United States |
PublicationTitle | IEEE Transactions on Pattern Analysis and Machine Intelligence |
PublicationTitleAbbrev | TPAMI |
PublicationTitleAlternate | IEEE Trans Pattern Anal Mach Intell |
PublicationYear | 2016 |
Publisher | IEEE |
StartPage | 14 |
SubjectTerms | 3D Activity Understanding; Algorithms; Anticipation, Psychological; Context; Context modeling; Heating; Hidden Markov models; Human Activities; Human Activity Anticipation; Humans; Imaging, Three-Dimensional; Machine Learning; Models, Statistical; Movement; Perception; RGBD Data; Robotics; Robotics Perception; Robots; Trajectory; Videos |
Title | Anticipating Human Activities Using Object Affordances for Reactive Robotic Response |
URI | https://ieeexplore.ieee.org/document/7102751 https://www.ncbi.nlm.nih.gov/pubmed/26656575 https://www.proquest.com/docview/1749618111 |
Volume | 38 |