Coarse-to-Fine Q-attention: Efficient Learning for Visual Robotic Manipulation via Discretisation

Bibliographic Details
Published in Proceedings (IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Online), pp. 13729 - 13738
Main Authors James, Stephen; Wada, Kentaro; Laidlow, Tristan; Davison, Andrew J.
Format Conference Proceeding
Language English
Published IEEE 01.06.2022
ISSN 1063-6919
DOI 10.1109/CVPR52688.2022.01337

Abstract We present a coarse-to-fine discretisation method that enables the use of discrete reinforcement learning approaches in place of unstable and data-inefficient actor-critic methods in continuous robotics domains. This approach builds on the recently released ARM algorithm, which replaces the continuous next-best pose agent with a discrete one, with coarse-to-fine Q-attention. Given a voxelised scene, coarse-to-fine Q-attention learns what part of the scene to 'zoom' into. When this 'zooming' behaviour is applied iteratively, it results in a near-lossless discretisation of the translation space, and allows the use of a discrete action, deep Q-learning method. We show that our new coarse-to-fine algorithm achieves state-of-the-art performance on several difficult sparsely rewarded RLBench vision-based robotics tasks, and can train real-world policies, tabula rasa, in a matter of minutes, with as little as 3 demonstrations.
Author Laidlow, Tristan
Wada, Kentaro
Davison, Andrew J.
James, Stephen
Author_xml – sequence: 1
  givenname: Stephen
  surname: James
  fullname: James, Stephen
  email: slj12@imperial.ac.uk
  organization: Dyson Robotics Lab, Imperial College London
– sequence: 2
  givenname: Kentaro
  surname: Wada
  fullname: Wada, Kentaro
  email: k.wada18@imperial.ac.uk
  organization: Dyson Robotics Lab, Imperial College London
– sequence: 3
  givenname: Tristan
  surname: Laidlow
  fullname: Laidlow, Tristan
  email: t.laidlow15@imperial.ac.uk
  organization: Dyson Robotics Lab, Imperial College London
– sequence: 4
  givenname: Andrew J.
  surname: Davison
  fullname: Davison, Andrew J.
  email: a.davison@imperial.ac.uk
  organization: Dyson Robotics Lab, Imperial College London
CODEN IEEPAD
ContentType Conference Proceeding
DOI 10.1109/CVPR52688.2022.01337
DatabaseName IEEE Electronic Library (IEL) Conference Proceedings
IEEE Proceedings Order Plan (POP) 1998-present by volume
IEEE Xplore All Conference Proceedings
IEEE/IET Electronic Library
IEEE Proceedings Order Plans (POP) 1998-present
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://proxy.k.utb.cz/login?url=https://ieeexplore.ieee.org/
  sourceTypes: Publisher
Discipline Applied Sciences
EISBN 1665469463
9781665469463
EISSN 1063-6919
EndPage 13738
ExternalDocumentID 9878928
Genre orig-research
IsPeerReviewed false
IsScholarly true
Language English
PageCount 10
PublicationCentury 2000
PublicationDate 2022-June
PublicationDateYYYYMMDD 2022-06-01
PublicationDate_xml – month: 06
  year: 2022
  text: 2022-June
PublicationDecade 2020
PublicationTitle Proceedings (IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Online)
PublicationTitleAbbrev CVPR
PublicationYear 2022
Publisher IEEE
Publisher_xml – name: IEEE
SourceID ieee
SourceType Publisher
StartPage 13729
SubjectTerms Computer vision
Machine vision
Pattern recognition
Q-learning
Robot vision systems
Task analysis
Vision applications and systems; Others; Robot vision
Visualization
Title Coarse-to-Fine Q-attention: Efficient Learning for Visual Robotic Manipulation via Discretisation
URI https://ieeexplore.ieee.org/document/9878928