Curieux.JY
  • Jung Yeon Lee


🧩 CoRL 2025

Tags: corl · 2025 · conference

Plan & Search for meaningful insights

Published: August 26, 2025

Programs

โ€œ์†/์ด‰๊ฐ/๋Ÿฌ๋‹โ€ ์ค‘์‹ฌ ์ฝ”์Šค์— ๋งž์ถฐ ์ •๋ฆฌํ•œ CoRL 2025 Schedule ์ •๋ฆฌ

๐Ÿ“… ์ผ์ • ์š”์•ฝ

  • 9/27 (Sat): workshop focus – 2nd Workshop on Dexterous Manipulation (ideal for networking and tutorials with hand/tactile researchers) (Dexterous Manipulation Workshop)
  • 9/28 (Sun): main conference, orals first – Oral 6 (Humanoid & Hardware) → Oral 3 (Manipulation II)
  • 9/29 (Mon): poster spotlights and poster hall – centered on Spotlights 5 and 6, with demos and booths in between
  • 9/30 (Tue): remaining posters/demos, plus keynote/EC Keynote check (move between rooms as the schedule allows) (CoRL)

2025-09-27 (Sat) – Workshop Day

  • 09:00 – 09:30 : Registration and opening (tentative)
  • 09:30 – 10:30 : RemembeRL Workshop – Invited Talk 1 (workshop start)
    (+ break: 10:30 – 11:00)
  • 11:00 – 12:30 : Workshop invited talks and sessions (TBD)
  • 12:30 – 13:30 : Lunch
  • 13:30 – 15:00 : Later workshop sessions and Poster Session 2
    (+ break: 15:00 – 15:30)
  • 15:30 – 16:30 : Poster Spotlights / panel discussion
  • 16:30 – 16:40 : Workshop wrap-up and closing remarks

2025-09-28 (Sun) – Main Conference Day 1

  • 09:00 – 10:00 : Conference registration and opening
  • 10:00 – 12:00 : Oral Session 6 (attending DexUMI, DexSkin)
  • 12:00 – 13:00 : Lunch
  • 13:00 – 15:00 : Oral Session 3 (attending KineSoft, Tactile Beyond Pixels, Cross-Sensor Touch Generation)
  • 15:00 – 15:30 : Break / networking
  • 15:30 – 18:00 : Spotlight 5 posters (Self-supervised perception, Sim-to-Real RL, Crossing the Gap)
  • 18:00 – : Free time / demo booth tour

2025-09-29 (Mon) – Main Conference Day 2

  • 09:00 – 10:30 : Poster Spotlight 6 (VT-Refine, KineDex, LocoTouch, Text2Touch)
  • 10:30 – 11:30 : Browse additional posters of interest / break
  • 11:30 – 13:00 : Early Career Keynotes (Yuan, Fazeli, Pinto)
  • 13:00 – 14:00 : Lunch
  • 14:00 – 15:30 : Sessions for papers of interest (revisits and booth visits)
  • 15:30 – 16:00 : Break
  • 16:00 – 18:00 : Poster sweep or additional networking
  • 18:00 – : Free time or evening sessions

2025-09-30 (Tue) – Main Conference Day 3 / Keynote Day

  • 09:00 – 10:00 : Registration / wrap-up preparation
  • 10:00 – 10:30 : Jun-Ho Oh keynote ("The Golden Age of Humanoid Robots")
  • 10:30 – 11:00 : Kristen Grauman keynote ("Skill learning from video")
  • 11:00 – 12:00 : Remaining oral/poster sessions of interest, or demo tour
  • 12:00 – 13:00 : Lunch
  • 13:00 – 15:00 : Booth visits / networking
  • 15:00 – 16:00 : Catch remaining talks or presenter Q&A
  • 16:00 – 18:00 : Final wrap-up / last networking

🎤 Selected Oral Sessions

Oral 6 – Humanoid & Hardware (key keywords: hands · tactile · real world)

  • DexUMI: Using Human Hand as the Universal Manipulation Interface for Dexterous Manipulation – arXiv
    • One-line takeaway: transfers human hand motion to a variety of robot hands via a wearable hand exoskeleton plus vision inpainting; 86% average real-world success.
    • Predicted value: 🔥 Must-see – the universal hand-transfer and data-collection pipeline looks highly practical. (arXiv)
  • DexSkin: High-Coverage Conformable Robotic Skin for Learning Contact-Rich Manipulation – Project page
    • One-line takeaway: a flexible, high-density capacitive e-skin covering both the palmar and dorsal sides of the fingers with tactile sensing, demonstrated on learning and transfer for contact-rich tasks.
    • Predicted value: 🔥 Must-see – real-world learning on low-cost, large-area tactile hardware is compelling. (arXiv pending) (DexSkin)

Oral 3 – Manipulation II (key keywords: soft hands · tactile representation · tactile generation)

  • KineSoft: Learning Proprioceptive Manipulation Policies with Soft Robot Hands – arXiv
    • One-line takeaway: an imitation-learning framework that combines kinesthetic teaching with shape-conditioned control, using proprioception from the soft hand's internal deformation/strain.
    • Predicted value: 👍 High – clear evidence of practical soft-hand demos and accuracy gains. (arXiv)
  • Tactile Beyond Pixels: Multisensory Touch Representations for Robot Manipulation – arXiv
    • One-line takeaway: pretrains a four-modality touch representation (Sparsh-X) over image, audio, motion, and pressure; policy success +63%, robustness +90%.
    • Predicted value: 🔥 Must-see – presents evidence for scaling and generalizing multimodal touch representations. (arXiv)
  • Cross-Sensor Touch Generation – (TBD / limited information)
    • One-line takeaway: targets generation/translation across heterogeneous tactile sensors for data augmentation and representation alignment.
    • Predicted value: 👀 Watch – could move up depending on whether detailed metrics are released. (arXiv)
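To make the multimodal-fusion idea easier to discuss at the session, here is a minimal toy sketch of concatenating per-modality touch embeddings. All names, dimensions, and the linear encoders are my own illustrative assumptions, not Sparsh-X's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Toy per-modality encoder: linear map + ReLU (illustrative only)."""
    return np.maximum(x @ w, 0.0)

# Hypothetical feature sizes for the four touch modalities.
dims = {"image": 64, "audio": 32, "motion": 16, "pressure": 8}
weights = {k: rng.standard_normal((d, 24)) for k, d in dims.items()}

def fuse_touch(signals):
    """Concatenate per-modality embeddings into one touch representation."""
    embs = [encode(signals[k], weights[k]) for k in dims]
    return np.concatenate(embs)  # shape: (4 * 24,) = (96,)

signals = {k: rng.standard_normal(d) for k, d in dims.items()}
z = fuse_touch(signals)
print(z.shape)  # (96,)
```

A downstream policy would then consume `z` alongside visual features; the paper's reported gains come from self-supervised pretraining of the encoders, which this sketch omits.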

🧪 Poster Spotlights / Posters (priority selection)

Spotlight 5 & Poster 3 (focus: tactile · humanoid dexterity)

  • Self-supervised perception for tactile skin covered dexterous hands – arXiv
    • One-line takeaway: a self-supervised Sparsh-skin encoder maps whole-hand magnetometer-based skin signals into latent representations; +41% performance and better sample efficiency.
    • Predicted value: 🔥 Must-see – improves both perception and policy learning with full-coverage skin. (arXiv)
  • Sim-to-Real Reinforcement Learning for Vision-Based Dexterous Manipulation on Humanoids – arXiv
    • One-line takeaway: real-to-sim auto-tuning, generalizable reward design, and divide-and-conquer distillation deliver demonstrations and generalization for bimanual humanoid dexterity.
    • Predicted value: 👍 High – less demo dependence; contact-rich tasks achieved with RL alone. (arXiv)
  • Crossing the Human-Robot Embodiment Gap with Sim-to-Real RL using One Human Demonstration – arXiv
    • One-line takeaway: from a single human RGB-D demo, an object-trajectory reward plus pre-grasp pose initialization bridges the human-robot gap with RL.
    • Predicted value: 🔥 Must-see – dramatic data-cost savings and practical human-to-robot transfer. (arXiv)
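The one-demo idea (track the demonstrated object trajectory) is easiest to discuss with a concrete shaping reward in mind. A hypothetical toy version, not the paper's exact formulation:

```python
import numpy as np

# Hypothetical: the single human demo yields a reference object
# trajectory of T positions (here a straight 3D line for illustration).
demo_traj = np.linspace([0.0, 0.0, 0.0], [0.3, 0.0, 0.2], num=50)

def object_tracking_reward(obj_pos, t, sigma=0.05):
    """Dense reward: closeness of the current object position to the
    demo trajectory at timestep t (a common shaping choice; the scale
    sigma and the Gaussian form are my assumptions)."""
    ref = demo_traj[min(t, len(demo_traj) - 1)]
    d = np.linalg.norm(obj_pos - ref)
    return float(np.exp(-d**2 / (2 * sigma**2)))

r_on = object_tracking_reward(demo_traj[10], 10)                       # on the reference
r_off = object_tracking_reward(demo_traj[10] + np.array([0.1, 0, 0]), 10)  # 10 cm off
```

Because the reward is defined on the object, not the hand, it sidesteps the embodiment gap: the robot is free to find its own contact strategy as long as the object follows the demonstrated path.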

Spotlight 6 & Poster 3 (tactile · reward · teaching pipelines)

  • VT-Refine: Learning Bimanual Assembly with Visuo-Tactile Feedback via Simulation Fine-Tuning – OpenReview
    • One-line takeaway: combines real demos, high-fidelity tactile simulation, and RL to learn precise bimanual assembly (with demos and ablations).
    • Predicted value: 👍 High – a textbook case of visuo-tactile integration. (OpenReview)
  • KineDex: Learning Tactile-Informed Visuomotor Policies via Kinesthetic Teaching for Dexterous Manipulation – arXiv
    • One-line takeaway: hand-over-hand kinesthetic teaching plus a visuo-tactile policy and force control reaches 74.4% on contact-rich tasks.
    • Predicted value: 👍 High – reports real performance numbers for combined force/tactile control. (arXiv)
  • Text2Touch: Tactile In-Hand Manipulation with LLM-Designed Reward Functions – Project page
    • One-line takeaway: applies LLM-based reward design to real vision-based tactile in-hand rotation, cutting reward-engineering cost.
    • Predicted value: 👀 Watch – worth checking where real tactile hardware meets LLM rewards. (efi robotics)
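For the Text2Touch conversation it helps to picture what an "LLM-designed reward" could look like. A hypothetical sketch of a reward shape for tactile in-hand rotation (my own toy example, not the paper's generated code):

```python
def rotation_reward(omega_z, target=1.0, slip=0.0, w_slip=0.5):
    """Track a target angular velocity about z and penalize tactile slip.
    Illustrative only: in LLM reward design such terms and weights are
    proposed and iterated by the model against training outcomes."""
    tracking = -abs(omega_z - target)   # 0 when rotating at the target rate
    return tracking - w_slip * slip     # slip term discourages losing the object

good = rotation_reward(1.0)             # on-target, no slip
bad = rotation_reward(0.2, slip=0.4)    # slow and slipping
```

A good question at the poster is which of these ingredients (terms, weights, curricula) the LLM actually discovered versus what was fixed in the prompt.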

Spotlight 4 & Poster 2

์ด๋™ ์‹œ๊ฐ„ ์—ฌ์œ  ์‹œ ๊ฐ•์ถ”

  • LocoTouch: Learning Dexterous Quadrupedal Transport with Tactile Sensing – arXiv
    • One-line takeaway: a back-mounted tactile array plus sim alignment lets a quadruped carry unsecured cylindrical objects over long distances, zero-shot in the real world.
    • Predicted value: 👍 High – a novel application that grafts tactile sensing onto locomotion. (arXiv)

🗝️ Keynotes / EC Keynotes (reference)

  • ๋ฉ”์ธ ํ‚ค๋…ธํŠธ & EC Keynotes(์˜ˆ: Wenzhen Yuan, Nima Fazeli, Lerrel Pinto) โ€” ์ด‰๊ฐ/์กฐ์ž‘ ์ฃผ์ œ์™€ ์ง์ ‘ ์—ฐ๊ด€๋œ ๊ด€์  ์—…๋ฐ์ดํŠธ์— ์œ ์ต. ์„ธ์…˜ ์‹œ๊ฐ„๋Œ€๋Š” ๊ณต์‹ ํŽ˜์ด์ง€ ์ˆ˜์‹œ ํ™•์ธ. (CoRL)

โœ… ์ด๋™/์ฒญ์ทจ ํŒ

  • Priority rule: (1) Oral 6·3 > (2) Spotlight 5·6 > (3) Spotlight 4 (when sessions conflict).
  • Poster strategy: route through the list above first, then quickly sweep adjacent posters (hand/tactile keywords) in the same section.
  • Buffer time: keep at least a 10–15 minute buffer for moving and conversation around each oral-to-poster transition (including questions/networking).
  • Reference retrieval: bookmark the arXiv/project pages in advance (on mobile) so references can be shared on the spot at booths and demos.

✋ Question List

Questions by session / presenter:

DexUMI (Oral 6)
  • How is user customization achieved in the wearable exoskeleton?
  • How does the vision-based inpainting generalize across different robotic hands?
  • Were there performance differences among different robotic hand platforms?

KineSoft (Oral 3)
  • What is the resolution and response time of the internal strain sensor array?
  • What are the performance differences between kinesthetic teaching and traditional imitation learning?
  • How reproducible are the demonstrations collected via kinesthetic teaching across different object types?

Tactile Beyond Pixels (Oral 3)
  • Which downstream task benefited most from the four tactile modalities?
  • How were the importance levels of each tactile modality assessed?
  • After self-supervised pretraining, how was it integrated into actual policy learning?

Sparsh-skin (Poster)
  • What were the main pretext tasks used in self-supervised learning for tactile skin?
  • Which features in the latent representation do you consider most crucial?
  • How much did manipulation task performance improve using these representations?

Sim-to-Real RL (Poster)
  • What were the key simulation parameters tuned for sim-to-real transfer?
  • What was the most challenging situation during the generalization phase?
  • What strategies were used to ensure RL policy stability in real-world settings?

Crossing the Gap (Poster)
  • What is the core intention behind using only one RGB-D demonstration?
  • How did a single demonstration impact performance depending on task complexity?
  • What synergies arise when combining this method with sim-based learning?

VT-Refine (Poster)
  • At what point are visual and tactile inputs fused in the control loop?
  • How was the difficulty level of the bimanual assembly task defined?
  • How did you design the fine-tuning strategy and iterative control structure?

KineDex (Poster)
  • How did you ensure policy robustness against noise/errors in simulation?
  • What was the failure rate for force/tactile-based contact manipulation?
  • How did you manage overheads during the transfer from teaching data to policy learning?

LocoTouch (Poster)
  • How were the shape and placement of the tactile array designed?
  • What domain randomization factors were included in the zero-shot sim-to-real transfer?
  • What were the failure cases in the cylinder transport task, and how were they improved?

+++

  • FFHFlow: A Flow-based Variational Approach for Learning Diverse Dexterous Grasps with Shape-Aware Introspection
  • Sim-to-Real Reinforcement Learning for Vision-Based Dexterous Manipulation on Humanoids
  • RobustDexGrasp: Robust Dexterous Grasping of General Objects
  • LodeStar: Long-horizon Dexterity via Synthetic Data Augmentation from Human Demonstrations
  • DexTrack: Towards Generalizable Neural Tracking Control for Dexterous Manipulation from Human References
  • Morphologically Symmetric Reinforcement Learning for Ambidextrous Bimanual Manipulation
  • Vision in Action: Learning Active Perception from Human Demonstrations
  • Point Policy: Unifying Observations and Actions with Key Points for Robot Manipulation
  • CDP: Towards Robust Autoregressive Visuomotor Policy Learning via Causal Diffusion
  • D-Cubed: Latent Diffusion Trajectory Optimisation for Dexterous Deformable Manipulation
  • Humanoid Policy ~ Human Policy
  • ClutterDexGrasp: A Sim-to-Real System for General Dexterous Grasping in Cluttered Scenes
  • RoboArena: Distributed Real-World Evaluation of Generalist Robot Policies
  • Training Strategies for Efficient Embodied Reasoning
  • π0.5: a Vision-Language-Action Model with Open-World Generalization
  • DemoSpeedup: Accelerating Visuomotor Policies via Entropy-Guided Demonstration Acceleration
  • ImMimic: Cross-Domain Imitation from Human Videos via Mapping and Interpolation
  • X-Sim: Cross-Embodiment Learning via Real-to-Sim-to-Real
  • SAIL: Faster-than-Demonstration Execution of Imitation Learning Policies
  • Streaming Flow Policy: Simplifying diffusion/flow-matching policies by treating action trajectories as flow trajectories
  • Steering Your Diffusion Policy with Latent Space Reinforcement Learning
  • Real2Render2Real: Scaling Robot Data Without Dynamics Simulation or Robot Hardware
  • DexSkin: High-Coverage Conformable Robotic Skin for Learning Contact-Rich Manipulation
  • Learning Visuotactile Skills with Two Multifingered Hands
  • 3D-ViTac: Learning Fine-Grained Manipulation with Visuo-Tactile Sensing
  • GELLO: A General, Low-Cost, and Intuitive Teleoperation Framework for Robot Manipulators

Copyright 2024, Jung Yeon Lee