🐲 About Xulong

Hello! I’m Xulong Tang; you can call me Ben!

I am a third-year PhD student in the Department of Computer Science at The University of Texas at Dallas (UTD). My research is primarily advised by Professor Rawan Alghofaili and co-advised by Professor Balakrishnan Prabhakaran.

Research Interests:

  • 3D Human Dance Motion Generation
  • Extended Reality (XR), including Virtual Reality, Augmented Reality, and Mixed Reality
  • Computer Vision
  • Multimedia Retrieval
  • Multimodal Learning

In addition to my research, I am also a game developer:

Using Unity, I created a VR educational game that teaches children about the universe, exhibited at the Sci-Tech Discovery Center.

I have also contributed to an unreleased third-person survival shooter built with Unreal Engine and C++, where I was responsible for gameplay programming and the user interface.

🔥 News

  • 2026.03: 🎉🎉 One paper was conditionally accepted at SIGGRAPH 2026.
  • 2026.03: 🎉🎉 One paper was accepted at CVPR Workshop 2026.
  • 2026.01: 🎉🎉 One paper was accepted at IEEE VR 2026.
  • 2025.09: 🎉🎉 One paper was accepted at NeurIPS 2025.
  • 2025.07: 🎉🎉 One paper was accepted at ACM Multimedia 2025.
  • 2024.06: 🎉🎉 Two papers were accepted at ICMR 2024.
  • 2024.01: 🎉🎉 Started my PhD at The University of Texas at Dallas.

πŸ“ Publications

* denotes equal contribution.

IEEE VR 2026
Personalized Dance Synthesis Based on Physical and Cognitive Intensities
Xulong Tang, Eun Yeo, Ruiyu Mao, Xiaohu Guo, Rawan Alghofaili

CVPR Workshop 2026
TokenDance: Token-to-Token Music-to-Dance Generation with Bidirectional Mamba
Ziyue Yang, Kaixing Yang, Xulong Tang

NeurIPS 2025
MEGADance: Mixture-of-Experts Architecture for Genre-Aware 3D Dance Generation
Kaixing Yang*, Xulong Tang*, Ziqiao Peng, Yuxuan Hu, Jun He, Hongyan Liu

ACM MM 2025
CoheDancers: Interactive Group Dance Generation via Music-Driven Coherence Decomposition
Kaixing Yang*, Xulong Tang*, Haoyu Wu, Qinliang Xue, Biao Qin, Hongyan Liu, Zhaoxin Fan

ICMR 2024
CoDancers: Music-Driven Coherent Group Dance Generation with Choreographic Unit
Kaixing Yang*, Xulong Tang*, Ran Diao, Hongyan Liu, Jun He, Zhaoxin Fan

ICMR 2024
BeatDance: A Beat-Based Model-Agnostic Contrastive Learning Framework for Music-Dance Retrieval
Kaixing Yang, Xukun Zhou, Xulong Tang, Ran Diao, Hongyan Liu, Jun He, Zhaoxin Fan

📖 Education

  • 2024.01 - present, PhD in Computer Science, The University of Texas at Dallas, Richardson, Texas
  • 2023.09 - 2024.01, Master of Computer Science, The University of Texas at Dallas, Richardson, Texas
  • 2018.09 - 2022.06, Bachelor’s degree, Xidian University, Xi’an, China

💼 Professional Experience

Reviewer

  • CVPR (2025–2026)
  • ECCV (2026)
  • IEEE Transactions on Computers (2024–2025)
  • IEEE Virtual Reality (2024–2025)

Instructor

  • Augmented Reality 3D User Interface Design Workshop (2024)
  • Human-Centered eXtended Reality Lab (2024)

Teaching Assistant

  • CS4332: Introduction to Programming Video Games (2025)
  • CS6331: Human-Computer Interaction (2024)

💻 Internships

  • 2021.12 - 2022.06, Unreal Engine Developer, Shengqu Games, Shanghai, China