VCG authors will present three papers at NeurIPS and EMNLP 2024.
1. CONTRAST: Continual Multi-Source Adaptation to Dynamic Distributions, Sk M. Ahmed, F. F. Niloy, D. Raychaudhuri, X. Chang, S. Oymak, A. Roy-Chowdhury, NeurIPS 2024
2. Selective Attention: Enhancing Transformer through Principled Context Control, X. Zhang, X. Chang, M. Li, A. Roy-Chowdhury, J. Chen, S. Oymak...
Three papers accepted to ICCV 2023.
1. Prior-guided Source-free Domain Adaptation for Human Pose Estimation, D. Raychaudhuri, C-K Ta, A. Dutta, R. Lal, A. Roy-Chowdhury
2. SUMMIT: Source-Free Adaptation of Uni-Modal Models to Multi-Modal Targets, C. Simons, D. Raychaudhuri, Sk. M. Ahmed, S. You, K. Karydis, A. Roy-Chowdhury
3. Efficient Controllable Multi-Task Architectures, A. Aich...
Dynamic scene graph generation (SGG) from videos is challenging due to the inherent dynamics of a scene, temporal fluctuation of model predictions, and the long-tailed distribution of visual relationships, in addition to the challenges already present in image-based SGG. Existing methods for dynamic SGG have primarily focused on capturing...
The majority of methods for crafting adversarial attacks have focused on scenes with a single dominant object (e.g., images from ImageNet). Natural scenes, in contrast, contain multiple dominant objects that are semantically related. It is therefore crucial to design attack strategies that look beyond learning on single-object scenes or attacking single-object victim...
Cost-effective depth and infrared sensors are now a practical alternative to conventional RGB sensors and offer advantages over RGB in domains like autonomous navigation and remote sensing. Building computer vision and deep learning systems for depth and infrared data is therefore crucial. However, large labeled datasets for these modalities are still lacking. In such...
Image enhancement approaches often assume that the noise is signal-independent and approximate the degradation model as zero-mean additive Gaussian noise. However, this assumption does not hold for biomedical imaging systems, where sensor-based sources of noise are proportional to signal strength, and the noise is better represented as a Poisson process. The MICCAI paper explores...
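The distinction above is easy to see numerically. Below is a minimal toy sketch (not from the paper; the signal values and sigma are illustrative) contrasting signal-dependent Poisson noise, whose variance grows with intensity, against the common zero-mean additive Gaussian model, whose variance does not:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "image": low- and high-intensity regions (e.g., photon counts).
signal = np.array([10.0, 10.0, 200.0, 200.0] * 250)

# Signal-dependent noise: each pixel is a Poisson draw whose mean
# (and hence variance) equals the underlying signal strength.
poisson_obs = rng.poisson(signal).astype(float)

# Signal-independent model: zero-mean additive Gaussian, fixed sigma.
gaussian_obs = signal + rng.normal(0.0, 5.0, size=signal.shape)

# Under Poisson noise the variance tracks the intensity; under the
# additive Gaussian assumption it stays flat across regions.
low, high = signal == 10.0, signal == 200.0
print(poisson_obs[low].var(), poisson_obs[high].var())    # ~10 vs ~200
print(gaussian_obs[low].var(), gaussian_obs[high].var())  # both ~25
```

A denoiser tuned under the Gaussian assumption will thus under-estimate noise in bright regions and over-estimate it in dark ones, which is the mismatch the paper targets.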
Multi-task learning commonly encounters competition for resources among tasks, especially when model capacity is limited. This challenge motivates models that allow control over the relative importance of tasks and the total compute cost at inference time. In this work, we propose such a controllable multi-task network that dynamically adjusts its architecture and weights to match the...
Most existing attack mechanisms are targeted toward misclassifying specific objects and activities. However, most scenes contain multiple objects, and there is usually some relationship among the objects in the scene, e.g., certain objects co-occur more frequently than others. This is often referred to as context in computer vision. We have shown...
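The co-occurrence notion of context mentioned above can be made concrete with a tiny sketch. The scenes and labels below are hypothetical, invented purely to illustrate counting which object pairs tend to appear together:

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-image object annotations (label lists).
scenes = [
    ["person", "bicycle", "road"],
    ["person", "car", "road"],
    ["keyboard", "monitor", "mouse"],
    ["keyboard", "monitor", "desk"],
]

# Count how often each unordered pair of labels appears together.
cooccur = Counter()
for labels in scenes:
    for a, b in combinations(sorted(set(labels)), 2):
        cooccur[(a, b)] += 1

print(cooccur[("keyboard", "monitor")])        # 2: strongly co-occurring pair
print(cooccur.get(("bicycle", "monitor"), 0))  # 0: never co-occur
```

Statistics like these are what make context-aware defenses possible: an attack that flips a "monitor" to a "bicycle" leaves the surrounding co-occurrence pattern inconsistent.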
VCG researchers are involved in two projects related to machine learning for robot autonomy. The first, funded under the National Robotics Initiative, will help monitor the health of crops. It is a collaboration with UC Merced. The second, funded by the UC Office of the President, will study the impact of agricultural technology on workforce...
Given a text query, retrieving the relevant video segments is an important and challenging problem. While many methods have addressed this problem for individual videos, our work is the first to consider a video corpus. We propose a hierarchical approach that models both intra-video and inter-video semantic relationships. More details can be found in our...
1. The paper Unsupervised Multi-source Domain Adaptation Without Access to Source Data, accepted as an oral at CVPR 2021, proposes Data frEe multi-sourCe unsupervISed domain adaptatiON (DECISION), which identifies the optimal blend of source models, without any source data, to generate the target model by optimizing a carefully designed unsupervised loss. Under intuitive assumptions, it...
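The core idea, blending several frozen source models using only unlabeled target data, can be sketched in miniature. Everything below is a toy stand-in, not the paper's method: the two "source models" are random linear classifiers, and mean prediction entropy substitutes for the paper's carefully designed unsupervised loss, with a grid search standing in for gradient-based weight learning:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Two frozen "source models" returning class probabilities for a batch
# of unlabeled target features x (200 samples, 3 features, 2 classes).
x = rng.normal(size=(200, 3))
W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(3, 2))
src1 = lambda f: softmax(f @ W1 * 4.0)   # sharp, confident predictions
src2 = lambda f: softmax(f @ W2 * 0.1)   # near-uniform predictions

def mean_entropy(p):
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

# Pick the blending weight that minimizes mean prediction entropy on
# the target batch -- a simple proxy for an unsupervised blending loss.
alphas = np.linspace(0, 1, 101)
best = min(alphas, key=lambda a: mean_entropy(a * src1(x) + (1 - a) * src2(x)))
print(best)  # leans heavily on the more confident source model
```

The search assigns essentially all weight to the source model whose predictions are confident on the target batch, which is the intuition behind learning a blend without ever touching source data.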
The paper titled "Spatio-Temporal Representation Factorization for Video-based Person Re-Identification" proposes a flexible new computational unit, the Spatio-Temporal Representation Factorization (STRF) module, which can be used in conjunction with most existing 3D convolutional neural network architectures for re-ID. The key innovations of STRF over prior works include explicit pathways for learning discriminative temporal and spatial...
Can you predict an activity that may occur in the near future and provide a natural language description of it? See our recent journal paper on this topic.
Amit Roy-Chowdhury is leading a team of ECE and CSE faculty that has received a grant totaling almost $1 million from the Defense Advanced Research Projects Agency, or DARPA, to understand the vulnerability of computer vision systems to adversarial attacks. The project is part of the Techniques for Machine Vision Disruption program, which is part...
Papers on High-Frame Rate Video Enhancement, Adversarial Knowledge Transfer, and Video Fast-Forwarding accepted to ACM-MM 2020. The paper ALANET: Adaptive Latent Attention Network for Joint Video Deblurring and Interpolation, accepted as an oral at ACM-MM 2020, proposes the Adaptive Latent Attention Network (ALANET) to synthesize sharp high-frame-rate videos by jointly performing the tasks of both...
Three papers on different topics were accepted to ECCV 2020. The paper titled "Domain Adaptive Semantic Segmentation Using Weak Labels" proposes a unified framework for both unsupervised domain adaptation, using estimated pseudo-weak labels, and the novel scenario of weakly-supervised adaptation, using weak labels acquired from human annotators. This work bridges the...
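A weak label in this setting is simply the set of categories present in an image, with no spatial information. The toy sketch below (class IDs and the tiny map are hypothetical, not from the paper) shows how such a label can be derived from a predicted segmentation map, which is how pseudo-weak labels can be estimated without human annotation:

```python
import numpy as np

# Hypothetical class IDs: 0=background, 1=road, 2=car, 3=person.
CLASSES = {0: "background", 1: "road", 2: "car", 3: "person"}

# A tiny predicted segmentation map (e.g., per-pixel argmax of scores).
seg_map = np.array([
    [1, 1, 2, 2],
    [1, 1, 2, 2],
    [1, 1, 0, 0],
])

# An image-level weak label records only which categories are present,
# discarding all spatial extent -- far cheaper than pixel annotation.
weak_label = sorted(CLASSES[c] for c in np.unique(seg_map) if c != 0)
print(weak_label)  # ['car', 'road']
```

A human annotator supplying the same set of tags per image yields the weakly-supervised variant, which is why one framework can cover both scenarios.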
Papers on Non-Adversarial Video Synthesis with Learned Priors and Camera On-boarding for Person Re-identification using Hypothesis Transfer Learning were accepted to CVPR 2020. 1. The paper, Non-Adversarial Video Synthesis with Learned Priors, at CVPR 2020 proposes an approach to generate videos from latent noise vectors, without any reference input frames, by jointly optimizing the input...