Learning at the Edge: an Extreme Value Theory for Visual Recognition
Measuring Human Reaction Time to Improve Open-Set Recognition (OSR)
The human ability to recognize that something is new still outperforms all machine models on visual recognition tasks that involve novelty. Human perception, as measured by the methods and procedures of visual psychophysics, can provide vital information for detecting novelty in visual recognition tasks. In this project, we designed and performed a large-scale experiment to collect over 900,000 human reaction time measurements, and we designed a novel loss function for the open-set recognition task.
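One way reaction time measurements can inform a loss function is to treat slowly answered samples as harder and weight their errors more heavily. The sketch below is a minimal, hypothetical illustration of that idea; the function name, the linear weighting scheme, and the `tau` parameter are illustrative assumptions, not the loss actually used in the project.

```python
import numpy as np

def rt_weighted_cross_entropy(logits, labels, reaction_times, tau=1.0):
    """Hypothetical reaction-time-weighted cross-entropy.

    Samples that humans answer slowly (high reaction time) are treated as
    harder, i.e. closer to the decision boundary, and receive larger weights.
    """
    # stable softmax over class logits
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # negative log-likelihood of the true class for each sample
    nll = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    # slower human responses -> heavier penalty (assumed weighting)
    weights = 1.0 + reaction_times / tau
    return float(np.mean(weights * nll))
```

Under this scheme, the same classification mistakes cost more when they occur on samples that took humans longer to judge, pushing the model to spend capacity on perceptually ambiguous inputs.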
Human Activity Recognition in an Open World
Managing novelty in human activity recognition (HAR) is critical in realistic settings for improving task performance and ensuring that solutions generalize beyond previously seen samples. Novelty manifests in HAR as unseen samples, activities, objects, environments, and sensor changes, among other forms. Novelty may be task-relevant, such as a new class or new features, or task-irrelevant, resulting in nuisance novelty such as never-before-seen noise, blur, or distorted video recordings. To perform HAR optimally, algorithmic solutions must be tolerant to nuisance novelty and must learn over time in the face of novelty. During this summer internship, we conducted a pilot study on HAR with novelty and ran experiments comparing performance across several datasets. We also explored incremental learning for HAR based on X3D networks and the Extreme Value Machine (EVM).
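The EVM detects novelty by fitting extreme value distributions to the margin distances between each known-class exemplar and points from other classes, then scoring a query by its probability of inclusion. The following is a minimal sketch of that idea on synthetic features; the data, the single-exemplar setup, and the tail size are illustrative assumptions, not the project's configuration.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
known = rng.normal(0.0, 1.0, size=(100, 8))    # features of one known activity
others = rng.normal(5.0, 1.0, size=(100, 8))   # features of other activities

exemplar = known.mean(axis=0)
# EVM margin distances: half-distances from the exemplar to the
# nearest other-class points (tail of 20 smallest, an assumed tail size)
d = np.sort(np.linalg.norm(others - exemplar, axis=1))[:20] / 2.0
shape, loc, scale = weibull_min.fit(d, floc=0.0)

def psi(x):
    """Probability of inclusion: near 1 close to the exemplar, near 0 far away."""
    dist = np.linalg.norm(x - exemplar)
    return float(np.exp(-((dist / scale) ** shape)))

in_dist_score = psi(known[0])   # a sample of the known activity
novel_score = psi(others[0])    # a sample of an unseen activity
```

For incremental learning, new activities can be added by fitting Weibull models for their exemplars without retraining existing ones, which is what makes the EVM attractive for open-world HAR.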
Deep Learning for Tumor Classification on Frozen Sections
Med-A-Nets: Simultaneous Segmentation of Multiple Organs with Deep Adversarial Networks
Deep learning has become a dominant technique for solving a wide variety of tasks, and one major line of work focuses on applying it to medical image analysis.
In this project, we introduce a novel adversarial training strategy for deep neural networks that segment multiple organs in an image simultaneously. We developed a deep adversarial network, named Med-A-Nets, that jointly trains a convolutional neural network (CNN) and an adversarial discriminator for robust segmentation of the multiple organs observed in an image. By addressing the challenges posed by multi-organ medical image segmentation, Med-A-Nets demonstrates superior performance on standard datasets.
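In such an adversarial setup, the segmenter minimizes a per-pixel segmentation loss plus a term that encourages the discriminator to mistake predicted masks for ground truth, while the discriminator is trained to tell the two apart. The toy computation below sketches the two objectives for a single step; the shapes, the stand-in discriminator scores, and the weighting factor `lam` are illustrative assumptions, not the Med-A-Nets architecture or training recipe.

```python
import numpy as np

def pixel_cross_entropy(pred, target, eps=1e-12):
    """Mean per-pixel cross-entropy over organ channels (one-hot target)."""
    return float(-np.mean(np.sum(target * np.log(pred + eps), axis=-1)))

def bce(score, label, eps=1e-12):
    """Binary cross-entropy on the discriminator's real/fake score."""
    return float(-(label * np.log(score + eps)
                   + (1 - label) * np.log(1 - score + eps)))

# toy 4x4 image with 3 organ channels
rng = np.random.default_rng(1)
target = np.eye(3)[rng.integers(0, 3, size=(4, 4))]   # one-hot ground truth
pred = target * 0.8 + 0.1                             # softened prediction
pred /= pred.sum(axis=-1, keepdims=True)

d_on_pred = 0.3   # stand-in discriminator output on the predicted mask
d_on_true = 0.9   # stand-in discriminator output on the ground-truth mask

lam = 0.1  # assumed weight balancing the two segmenter terms
# segmenter: match ground truth AND make D score the prediction as real
seg_loss = pixel_cross_entropy(pred, target) + lam * bce(d_on_pred, 1)
# discriminator: score ground truth as real (1) and prediction as fake (0)
disc_loss = bce(d_on_true, 1) + bce(d_on_pred, 0)
```

The adversarial term acts as a learned shape prior: because the discriminator sees whole masks, it penalizes anatomically implausible organ layouts that a purely per-pixel loss would miss.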