Contest 1 Winner: Team USF-EE

Updated: Dec 1, 2020

Winning Topic: Topic 2: Information Fusion Across Extracted Data and Generation of Live/Dynamic Information Representation Across 3 or More Data Categories - $30,000 prize


Team Summary: Team USF-EE consists of two Ph.D. students (Mehmet Aktukmak and Keval Doshi) and two faculty advisors (Ismail Uysal and Yasin Yilmaz) from the Electrical Engineering Department at the University of South Florida. Mr. Aktukmak and Mr. Doshi mainly work on multimodal data fusion and video analytics, respectively. The research interests of Dr. Uysal (Ph.D., University of Florida, 2008) include machine learning, IoT, and RFID technologies. Dr. Yilmaz’s (Team Lead, Ph.D., Columbia University, 2014) research focuses on real-time machine learning for streaming and multimodal data.


Yasin Yilmaz is a member of a winning Contest 1 team. He joined the ASAPS team via Zoom to answer some questions about his team's winning solution and give some insight into the ASAPS Challenge.

To begin with, can you give us an idea of what your team’s approach was to the first contest?


Yasin: We were already doing research in data fusion and real-time event detection. I can say that the umbrella term for our research is machine learning for streaming data; this is our current focus. The real-time event detection part mainly focuses on video right now in our group; we are working on video anomaly detection, detecting anomalous events in video, but we have also worked on other data sets.
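
To make the real-time detection idea concrete, here is a minimal sketch of a sequential detector of the kind often used for streaming data: it accumulates per-frame anomaly scores CUSUM-style and raises an alarm once the accumulated evidence crosses a threshold. The scoring inputs, drift, and threshold values below are illustrative assumptions, not the team's actual method.

```python
import numpy as np

def cusum_stream(scores, drift=0.5, threshold=5.0):
    """CUSUM-style sequential detector over a stream of per-frame
    anomaly scores. 'drift' discounts nominal fluctuation; an alarm
    fires when accumulated evidence exceeds 'threshold'.
    All parameter values here are illustrative."""
    s = 0.0
    for t, score in enumerate(scores):
        # Accumulate evidence above the expected nominal level,
        # never letting the statistic go negative.
        s = max(0.0, s + score - drift)
        if s >= threshold:
            yield t   # frame index where an anomalous event is declared
            s = 0.0   # reset after the alarm

# Example: mostly nominal scores followed by a burst of high scores.
scores = np.concatenate([np.random.rand(100), 2 + np.random.rand(20)])
print(list(cusum_stream(scores)))
```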


A challenge is that the data is not likely to be synchronized, so you will have to wrestle with assessing the health of the data and resynchronizing it within your algorithms. We are doing that in this challenge because it really represents the real world and how data flows. What are your thoughts about managing unsynchronized data?


Yasin: The practical challenge falls under robustness. You will not always have all the data available, the data will be different, and you will not have such regular data. Maybe the most regular data will be video, the main data in our context. From time to time you will have text and audio, and that creates a real challenge if the algorithm is not robust and assumes that everything has the same flow. That is not realistic, obviously. The algorithm should be able to handle this data in a fair way. When I say fair, I also mean that we should not lose the valuable information that comes from the other modalities. If there is critical information in the text, which arrives irregularly from time to time, the algorithm should pick it up promptly. That is one of our goals: making the algorithm robust and able to handle the critical data imbalance.
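
As an illustration of the robustness issue Yasin describes, the sketch below keeps the last observation and timestamp per modality and fuses whatever is available at each step, masking out modalities that have gone stale rather than assuming every stream arrives at the same rate. The class, field names, and staleness windows are hypothetical assumptions for illustration, not the team's implementation.

```python
import time
from dataclasses import dataclass, field

@dataclass
class AsyncFuser:
    """Fuse irregular multimodal streams (e.g., video, audio, text).
    Each modality updates at its own rate; at fusion time we use the
    last observation per modality and mark stale ones as missing.
    Names and windows here are illustrative assumptions."""
    max_age: dict                                # per-modality staleness window, seconds
    latest: dict = field(default_factory=dict)   # modality -> (timestamp, features)

    def update(self, modality, features, t=None):
        self.latest[modality] = (t if t is not None else time.time(), features)

    def fuse(self, t=None):
        t = t if t is not None else time.time()
        fused, mask = {}, {}
        for m, window in self.max_age.items():
            obs = self.latest.get(m)
            fresh = obs is not None and (t - obs[0]) <= window
            mask[m] = fresh               # downstream model sees what was truly observed
            fused[m] = obs[1] if fresh else None
        return fused, mask

# Usage: text arrives rarely, but within its longer window it still counts
# as fresh, so its (possibly critical) information is not drowned out.
fuser = AsyncFuser(max_age={"video": 0.1, "audio": 0.5, "text": 10.0})
fuser.update("video", [0.2, 0.7])
fuser.update("text", "incident reported", t=time.time() - 3)
features, mask = fuser.fuse()
print(mask)   # e.g., {'video': True, 'audio': False, 'text': True}
```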


What do you think is the most challenging aspect for you and this challenge? What do you find particularly difficult or interesting?


Yasin: The interesting part is the combination of data fusion and real-time event detection from streaming data. I am happy to see that you will release such data sets, because no data set like this is available. We know there is a natural motivation for such problems, but there was no data available. That is the interesting part for us. The challenging part will be that this is the real deal, and there will be a lot of practical challenges. In theory everything might work nicely, but I think we will face great challenges in making things work, implementing the algorithms, and dealing with all the challenges coming from the data side. The real test is that I can show my algorithms work on some simulated data or some small-scale data, but here we will have large-scale, multimodal, real-world data. That will be quite challenging; it is a practical problem.

