
    Dr. Chou Hung


    UNITED STATES

    11.18.2024

    Video by Kevin D Schmidt 

    Air Force Research Laboratory

    Description: In this edition of QuEST, Dr. Chou P. Hung leads the 20 October discussion, "From Human to Neuromorphic HDR Recognition."

    Key Moments and Questions in the video include:

    Army Research Office fundamental research
    Neurophysiology of cognition research: Humans in complex systems
    Non-medically oriented research to enable discovery
    Basic research opportunities:
    Evolutionary and Revolutionary Interactions
    Neural Computation
    Understand how biology works to design better human/machine teams
    AI and Machine Learning Roadmap
    Autonomy for the Next Generation Combat Vehicle
    Evolutionary and Revolutionary Interactions (With real and mixed worlds)
    Unstructured and structured real-world environments
    Abstract cognitive phenomena such as anticipatory sensing, automatic learning, complex decision-making, and rapid adaptive action
    Improve cognitive performance to avoid cognitive failures
    AI and gaming–decision and interaction complexity
    Neural Computation, information coding, and translation
    Learn and adapt from few examples
    Multiscale information processing dynamics mediating computations
    How do we get artificial systems to talk to neural networks?
    Neuromorphic processing for autonomous flight
    Autonomous flight under HDR luminance
    Neuromorphic pre-processing (a rough illustrative sketch follows this list)
    Loihi 2
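
    The listing itself contains no code, but as a rough illustration of what neuromorphic pre-processing of HDR imagery can involve, the sketch below turns log-luminance changes between frames into sparse ON/OFF events, the style of input a spiking chip such as Loihi 2 typically consumes. The function name, threshold value, and log-luminance model are assumptions made for illustration only, not the speaker's actual pipeline.

import numpy as np

def event_style_hdr_preprocess(prev_frame, curr_frame, threshold=0.15):
    """Emit +1/-1 'events' where log-luminance changed by more than a
    threshold, loosely mimicking how event cameras and retinal front ends
    compress high-dynamic-range scenes before recognition."""
    # Work in the log domain so bright and dark regions of an HDR scene
    # produce comparable contrast signals.
    log_prev = np.log1p(prev_frame.astype(np.float64))
    log_curr = np.log1p(curr_frame.astype(np.float64))
    diff = log_curr - log_prev

    events = np.zeros_like(diff, dtype=np.int8)
    events[diff > threshold] = 1    # ON events (brightening)
    events[diff < -threshold] = -1  # OFF events (darkening)
    return events

# Example with synthetic 16-bit-style HDR frames (values are arbitrary).
prev = np.random.randint(0, 65535, size=(64, 64)).astype(np.float64)
curr = prev * np.random.uniform(0.8, 1.2, size=(64, 64))
events = event_style_hdr_preprocess(prev, curr)
print("ON events:", int((events == 1).sum()), "OFF events:", int((events == -1).sum()))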



    Audience questions:
    Do you bring to bear on this broad platform an ability to really look at different ways the AI system would work? Say there are three to five different types of reasoning, and each one is a valuable path to explore and investigate; that would be great flexibility. What does this platform bring to bear?
    I was wondering about the size of the patches, and maybe the distance between them, and whether you saw any effect of those kinds of variations on your results on the previous slide.
    Did you test these scenes with humans?
    When you say you're comparing and you got significance, are you comparing the non-pre-processed to the pre-processed?
    Earlier on you were talking about measuring fatigue in the soldier to feed that to the AI. Is that something that you worked on, or is that just something that will need to happen eventually?
    Your original premise is that human vision does this sort of preprocessing, is that true?
    If you replicate this using the Intel chip, would those images be easier for humans to consume?
    Did you ever attempt to do an end-to-end tuning of the preprocessing and localization?
    You said that training was done with the pre-processing for the bespoke detector model, but you actually tested on the same model; you just showed it the unprocessed images using the same model?
    Did you do any augmentation?

    VIDEO INFO

    Date Taken: 11.18.2024
    Date Posted: 11.26.2024 14:27
    Category: Video Productions
    Video ID: 945040
    VIRIN: 231020-F-BA826-3736
    Filename: DOD_110705689
    Length: 00:59:28
    Location: US


    PUBLIC DOMAIN