
Adversarial Machine Learning: An Introduction and Tutorial

Published: 2019-12-20 | Author: School of Artificial Intelligence | Editor: Sang Yuqi

Title: Adversarial Machine Learning: An Introduction and Tutorial

Speaker: Dr. Xingjun Ma (马兴军)

Time: 1:30 p.m., December 24, 2019

Venue: Room 601, Administration Building, Central Campus (School of Artificial Intelligence)

Abstract: Deep learning has become increasingly popular in the past few years, largely owing to a family of powerful models called deep neural networks (DNNs). With many stacked layers and millions of neurons, DNNs can learn complex non-linear mappings and have demonstrated near- or even above-human-level performance in a wide range of applications such as image classification, object detection, natural language processing, speech recognition, self-driving cars, game playing, and medical diagnosis. Despite this success, DNNs have recently been found vulnerable to adversarial examples (or attacks): input instances slightly modified in a way intended to fool the model. This surprising weakness raises security and reliability concerns about deploying deep learning systems in safety-critical scenarios such as face recognition, autonomous driving, and medical diagnosis. Since its first discovery, the adversarial phenomenon has attracted a huge volume of work on both attacking DNNs and defending them against such attacks. In this tutorial, we will introduce the adversarial phenomenon, explanations that have been proposed for it, and techniques developed for both attack and defense.
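To make the idea of an adversarial example concrete, here is a minimal sketch of one classic attack, the fast gradient sign method (FGSM): the input is nudged by a small amount `eps` in the direction that increases the model's loss. The "model" below is a toy two-feature logistic regression with hand-picked weights, not anything from the talk; it merely illustrates how a small, targeted perturbation can flip a prediction.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def fgsm(x, y, w, b, eps):
    """Fast gradient sign method against a logistic-regression 'model':
    x_adv = x + eps * sign(dL/dx), where L is the cross-entropy loss."""
    p = sigmoid(dot(w, x) + b)                # predicted probability of class 1
    grad_x = [(p - y) * wi for wi in w]       # dL/dx for cross-entropy loss
    return [xi + eps * (1.0 if g > 0 else -1.0) for xi, g in zip(x, grad_x)]

# Toy weights and input (illustrative values, not a trained network).
w, b = [2.0, -1.0], 0.0
x, y = [0.5, 0.2], 1.0                        # clean input, true label 1

x_adv = fgsm(x, y, w, b, eps=0.5)
print(sigmoid(dot(w, x) + b) > 0.5)           # clean input: True (classified 1, correct)
print(sigmoid(dot(w, x_adv) + b) > 0.5)       # perturbed input: False (prediction flipped)
```

In practice the same recipe is applied to a DNN by backpropagating the loss gradient to the input pixels, with `eps` kept small enough that the change is imperceptible to humans.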

Speaker bio: Dr. Xingjun Ma is a distinguished alumnus of Jilin University. He received his bachelor's degree from the College of Software, Jilin University, in 2010, his master's degree from the School of Software, Tsinghua University, in 2015, and his Ph.D. in computer science from the University of Melbourne, Australia, in 2019, where he has been an assistant lecturer since 2019. His research covers machine learning and deep learning, with a focus on security issues in deep learning: adversarial machine learning. He has published more than ten papers at top conferences including ICML, ICLR, CVPR, ICCV, AAAI, and IJCAI, several of which were selected for oral presentation (e.g., at ICLR 2018 and ICML 2018/2019). Homepage: http://xingjunma.com/.

Host: School of Artificial Intelligence, Jilin University

