
  • 2018 IEEE Symposium on Deep Learning (IEEE DL’18) @ SSCI 2018.

    Bengaluru, India

    Description: Deep Learning (DL) is growing in popularity because it solves complex problems in machine learning by exploiting multi-scale, multi-layer architectures that make better use of patterns in the data. Multi-scale machine perception tasks such as object and speech recognition using DL have recently outperformed systems that had been under development for many years. The principles of DL, and its ability to capture multi-scale representations, are very general, and the technology can be applied to many other problem domains, which makes it quite attractive. Many open problems and challenges still exist, e.g. interpretability, computational and time costs, repeatability of results, convergence, and the ability to learn from very small amounts of data or to evolve dynamically and continue to learn. The Symposium will provide a forum for discussing new DL advances and challenges, and for brainstorming new solutions and directions, among top scientists, researchers, professionals, practitioners and students with an interest in DL and related areas, including applications to autonomous transportation, communications, medicine, financial services, etc. Manuscripts should be submitted in PDF format; see the symposium website for further submission guidelines.
    Topics of IEEE DL’18 include but are not limited to:
    - Unsupervised, semi-, and supervised learning
    - Deep reinforcement learning (deep value function estimation, policy learning and stochastic control)
    - Memory networks and differentiable programming
    - Implementation issues (software and hardware)
    - Dimensionality expansion and sparse modeling
    - Learning representations from large-scale data
    - Multi-task learning
    - Learning from multiple modalities
    - Weakly supervised learning
    - Metric learning and kernel learning
    - Hierarchical models
    - Interpretable DL
    - Fuzzy rule-based DL
    - Non-iterative DL
    - Recursive DL
    - Repeatability of results in DL
    - Convergence in DL
    - Incremental DL
    - Evolving DL
    - Fast DL
    - Applications in: image/video, audio/speech, natural language processing, robotics, navigation, control, games, cognitive architectures, AI

    Symposium Web Site

  • Past Events

    Various Venues

    The following events are linked to the old version of the website.
    - INNS Conference on Big Data and Deep Learning 2019
    - IEEE Symposium on Deep Learning @ SSCI 2018
    - Special Session on Empowering Deep Learning Models @ WCCI (IJCNN) 2018
    - Special Session on Deep Learning for Structured and Multimedia Information @ WCCI 2018
    - Special Session on Interpretable Deep Learning Classifiers @ WCCI 2018
    - IEEE Symposium on Deep Learning @ SSCI 2017
    - Special Session on Deep Learning @ ESANN 2016
    - Special Session on Deep Learning, Medical Imaging, and Translational Medicine @ IJCNN 2016
    - Special Session on Theoretical Foundations of Deep Learning Models and Algorithms @ IJCNN 2016
    - Special Session on Deep Learning for Big Multimedia Understanding @ IJCNN 2016
    - Special Session on Deep Learning for Brain-Like Computing and Pattern Recognition @ IJCNN 2016

  • INNS CONFERENCE ON BIG DATA AND DEEP LEARNING 2019

    Sestri Levante (GE), Italy

    VENUE and ORGANIZATION
    The 2019 INNS Big Data and Deep Learning conference (INNS BDDL 2019) will be held in Sestri Levante, Italy, April 16-18, 2019. The conference is organized by the International Neural Network Society and aims to be an international meeting for researchers and other professionals in Big Data, Deep Learning and related areas. It will feature invited plenary talks by world-renowned speakers, in addition to regular and special technical sessions with oral and poster presentations. Workshops and tutorials will also be featured.

    SCOPE
    We solicit both solid contributions and preliminary results that show the potential and the limitations of new ideas, refinements, or cross-fertilizations in any aspect of Big Data and Deep Learning. Both theoretical and practical results are welcome. Example topics of interest include but are not limited to the following:
    - Big Data Science and Foundations
    - Novel Theoretical Models for Big Data
    - New Computational Models for Big Data
    - Data and Information Quality for Big Data
    - Big Data Mining
    - Social Web Mining
    - Data Acquisition, Integration, Cleaning, and Best Practices
    - Visualization Analytics for Big Data
    - Computational Modeling and Data Integration
    - Large-scale Recommendation Systems and Social Media Systems
    - Cloud/Grid/Stream Data Mining
    - Big Velocity Data
    - Link and Graph Mining
    - Semantic-based Data Mining and Data Pre-processing
    - Mobility and Big Data
    - Multimedia and Multi-structured Data / Big Variety Data
    - Modern Practical Deep Networks
    - Deep Feedforward Networks
    - Regularization for Deep Learning
    - Optimization for Training Deep Models
    - Convolutional Networks
    - Sequence Modeling: Recurrent and Recursive Nets
    - Practical Methodology
    - Deep Learning Research
    - Linear Factor Models
    - Autoencoders
    - Representation Learning
    - Structured Probabilistic Models for Deep Learning
    - Monte Carlo Methods
    - Confronting the Partition Function
    - Approximate Inference
    - Deep Generative Models

    IMPORTANT DATES
    - Deadline for submission of tutorial and workshop proposals: August 31, 2018
    - Notification of tutorial and workshop proposals: September 30, 2018
    - Deadline for full paper submission: October 31, 2018
    - Notification of paper acceptance: December 31, 2018
    - Camera-ready submission: January 31, 2019
    - Early registration deadline: January 15, 2019
    - Registration deadline: January 31, 2019
    - Conference dates: April 16-18, 2019

    PROCEEDINGS and SPECIAL ISSUE
    Works submitted as regular papers will be published in a series indexed by Scopus. Submitted papers will be reviewed by PC members based on technical quality, relevance, originality, significance and clarity. At least one author of each accepted submission should register and present the work at the conference. Selected papers presented at INNS BDDL 2019 will be included in special issues of top journals in the field (prospective journals: Big Data Research, IEEE Transactions on Neural Networks and Learning Systems, Neurocomputing, etc.).

  • Special Session: Learning Representations for Structured Data @ IJCNN 2019

    InterContinental Budapest Hotel, Budapest, Hungary

    Learning Representations for Structured Data is a special session at the 2019 International Joint Conference on Neural Networks (IJCNN), held at the InterContinental Budapest Hotel in Budapest, Hungary, on July 14-19, 2019.

    Call for papers
    Structured data, e.g. sequences, trees and graphs, are a natural representation for compound information made of atomic information pieces (i.e. the nodes and their labels) and their intertwined relationships, represented by the edges (and their labels). Sequences are simple structures representing linear dependencies, such as in genomics and proteomics or with time series data. Trees, on the other hand, make it possible to model hierarchical contexts and relationships, such as natural language sentences, crystallographic structures, and images. Graphs are the most general and complex form of structured data: they can represent networks of interacting elements, e.g. in social graphs or metabolomics, as well as data where topological variations influence the feature of interest, e.g. molecular compounds. Being able to process data in these rich structured forms provides a fundamental advantage when it comes to identifying data patterns suitable for predictive and/or explorative analyses. This has motivated a recent increasing interest of the machine learning community in the development of learning models for structured information. On the other hand, recent improvements in the predictive performance of machine learning methods are due to their ability, in contrast to traditional approaches, to learn a “good” representation for the task under consideration. Deep Learning techniques are nowadays widespread, since they allow such representation learning to be performed in an end-to-end fashion. Nonetheless, representation learning is becoming of great importance in other areas as well, such as kernel-based and probabilistic models. It has also been shown that, when the data available for the task at hand is limited, it is still beneficial to resort to representations learned in an unsupervised fashion, or on different, but related, tasks. This session focuses on learning representations for structured data such as sequences, trees, graphs, and relational data.

    Topics that are of interest to this session include, but are not limited to:
    - Probabilistic models for structured data
    - Structured output generation (probabilistic models, variational autoencoders, adversarial training, …)
    - Deep learning and representation learning for structures
    - Learning with network data
    - Recurrent, recursive and contextual models
    - Reservoir computing and randomized neural networks for structures
    - Kernels for structured data
    - Relational deep learning
    - Learning implicit representations
    - Applications of adaptive structured data processing: e.g. natural language processing, machine vision (e.g. point clouds as graphs), materials science, chemoinformatics, computational biology, social networks

    Important Dates
    - Paper submissions: December 15, 2018
    - Paper acceptance notifications: January 30, 2019

    Session Organisers
    - Davide Bacciu, University of Pisa
    - Thomas Gärtner, University of Nottingham
    - Nicolò Navarin, University of Padova
    - Alessandro Sperduti, University of Padova

    Special Session website.
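    Not part of the call itself, but to make the idea concrete: a minimal NumPy sketch (an illustration assumed here, not a method from the session) of the basic operation behind many representation-learning models for graphs, where each node builds its representation by aggregating its own features with those of its neighbours.

    ```python
    import numpy as np

    # Toy undirected graph with 4 nodes (e.g. a small molecular fragment).
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 1],
                  [0, 1, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # adjacency matrix (edges)
    X = np.eye(4)                               # one-hot node labels as initial features

    # One neighbourhood-aggregation step: add self-loops, then let every
    # node average the features of itself and its neighbours.
    A_hat = A + np.eye(4)
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))    # per-node degree normalisation
    H = D_inv @ A_hat @ X                       # new node representations

    print(H.shape)                              # one representation per node: (4, 4)
    ```

    Stacking several such steps (with learned weight matrices and nonlinearities in between) lets information propagate along edges, so a node's representation reflects its structural context, which is what the session's graph-learning topics build on.
    
    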

  • IEEE Symposium on Deep Learning (IEEE DL’19) @ SSCI 2019.

    Seaview Resort, Xiamen, China

    DL is a symposium in the IEEE Symposium Series on Computational Intelligence, December 6-9, 2019, Xiamen, China.

    Description
    Deep Learning (DL) is growing in popularity because it solves complex problems in machine learning by exploiting multi-scale, multi-layer architectures that make better use of patterns in the data. Multi-scale machine perception tasks such as object and speech recognition using DL have recently outperformed systems that had been under development for many years. The principles of DL, and its ability to capture multi-scale representations, are very general, and the technology can be applied to many other problem domains, which makes it quite attractive. Many open problems and challenges still exist, e.g. interpretability, computational and time costs, repeatability of results, convergence, and the ability to learn from very small amounts of data or to evolve dynamically and continue to learn. The Symposium will provide a forum for discussing new DL advances and challenges, and for brainstorming new solutions and directions, among top scientists, researchers, professionals, practitioners and students with an interest in DL and related areas, including applications to autonomous transportation, communications, medicine, financial services, etc.
    Topics
    Topics of IEEE DL’19 include but are not limited to:
    - Unsupervised, semi-, and supervised learning
    - Deep reinforcement learning (deep value function estimation, policy learning and stochastic control)
    - Memory networks and differentiable programming
    - Implementation issues (software and hardware)
    - Dimensionality expansion and sparse modeling
    - Learning representations from large-scale data
    - Multi-task learning
    - Learning from multiple modalities
    - Weakly supervised learning
    - Metric learning and kernel learning
    - Hierarchical models
    - Interpretable DL
    - Fuzzy rule-based DL
    - Non-iterative DL
    - Recursive DL
    - Repeatability of results in DL
    - Convergence in DL
    - Incremental DL
    - Evolving DL
    - Fast DL
    - Applications in: image/video, audio/speech, natural language processing, robotics, navigation, control, games, cognitive architectures, AI

    Symposium website

  • Special Session: Tensor Decompositions in Deep Learning @ ESANN 2020

    Bruges, Belgium

    DESCRIPTION:
    In recent years, tensor decompositions have been gaining increasing interest in the Big Data and Machine Learning community. On the one hand, multiway data analysis provides powerful and efficient methods to address the processing of large-scale and highly complex data, such as multivariate sensor signals. On the other hand, tensor decompositions have been shown to provide the necessary theoretical backbone to study the expressiveness properties of deep neural networks. More recently, tensors have started to find wide application in a variety of machine learning paradigms, ranging from neural networks to probabilistic models, to enable the efficient compression of model parameters by leveraging a wide range of decomposition methods from multilinear algebra. This special session aims to present the state of the art on this increasingly relevant topic to ML theoreticians and practitioners. To this end, we welcome both solid contributions and preliminary relevant results showing the potential, limitations and challenges of new ideas related to the use of tensor decompositions in deep learning, neural networks, and machine learning at large. Studies stemming from major research initiatives and projects focusing on the session topics are particularly welcome.
    TOPICS OF INTEREST (non-exhaustive):
    - Tensor decompositions and multiway data analytics
    - Tensor decompositions for signal processing
    - Tensor neural networks
    - Tensor-based representation and manipulation of neural weights
    - Decomposing neural architectures through tensor manipulation
    - Theoretical analysis of neural networks through multiway algebra
    - Use of tensor decompositions in learning systems
    - Speeding up neural computations through tensor decompositions
    - Tensor-based approaches for structured data processing (graphs, trees, sequences)
    - Tensors and dynamic memory (recurrent neural networks)
    - Applications of tensor neural networks to image and video processing
    - Applications of tensor neural networks to sensor and stream data analysis

    SUBMISSION:
    Prospective authors must submit their paper through the ESANN portal following the instructions provided at www.esann.org. Each paper will undergo peer review for acceptance. Authors should send an e-mail with the tentative title of their contribution to the special session organisers as soon as possible.

    IMPORTANT DATES:
    - Submission of papers: 18 November 2019
    - Notification of acceptance: 31 January 2020
    - ESANN conference: 22-24 April 2020

    SPECIAL SESSION ORGANISERS:
    - Davide Bacciu, University of Pisa (Italy)
    - Danilo Mandic, Imperial College (UK)
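    As a concrete illustration of the parameter-compression idea the description mentions (an assumed toy example, not a technique prescribed by the session): a dense weight matrix can be replaced by a truncated low-rank factorisation, the matrix special case of the tensor decompositions the session covers. The rank `r = 32` below is an arbitrary choice for illustration.

    ```python
    import numpy as np

    # A dense "weight matrix" of a hypothetical layer; constructed as a
    # product so its true rank is at most 128.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((256, 128)) @ rng.standard_normal((128, 512)) / 128

    # Truncated SVD: keep only the r largest singular components.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    r = 32                                            # target rank (assumption)
    W_low = (U[:, :r] * s[:r]) @ Vt[:r, :]            # rank-r approximation of W

    # Storage comparison: dense matrix vs. factorised form.
    full_params = W.size                              # 256 * 512 = 131072
    low_params = U[:, :r].size + r + Vt[:r, :].size   # 8192 + 32 + 16384 = 24608
    print(full_params, low_params)
    ```

    The same trade-off (fewer parameters and faster multiplies, at some approximation error) is what higher-order decompositions such as CP, Tucker or tensor-train extend to the multiway weight tensors of convolutional and recurrent layers.
    
    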

  • Special Session: Learning Representations for Structured Data @ IJCNN (WCCI) 2020

    Glasgow, United Kingdom

    Learning Representations for Structured Data is a special session at the 2020 International Joint Conference on Neural Networks, part of the World Congress on Computational Intelligence, to be held in Glasgow (UK) on July 19-24, 2020.

    Call for papers
    Structured data, e.g. sequences, trees and graphs, are a natural representation for compound information made of atomic information pieces (i.e. the nodes and their labels) and their relationships, represented by the edges (and their labels). Graphs are one of the most general and complex forms of structured data: they can represent networks of interacting elements, e.g. in social graphs or metabolomics, as well as data where topological variations influence the feature of interest, e.g. molecular compounds. Being able to process data in these rich structured forms provides a fundamental advantage when it comes to identifying data patterns suitable for predictive and/or explorative analyses. This has motivated a recent increasing interest of the machine learning community in the development of learning models for structured information. Thanks to the growing availability of computational resources and data, modern machine learning methods promote flexible representations that can be learned end-to-end from data. For instance, recent deep learning approaches to representation learning on structured data complement the flexibility of data-driven approaches with biases about structure in the data, coming from prior knowledge about the problem at hand. Nonetheless, representation learning is becoming of great importance in other areas as well, such as kernel-based and probabilistic models. It has also been shown that, when the data available for the task at hand is limited, it is still beneficial to resort to representations learned in an unsupervised fashion, or on different, but related, tasks. This session focuses on learning representations for structured data such as sequences, trees, graphs, and relational data.
    Topics that are of interest to this session include, but are not limited to:
    - Probabilistic models for structured data
    - Structured output generation (probabilistic models, variational autoencoders, adversarial training, …)
    - Deep learning and representation learning for structures
    - Learning with network data
    - Recurrent, recursive and contextual models
    - Reservoir computing and randomized neural networks for structures
    - Kernels for structured data
    - Relational deep learning
    - Learning implicit representations
    - Applications of adaptive structured data processing: e.g. natural language processing, machine vision (e.g. point clouds as graphs), materials science, chemoinformatics, computational biology, social networks

    Important Dates
    - Paper submissions: January 15, 2020
    - Paper acceptance notifications: March 30, 2020
    - Conference: July 19-24, 2020

    Session Organisers
    - Davide Bacciu, University of Pisa
    - Nicolò Navarin, University of Padova
    - Filippo Maria Bianchi, Norwegian Research Centre
    - Thomas Gärtner, TU Wien
    - Alessandro Sperduti, University of Padova

    For any enquiries, please write to: bacciu di.unipi.it or nnavarin math.unipd.it

  • IEEE Symposium on Deep Learning (IEEE DL’20) @ SSCI 2020.

    Canberra, Australia

    DL is a symposium in the IEEE Symposium Series on Computational Intelligence, December 1-4, 2020, Canberra, Australia.

    Description
    Deep Learning (DL) is growing in popularity because it solves complex problems in machine learning by exploiting multi-scale, multi-layer architectures that make better use of patterns in the data. Multi-scale machine perception tasks such as object and speech recognition using DL have recently outperformed systems that had been under development for many years. The principles of DL, and its ability to capture multi-scale representations, are very general, and the technology can be applied to many other problem domains, which makes it quite attractive. Many open problems and challenges still exist, e.g. interpretability, computational and time costs, repeatability of results, convergence, and the ability to learn from very small amounts of data or to evolve dynamically and continue to learn. The Symposium will provide a forum for discussing new DL advances and challenges, and for brainstorming new solutions and directions, among top scientists, researchers, professionals, practitioners and students with an interest in DL and related areas, including applications to autonomous transportation, communications, medicine, financial services, etc.
    Topics
    Topics of IEEE DL’20 include but are not limited to:
    - Unsupervised, semi-, and supervised learning
    - Deep reinforcement learning (deep value function estimation, policy learning and stochastic control)
    - Memory networks and differentiable programming
    - Implementation issues (software and hardware)
    - Dimensionality expansion and sparse modeling
    - Learning representations from large-scale data
    - Multi-task learning
    - Learning from multiple modalities
    - Weakly supervised learning
    - Metric learning and kernel learning
    - Hierarchical models
    - Interpretable DL
    - Fuzzy rule-based DL
    - Non-iterative DL
    - Recursive DL
    - Repeatability of results in DL
    - Convergence in DL
    - Incremental DL
    - Evolving DL
    - Fast DL
    - Applications in: image/video, audio/speech, natural language processing, robotics, navigation, control, games, cognitive architectures, AI

  • Special Session on “Complex Data: Learning Trustworthily, Automatically, and with Guarantees” @ ESANN 2021

    Bruges, Belgium

    Organized by Luca Oneto (University of Genoa, Italy), Nicolò Navarin (University of Padua, Italy), Battista Biggio (University of Cagliari, Italy), Federico Errica (University of Pisa, Italy), Alessio Micheli (University of Pisa, Italy), Franco Scarselli (SAILAB - University of Siena, Italy), Monica Bianchini (SAILAB - University of Siena, Italy), Alessandro Sperduti (University of Padua, Italy).

    Machine Learning (ML) achievements have enabled the automatic extraction of actionable information from data in a wide range of decision-making scenarios (e.g. health care, cybersecurity, and education). ML models are nowadays ubiquitous, pushing even further the digitalization and datafication of the real and digital world and producing more and more complex and interrelated data. This demands improvements in both the technical aspects of ML (e.g. design and automation) and human-related metrics (e.g. fairness, robustness, privacy, and explainability), with performance guarantees at both levels. This scenario poses three main challenges: (i) Learning from Complex Data (i.e. sequence, tree and graph data), (ii) Learning Trustworthily, and (iii) Learning Automatically with Guarantees. The scope of this special session is to address one or more of these challenges, with the final goal of Learning Trustworthily, Automatically, and with Guarantees from Complex Data. Examples of methods and problems in these challenges are:
    - efficient and effective models capable of directly learning from data natively structured or collected from interrelated heterogeneous sources (e.g. social and relational data, knowledge graphs), characterized by entities, attributes, and relationships, without relying on human skills to encode this complexity into a rich and expressive (vectorial) representation;
    - designing ML models from a human-centered perspective, making ML trustworthy by design, by removing human biases from the data (e.g. gender discrimination), increasing robustness (e.g. to adversarial data perturbations), preserving individuals’ privacy (e.g. protecting ML models from differential attacks), and increasing transparency (e.g. via explanation of ML models and their outputs);
    - automating the parts of ML design and deployment that are currently handcrafted by highly skilled and trained specialists; for this, ML needs to be empowered with self-tuning properties (e.g. automatic selection of architectures and hyperparameters), understanding and guaranteeing the final performance (e.g. with worst-case and statistical bounds) with respect to both technical and human-relevant metrics.

    This special session aims to attract both solid contributions and preliminary results that show the potential and the limitations of new ideas, refinements, or cross-fertilizations between different fields of machine learning and other fields of research in solving real-world problems. Both theoretical and practical results are welcome.

  • IEEE Symposium on Deep Learning (IEEE DL’21) @ SSCI 2021.

    Orlando, FL, United States

    DL is a symposium in the IEEE Symposium Series on Computational Intelligence, December 4-7, 2021, Orlando, Florida, USA.

    Description
    Deep Learning (DL) is growing in popularity because it solves complex problems in machine learning by exploiting multi-scale, multi-layer architectures that make better use of patterns in the data. Multi-scale machine perception tasks such as object and speech recognition using DL have recently outperformed systems that had been under development for many years. The principles of DL, and its ability to capture multi-scale representations, are very general, and the technology can be applied to many other problem domains, which makes it quite attractive. Many open problems and challenges still exist, e.g. interpretability, computational and time costs, repeatability of results, convergence, and the ability to learn from very small amounts of data or to evolve dynamically and continue to learn. The Symposium will provide a forum for discussing new DL advances and challenges, and for brainstorming new solutions and directions, among top scientists, researchers, professionals, practitioners and students with an interest in DL and related areas, including applications to autonomous transportation, communications, medicine, financial services, etc.
    Topics
    Topics of IEEE DL’21 include but are not limited to:
    - Unsupervised, semi-, and supervised learning
    - Deep reinforcement learning (deep value function estimation, policy learning and stochastic control)
    - Memory networks and differentiable programming
    - Implementation issues (software and hardware)
    - Dimensionality expansion and sparse modeling
    - Learning representations from large-scale data
    - Multi-task learning
    - Learning from multiple modalities
    - Weakly supervised learning
    - Metric learning and kernel learning
    - Hierarchical models
    - Interpretable DL
    - Fuzzy rule-based DL
    - Non-iterative DL
    - Recursive DL
    - Repeatability of results in DL
    - Convergence in DL
    - Incremental DL
    - Evolving DL
    - Fast DL
    - Applications in: image/video, audio/speech, natural language processing, robotics, navigation, control, games, cognitive architectures, AI