BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//DeepLearning - ECPv6.15.11//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://web.math.unipd.it/deeplearning
X-WR-CALDESC:Events for DeepLearning
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20150101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;VALUE=DATE:20211204
DTEND;VALUE=DATE:20211208
DTSTAMP:20260417T190535
CREATED:20210222T132223Z
LAST-MODIFIED:20210222T132828Z
UID:213-1638576000-1638921599@web.math.unipd.it
SUMMARY:IEEE Symposium on Deep Learning (IEEE DL’21) @ SSCI 2021.
DESCRIPTION:DL is a symposium in the IEEE Symposium Series on Computational Intelligence\, December 4-7\, 2021\, Orlando\, Florida\, USA. \n \nDescription\nDeep Learning (DL) is growing in popularity because it solves complex problems in machine learning by exploiting multi-scale\, multi-layer architectures that make better use of patterns in the data. Multi-scale machine perception tasks such as object and speech recognition using DL have recently outperformed systems that had been under development for many years. The principles of DL\, and its ability to capture multi-scale representations\, are very general\, and the technology can be applied to many other problem domains\, which makes it quite attractive. Many open problems and challenges still exist\, e.g. interpretability\, computational and time costs\, repeatability of results\, convergence\, and the ability to learn from very small amounts of data and to evolve dynamically/continue to learn. The Symposium will provide a forum for discussing new DL advances and challenges and for brainstorming new solutions and directions among top scientists\, researchers\, professionals\, practitioners and students with an interest in DL and related areas\, including applications to autonomous transportation\, communications\, medicine\, financial services\, etc. 
\nTopics\nTopics of IEEE DL’21 include but are not limited to: \n\nUnsupervised\, semi-\, and supervised learning\nDeep reinforcement learning (deep value function estimation\, policy learning and stochastic control)\nMemory Networks and differentiable programming\nImplementation issues (software and hardware)\nDimensionality expansion and sparse modeling\nLearning representations from large-scale data\nMulti-task learning\nLearning from multiple modalities\nWeakly supervised learning\nMetric learning and kernel learning\nHierarchical models\nInterpretable DL\nFuzzy rule-based DL\nNon-Iterative DL\nRecursive DL\nRepeatability of results in DL\nConvergence in DL\nIncremental DL\nEvolving DL\nFast DL\nApplications in:\n\nImage/video\nAudio/speech\nNatural language processing\nRobotics\, navigation\, control\nGames\nCognitive architectures\nAI
URL:https://web.math.unipd.it/deeplearning/event/ieee-symposium-on-deep-learning-ieee-dl21-ssci-2021/
LOCATION:Orlando\, Orlando\, FL\, United States
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20211006
DTEND;VALUE=DATE:20211009
DTSTAMP:20260417T190535
CREATED:20210222T132638Z
LAST-MODIFIED:20210222T132733Z
UID:216-1633478400-1633737599@web.math.unipd.it
SUMMARY:Special Session on "Complex Data: Learning Trustworthily\, Automatically\, and with Guarantees" @ ESANN 2021
DESCRIPTION:Organized by Luca Oneto (University of Genoa\, Italy)\, Nicolò Navarin (University of Padua\, Italy)\, Battista Biggio (University of Cagliari\, Italy)\, Federico Errica (University of Pisa\, Italy)\, Alessio Micheli (University of Pisa\, Italy)\, Franco Scarselli (SAILAB – University of Siena\, Italy)\, Monica Bianchini (SAILAB – University of Siena\, Italy)\, Alessandro Sperduti (University of Padua\, Italy)\nMachine Learning (ML) achievements have enabled the automatic extraction of actionable information from data in a wide range of decision-making scenarios (e.g. health care\, cybersecurity\, and education). ML models are nowadays ubiquitous\, pushing even further the process of digitalization and datafication of the real and digital world and producing more and more complex and interrelated data. This demands improvements in both ML technical aspects (e.g. design and automation) and human-related metrics (e.g. fairness\, robustness\, privacy\, and explainability)\, with performance guarantees at both levels. \n \nThe aforementioned scenario poses three main challenges: (i) Learning from Complex Data (i.e. sequence\, tree and graph data)\, (ii) Learning Trustworthily\, and (iii) Learning Automatically with Guarantees. The scope of this special session is then to address one or more of these challenges with the final goal of Learning Trustworthily\, Automatically\, and with Guarantees from Complex Data. \nExamples of methods and problems in these challenges are: \n\nefficient and effective models capable of directly learning from data natively structured or collected from interrelated heterogeneous sources (e.g. 
social and relational data\, knowledge graphs)\, characterized by entities\, attributes\, and relationships\, without relying on human skills to encode this complexity into a rich and expressive (vectorial) representation;\ndesigning ML models from a human-centered perspective\, making ML trustworthy by design\, by removing human biases from the data (e.g. gender discrimination)\, increasing robustness (e.g. to adversarial data perturbation)\, preserving individuals’ privacy (e.g. protecting ML models from differential attacks)\, and increasing transparency (e.g. via ML model and output explanation);\nautomating the ML design and deployment steps which are currently handcrafted by highly skilled and trained specialists. For this reason\, ML is required to be empowered with self-tuning properties (e.g. automatic architecture and hyperparameter selection)\, understanding and guaranteeing the final performance (e.g. with worst-case and statistical bounds) with respect to both technical and human-relevant metrics.\n\nThe focus of this special session is to attract both solid contributions and preliminary results that show the potential and the limitations of new ideas\, refinements\, or cross-fertilizations between the different fields of machine learning and other fields of research in solving real-world problems. Both theoretical and practical results are welcome in our special session.
URL:https://web.math.unipd.it/deeplearning/event/special-session-on-complex-data-learning-trustworthily-automatically-and-with-guarantees-esann-2021/
LOCATION:Bruges\, Bruges\, Belgium
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20201201
DTEND;VALUE=DATE:20201205
DTSTAMP:20260417T190535
CREATED:20191203T170213Z
LAST-MODIFIED:20200227T174101Z
UID:173-1606780800-1607126399@web.math.unipd.it
SUMMARY:IEEE Symposium on Deep Learning (IEEE DL’20) @ SSCI 2020.
DESCRIPTION:DL is a symposium in the IEEE Symposium Series on Computational Intelligence\, December 1-4\, 2020\, Canberra\, Australia. \n \nDescription\nDeep Learning (DL) is growing in popularity because it solves complex problems in machine learning by exploiting multi-scale\, multi-layer architectures that make better use of patterns in the data. Multi-scale machine perception tasks such as object and speech recognition using DL have recently outperformed systems that had been under development for many years. The principles of DL\, and its ability to capture multi-scale representations\, are very general\, and the technology can be applied to many other problem domains\, which makes it quite attractive. Many open problems and challenges still exist\, e.g. interpretability\, computational and time costs\, repeatability of results\, convergence\, and the ability to learn from very small amounts of data and to evolve dynamically/continue to learn. The Symposium will provide a forum for discussing new DL advances and challenges and for brainstorming new solutions and directions among top scientists\, researchers\, professionals\, practitioners and students with an interest in DL and related areas\, including applications to autonomous transportation\, communications\, medicine\, financial services\, etc. 
\nTopics\nTopics of IEEE DL’20 include but are not limited to: \n\nUnsupervised\, semi-\, and supervised learning\nDeep reinforcement learning (deep value function estimation\, policy learning and stochastic control)\nMemory Networks and differentiable programming\nImplementation issues (software and hardware)\nDimensionality expansion and sparse modeling\nLearning representations from large-scale data\nMulti-task learning\nLearning from multiple modalities\nWeakly supervised learning\nMetric learning and kernel learning\nHierarchical models\nInterpretable DL\nFuzzy rule-based DL\nNon-Iterative DL\nRecursive DL\nRepeatability of results in DL\nConvergence in DL\nIncremental DL\nEvolving DL\nFast DL\nApplications in:\n\nImage/video\nAudio/speech\nNatural language processing\nRobotics\, navigation\, control\nGames\nCognitive architectures\nAI
URL:https://web.math.unipd.it/deeplearning/event/ieee-symposium-on-deep-learning-ieee-dl19-ssci-2020/
LOCATION:Canberra\, Australia\, Canberra\, Australia
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20200719
DTEND;VALUE=DATE:20200725
DTSTAMP:20260417T190535
CREATED:20191203T171001Z
LAST-MODIFIED:20191203T171001Z
UID:177-1595116800-1595635199@web.math.unipd.it
SUMMARY:Special Session: Learning Representations for Structured Data @ IJCNN (WCCI) 2020
DESCRIPTION:Learning Representations for Structured Data is a special session at the 2020 International Joint Conference on Neural Networks\, part of the World Congress on Computational Intelligence\, which will be held in Glasgow (UK) on July 19-24\, 2020. \n \nCall for papers\nStructured data\, e.g. sequences\, trees and graphs\, are a natural representation for compound information made of atomic information pieces (i.e. the nodes and their labels) and their relationships\, represented by the edges (and their labels). Graphs are one of the most general and complex forms of structured data\, allowing the representation of networks of interacting elements\, e.g. in social graphs or metabolomics\, as well as data where topological variations influence the features of interest\, e.g. molecular compounds. Being able to process data in these rich structured forms provides a fundamental advantage when it comes to identifying data patterns suitable for predictive and/or explorative analyses. This has motivated the machine learning community’s recent and increasing interest in the development of learning models for structured information. \nThanks to the growing availability of computational resources and data\, modern machine learning methods promote flexible representations that can be learned end-to-end from data. For instance\, recent deep learning approaches for learning representations of structured data complement the flexibility of data-driven approaches with biases about structures in data\, coming from prior knowledge about the problem at hand. Nonetheless\, representation learning is becoming of great importance in other areas\, such as kernel-based and probabilistic models. It has also been shown that\, when the data available for the task at hand is limited\, it is still beneficial to resort to representations learned in an unsupervised fashion\, or on different\, but related\, tasks. 
\nThis session focuses on learning representations for structured data such as sequences\, trees\, graphs\, and relational data. Topics that are of interest to this session include\, but are not limited to: \n\nProbabilistic models for structured data\nStructured output generation (probabilistic models\, variational autoencoders\, adversarial training\, …)\nDeep learning and representation learning for structures\nLearning with network data\nRecurrent\, recursive and contextual models\nReservoir computing and randomized neural networks for structures\nKernels for structured data\nRelational deep learning\nLearning implicit representations\nApplications of adaptive structured data processing: e.g. Natural Language Processing\, machine vision (e.g. point clouds as graphs)\, materials science\, chemoinformatics\, computational biology\, social networks.\n\nImportant Dates\nPaper Submissions: January 15\, 2020 \nPaper Acceptance Notifications: March 30\, 2020 \nConference: July 19-24\, 2020 \nSession Organisers\nDavide Bacciu\, University of Pisa \nNicolò Navarin\, University of Padova \nFilippo Maria Bianchi\, Norwegian Research Centre \nThomas Gärtner\, TU Wien \nAlessandro Sperduti\, University of Padova \nFor any enquiry\, please write to: bacciu [at] di.unipi.it or nnavarin [at] math.unipd.it \n 
URL:https://web.math.unipd.it/deeplearning/event/special-session-learning-representations-for-structured-data-ijcnn-wcci-2020/
LOCATION:Glasgow\, Glasgow\, United Kingdom
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20200422T080000
DTEND;TZID=UTC:20200424T170000
DTSTAMP:20260417T190535
CREATED:20190905T130942Z
LAST-MODIFIED:20190905T132356Z
UID:152-1587542400-1587747600@web.math.unipd.it
SUMMARY:Special Session: Tensor Decompositions in Deep Learning @ ESANN 2020
DESCRIPTION:In recent years\, tensor decompositions have been gaining increasing interest in the Big Data and Machine Learning communities. On the one hand\, multiway data analysis provides powerful and efficient methods to address the processing of large-scale and highly complex data\, such as multivariate sensor signals. \n \nOn the other hand\, tensor decompositions have been shown to provide the necessary theoretical backbone for studying the expressiveness properties of deep neural networks. More recently\, tensors have started to find wide application in a variety of machine learning paradigms\, ranging from neural networks to probabilistic models\, to enable the efficient compression of model parameters by leveraging a wide range of decomposition methods from multilinear algebra. This special session aims to present the state-of-the-art on this increasingly relevant topic among ML theoreticians and practitioners. To this end\, we welcome both solid contributions and preliminary relevant results showing the potential\, limitations and challenges of new ideas related to the use of tensor decompositions in deep learning\, neural networks\, and machine learning at large. Studies stemming from major research initiatives and projects focusing on the session topics are particularly welcome. 
\nTOPICS OF INTEREST (non-exhaustive): \n\nTensor decompositions and multiway data analytics\nTensor decompositions for signal processing\nTensor neural networks\nTensor-based representation and manipulation of neural weights\nDecomposing neural architectures through tensor manipulation\nTheoretical analysis of neural networks through multiway algebra\nUse of tensor decompositions in learning systems\nSpeeding up neural computations through tensor decompositions\nTensor-based approaches for structured data processing (graphs\, trees\, sequences)\nTensors and dynamic memory (recurrent neural networks)\nApplications of tensor neural networks to image and video processing\nApplications of tensor neural networks to sensor and stream data analysis\n\nSUBMISSION: Prospective authors must submit their papers through the ESANN portal following the instructions provided at www.esann.org. Each paper will undergo a peer-review process for acceptance. Authors should send as soon as possible an e-mail with the tentative title of their contribution to the special session organisers. \nIMPORTANT DATES: \nSubmission of papers: 18 November 2019 \nNotification of acceptance: 31 January 2020 \nESANN conference: 22-24 April 2020 \n  \nSPECIAL SESSION ORGANISERS: \nDavide Bacciu\, University of Pisa (Italy) \nDanilo Mandic\, Imperial College (UK)
URL:https://web.math.unipd.it/deeplearning/event/special-session-tensor-decompositions-in-deep-learning-esann-2020/
LOCATION:Bruges\, Bruges\, Belgium
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20191206
DTEND;VALUE=DATE:20191210
DTSTAMP:20260417T190536
CREATED:20190717T132934Z
LAST-MODIFIED:20190717T134003Z
UID:130-1575590400-1575935999@web.math.unipd.it
SUMMARY:IEEE Symposium on Deep Learning (IEEE DL’19) @ SSCI 2019.
DESCRIPTION:DL is a symposium in the IEEE Symposium Series on Computational Intelligence\, December 6-9\, 2019\, Xiamen\, China. \nDescription\nDeep Learning (DL) is growing in popularity because it solves complex problems in machine learning by exploiting multi-scale\, multi-layer architectures that make better use of patterns in the data. Multi-scale machine perception tasks such as object and speech recognition using DL have recently outperformed systems that had been under development for many years. The principles of DL\, and its ability to capture multi-scale representations\, are very general\, and the technology can be applied to many other problem domains\, which makes it quite attractive. Many open problems and challenges still exist\, e.g. interpretability\, computational and time costs\, repeatability of results\, convergence\, and the ability to learn from very small amounts of data and to evolve dynamically/continue to learn. The Symposium will provide a forum for discussing new DL advances and challenges and for brainstorming new solutions and directions among top scientists\, researchers\, professionals\, practitioners and students with an interest in DL and related areas\, including applications to autonomous transportation\, communications\, medicine\, financial services\, etc. 
\nTopics\nTopics of IEEE DL’19 include but are not limited to: \n\nUnsupervised\, semi-\, and supervised learning\nDeep reinforcement learning (deep value function estimation\, policy learning and stochastic control)\nMemory Networks and differentiable programming\nImplementation issues (software and hardware)\nDimensionality expansion and sparse modeling\nLearning representations from large-scale data\nMulti-task learning\nLearning from multiple modalities\nWeakly supervised learning\nMetric learning and kernel learning\nHierarchical models\nInterpretable DL\nFuzzy rule-based DL\nNon-Iterative DL\nRecursive DL\nRepeatability of results in DL\nConvergence in DL\nIncremental DL\nEvolving DL\nFast DL\nApplications in:\n\nImage/video\nAudio/speech\nNatural language processing\nRobotics\, navigation\, control\nGames\nCognitive architectures\nAI\n\n\n\nSymposium website
URL:https://web.math.unipd.it/deeplearning/event/ieee-symposium-on-deep-learning-ieee-dl19-ssci-2019/
LOCATION:Seaview Resort Xiamen\, Xiamen\, China
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20190714
DTEND;VALUE=DATE:20190720
DTSTAMP:20260417T190536
CREATED:20190717T134252Z
LAST-MODIFIED:20191105T092300Z
UID:139-1563062400-1563580799@web.math.unipd.it
SUMMARY:Special Session: Learning Representations for Structured Data @ IJCNN 2019
DESCRIPTION:Learning Representations for Structured Data is a special session at the 2019 International Joint Conference on Neural Networks (IJCNN)\, which will be held at the InterContinental Budapest Hotel in Budapest\, Hungary\, on July 14-19\, 2019. \nCall for papers\nStructured data\, e.g. sequences\, trees and graphs\, are a natural representation for compound information made of atomic information pieces (i.e. the nodes and their labels) and their intertwined relationships\, represented by the edges (and their labels). Sequences are simple structures representing linear dependencies\, such as in genomics and proteomics\, or with time series data. Trees\, on the other hand\, allow modeling hierarchical contexts and relationships\, such as with natural language sentences\, crystallographic structures\, and images. Graphs are the most general and complex form of structured data\, allowing the representation of networks of interacting elements\, e.g. in social graphs or metabolomics\, as well as data where topological variations influence the features of interest\, e.g. molecular compounds. Being able to process data in these rich structured forms provides a fundamental advantage when it comes to identifying data patterns suitable for predictive and/or explorative analyses. This has motivated the machine learning community’s recent and increasing interest in the development of learning models for structured information. \nOn the other hand\, recent improvements in the predictive performance of machine learning methods are due to their ability\, in contrast to traditional approaches\, to learn a “good” representation for the task under consideration. Deep Learning techniques are nowadays widespread\, since they allow performing such representation learning in an end-to-end fashion. Nonetheless\, representation learning is becoming of great importance in other areas\, such as in kernel-based and probabilistic models. 
It has also been shown that\, when the data available for the task at hand is limited\, it is still beneficial to resort to representations learned in an unsupervised fashion\, or on different\, but related\, tasks. \nThis session focuses on learning representations for structured data such as sequences\, trees\, graphs\, and relational data. Topics that are of interest to this session include\, but are not limited to: \n\nProbabilistic models for structured data\nStructured output generation (probabilistic models\, variational autoencoders\, adversarial training\, …)\nDeep learning and representation learning for structures\nLearning with network data\nRecurrent\, recursive and contextual models\nReservoir computing and randomized neural networks for structures\nKernels for structured data\nRelational deep learning\nLearning implicit representations\nApplications of adaptive structured data processing: e.g. Natural Language Processing\, machine vision (e.g. point clouds as graphs)\, materials science\, chemoinformatics\, computational biology\, social networks.\n\nImportant Dates\nPaper Submissions: December 15\, 2018 \nPaper Acceptance Notifications: January 30\, 2019 \nSession Organisers\n\nDavide Bacciu\, University of Pisa\nThomas Gärtner\, University of Nottingham\nNicolò Navarin\, University of Padova\nAlessandro Sperduti\, University of Padova\n\nSpecial Session website.
URL:https://web.math.unipd.it/deeplearning/event/special-session-learning-representations-for-structured-data-ijcnn-2019/
LOCATION:Intercontinental Budapest Hotel\, Budapest\, Hungary
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20190416
DTEND;VALUE=DATE:20190419
DTSTAMP:20260417T190536
CREATED:20200228T104136Z
LAST-MODIFIED:20200228T110024Z
UID:186-1555372800-1555631999@web.math.unipd.it
SUMMARY:INNS CONFERENCE ON BIG DATA AND DEEP LEARNING 2019
DESCRIPTION:VENUE and ORGANIZATION The 2019 INNS Big Data and Deep Learning (INNSBDDL 2019) conference will be held in Sestri Levante\, Italy\, April 16-18\, 2019.\nThe conference is organized by the International Neural Network Society\, with the aim of providing an international meeting for researchers and other professionals in Big Data\, Deep Learning and related areas. It will feature invited plenary talks by world-renowned speakers in the area\, in addition to regular and special technical sessions with oral and poster presentations. Moreover\, workshops and tutorials will also be featured. \n \nSCOPE We solicit both solid contributions and preliminary results that show the potential and the limitations of new ideas\, refinements\, or cross-fertilizations in any aspect of Big Data and Deep Learning. Both theoretical and practical results are welcome. \nExample topics of interest include but are not limited to the following: \n\nBig Data Science and Foundations\n\nNovel Theoretical Models for Big Data\nNew Computational Models for Big Data\nData and Information Quality for Big Data\n\n\nBig Data Mining\n\nSocial Web Mining\nData Acquisition\, Integration\, Cleaning\, and Best Practices\nVisualization Analytics for Big Data\nComputational Modeling and Data Integration\nLarge-scale Recommendation Systems and Social Media Systems\nCloud/Grid/Stream Data Mining\nBig Velocity Data\nLink and Graph Mining\nSemantic-based Data Mining and Data Pre-processing\nMobility and Big Data\nMultimedia and Multi-structured Data-Big Variety Data\n\n\nModern Practical Deep Networks\n\nDeep Feedforward Networks\nRegularization for Deep Learning\nOptimization for Training Deep Models\nConvolutional Networks\nSequence Modeling: Recurrent and Recursive Nets\nPractical Methodology\n\n\nDeep Learning Research\n\nLinear Factor Models\nAutoencoders\nRepresentation Learning\nStructured Probabilistic Models for Deep Learning\nMonte Carlo Methods\nConfronting the Partition 
Function\nApproximate Inference\nDeep Generative Models\n\nIMPORTANT DATES \n\nDeadline for submission of tutorial and workshop proposals: August 31\, 2018\nNotification of tutorial and workshop proposals: September 30\, 2018\nDeadline of full paper submission: October 31\, 2018\nNotification of paper acceptance: December 31\, 2018\nCamera-ready submission: January 31\, 2019\nEarly registration deadline: January 15\, 2019\nRegistration deadline: January 31\, 2019\nConference date: April 16-18\, 2019\n\nPROCEEDINGS and SPECIAL ISSUE \nWorks submitted as regular papers will be published in a series indexed by Scopus. Submitted papers will be reviewed by PC members based on technical quality\, relevance\, originality\, significance and clarity. At least one author of an accepted submission should register to present their work at the conference. \nSelected papers presented at INNS BDDL 2019 will be included in special issues of top journals in the field (prospective journals: Big Data Research\, IEEE Transactions on Neural Networks and Learning Systems\, Neurocomputing\, etc).
URL:https://web.math.unipd.it/deeplearning/event/inns-conference-on-big-data-and-deep-learning-2019/
LOCATION:Sestri Levante (GE)\, Italy\, Sestri Levante (GE)\, Italy
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20190401
DTEND;VALUE=DATE:20190402
DTSTAMP:20260417T190536
CREATED:20190717T134840Z
LAST-MODIFIED:20190717T134840Z
UID:142-1554076800-1554163199@web.math.unipd.it
SUMMARY:Past Events
DESCRIPTION:The following events are linked to the old version of the website. \n\nINNS Conference on Big Data and Deep Learning 2019 \nIEEE Symposium on Deep Learning @ SSCI 2018\nSpecial Session on Empowering Deep Learning Models @ WCCI (IJCNN) 2018\nSpecial Session on Deep Learning for Structured and Multimedia Information @ WCCI 2018\nSpecial Session on Interpretable Deep Learning Classifiers @ WCCI 2018\nIEEE Symposium on Deep Learning @ SSCI 2017\nSpecial Session on Deep Learning @ ESANN 2016\nSpecial Session on Deep Learning\, Medical Imaging\, and Translational Medicine @ IJCNN 2016\nSpecial Session on Theoretical Foundations of Deep Learning Models and Algorithms @ IJCNN 2016\nSpecial Session on Deep Learning for Big Multimedia Understanding @ IJCNN 2016\nSpecial Session on Deep Learning for Brain-Like Computing and Pattern Recognition @ IJCNN 2016
URL:https://web.math.unipd.it/deeplearning/event/past-events/
LOCATION:Various Venues
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20181118T080000
DTEND;TZID=UTC:20181121T170000
DTSTAMP:20260417T190536
CREATED:20200228T104552Z
LAST-MODIFIED:20200228T104552Z
UID:190-1542528000-1542819600@web.math.unipd.it
SUMMARY:2018 IEEE Symposium on Deep Learning (IEEE DL'18) @ SSCI 2018.
DESCRIPTION:Description: Deep Learning (DL) is growing in popularity because it solves complex problems in machine learning by exploiting multi-scale\, multi-layer architectures that make better use of patterns in the data. Multi-scale machine perception tasks such as object and speech recognition using DL have recently outperformed systems that had been under development for many years. The principles of DL\, and its ability to capture multi-scale representations\, are very general\, and the technology can be applied to many other problem domains\, which makes it quite attractive. Many open problems and challenges still exist\, e.g. interpretability\, computational and time costs\, repeatability of results\, convergence\, and the ability to learn from very small amounts of data and to evolve dynamically/continue to learn. The Symposium will provide a forum for discussing new DL advances and challenges and for brainstorming new solutions and directions among top scientists\, researchers\, professionals\, practitioners and students with an interest in DL and related areas\, including applications to autonomous transportation\, communications\, medicine\, financial services\, etc. \nManuscripts should be submitted in PDF format; see the symposium website for further submission guidelines. 
\nTopics of IEEE DL’18 include but are not limited to: \n\nUnsupervised\, semi-\, and supervised learning\nDeep reinforcement learning (deep value function estimation\, policy learning and stochastic control)\nMemory Networks and differentiable programming\nImplementation issues (software and hardware)\nDimensionality expansion and sparse modeling\nLearning representations from large-scale data\nMulti-task learning\nLearning from multiple modalities\nWeakly supervised learning\nMetric learning and kernel learning\nHierarchical models\nInterpretable DL\nFuzzy rule-based DL\nNon-Iterative DL\nRecursive DL\nRepeatability of results in DL\nConvergence in DL\nIncremental DL\nEvolving DL\nFast DL\n\nApplications in: \n\n\n\nImage/video\nAudio/speech\nNatural language processing\nRobotics\, navigation\, control\nGames\nCognitive architectures\nAI\n\n\n\nSymposium Web Site
URL:https://web.math.unipd.it/deeplearning/event/2018-ieee-symposium-on-deep-learning-ieee-dl18-ssci-2018/
LOCATION:BENGALURU\, BENGALURU\, India
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20180708
DTEND;VALUE=DATE:20180714
DTSTAMP:20260417T190536
CREATED:20200228T104858Z
LAST-MODIFIED:20200228T104858Z
UID:198-1531008000-1531526399@web.math.unipd.it
SUMMARY:Special Session on Interpretable Deep Learning Classifiers @ WCCI (IJCNN) 2018.
DESCRIPTION:Chairs: Plamen P. Angelov\, Lancaster University\, UK\, p.angelov@lancaster.ac.uk\nJose C. Principe\, University of Florida\, principe@cnel.ufl.edu. \nSynopsis: Deep Learning is becoming a synonym for highly precise (reaching or surpassing human capabilities) computational intelligence techniques. Very interesting and important results have recently been reported in the scientific literature and have also grabbed the imagination of the wider public and industry\, helping propel interest in AI\, neural networks\, and machine learning. It has been applied mostly to solve classification problems in image processing\, but also to predictive tasks in speech processing and other problems. Despite the undoubted success in achieving high precision and avoiding handcrafting in feature selection\, a number of issues remain unresolved\, such as: i) transparency and interpretability; ii) the requirement for extremely large training data sets\, computational resources and time; iii) overfitting and catastrophic failures with high confidence in some cases; iv) convergence proofs for the case of reinforcement learning; v) rigid structures unable to adapt/dynamically evolve with new samples and/or new classes; vi) repeatability of the results. Methodologically\, the vast majority of the techniques in this hot and quickly developing area are based exclusively on neural networks (convolutional\, belief-based\, etc.). Very recently\, publications have appeared in which the deep learning (multi-layer) architecture with different levels of abstraction is built on fuzzy rule-based systems\, or in which fuzzy sets are used to represent coefficients/weights in Restricted Boltzmann Machines\, etc. The aim of the special session is to address the bottleneck issues listed above and to discuss and present alternative and the most recent methods\, techniques and approaches that can help resolve these issues. 
\nThe specific sub-topics that will be of interest include: \n\nInterpretable/Transparent Deep Learning\nComputational and time complexity/efficiency of Deep Learning Methods\nRepeatability of the results of Deep Learning Methods\nDegree of confidence in the results of Deep Learning\nHighly Parallelisable Deep Learning Methods\nDeep Learning with proven convergence\nRe-trainability and dynamically evolving structures/architectures for Deep Learning\nEnsembles of Deep Learning Classifiers\nFuzzy Deep Rule-based Classifiers\nSelf-adaptive and Self-organising Deep Learning Architectures\n\nAlso applications to: \n\nComputer Vision\nImage Classification\nRobotics\nRemote Sensing\nBiology and Tomography\nSurveillance and Defense\nIndustry 4.0\nAssistive Technologies and Digital Health\n\nImportant dates: \n\nPaper Submission Deadline: 15 January 2018\nPaper Acceptance Notification Date: 15 March 2018\nFinal Paper Submission and Early Registration Deadline: 1st May 2018\nIEEE WCCI 2018: 08-13 July 2018\n\nSubmission Guidelines: Please follow the regular submission guidelines of WCCI 2018. Please notify the chairs of your submission by sending an email to: p.angelov@lancaster.ac.uk or principe@cnel.ufl.edu. \nThis special session is supported by the IEEE Task Force on Deep Learning and by Evolving and Adaptive Fuzzy Systems. \nConference Web Site
URL:https://web.math.unipd.it/deeplearning/event/special-session-on-interpretable-deep-learning-classifiers-wcci-ijcnn-2018/
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20180708
DTEND;VALUE=DATE:20180714
DTSTAMP:20260417T190536
CREATED:20200228T104804Z
LAST-MODIFIED:20200228T104804Z
UID:195-1531008000-1531526399@web.math.unipd.it
SUMMARY:Special Session on Deep Learning for Structured and Multimedia Information @ WCCI (IJCNN) 2018.
DESCRIPTION:Chairs: Davide Bacciu (bacciu@di.unipi.it)\, Silvio Jamil F. Guimarães and Zenilton K. G. Patrocínio Jr. \nhttp://www.icei.pucminas.br/projetos/viplab/ijcnn-deepsm/  \nA key factor triggering the deep learning revolution has been its ground-breaking performance on image and video processing applications. These have been built mostly on a (multi-dimensional) raw data representation of the visual information. Multimedia content\, on the other hand\, calls for more articulated data representations catering for the multimodal nature of this information. These are often based on a structured representation that can capture the complexity of the contextual\, semantic and geometrical relationships among the visual\, phonetic and textual entities and concepts. \nScope and Topics: \nThe goal of this special session is to provide a forum for researchers working on the next generation of deep learning models for machine vision and multimedia information\, models capable of extracting and processing information in a structured representation and/or with a multimodal nature. We welcome contributions proposing innovative deep models dealing with: \n\nlearning hierarchical or networked representations of multimedia information;\nprocessing of structured multimedia information in the form of sequences\, labelled trees\, as well as more general forms of graphs;\nunderstanding and synthesis of multimodal data;\nfusion of multimodal information.\n\nThis special session is meant to attract researchers from the deep learning\, machine vision and multimedia information communities. We aim to bring together researchers with a consolidated tradition in structured data processing (such as in machine learning and NLP) and those with machine vision and multimedia processing insight who mostly work with flat data representations. 
\nTopics of interest for this special session include\, but are not limited to\, the following: \n\ndeep learning models for structured data;\nrepresentation learning in machine vision and multimedia processing;\nhierarchical/structured visual processing;\ndeep models for visual data streams;\ngenerative and variational deep learning for multimedia data;\nmultimedia data synthesis;\nattentional and bio-inspired models for the processing of visual and audio information;\ndeep learning applied to machine vision and multimedia processing\, such as: biomedical images and biobanks\, pose and gesture estimation from graphs\, etc.;\ninnovative software and libraries for deep learning and multimedia content understanding.\n\nImportant dates: \n\nPaper Submission Deadline: 15 January 2018\nPaper Acceptance Notification Date: 15 March 2018\nFinal Paper Submission and Early Registration Deadline: 1st May 2018\nIEEE WCCI 2018: 08-13 July 2018\n\nThis special session is supported by the IEEE Task Force on Deep Learning. \nConference Web Site
URL:https://web.math.unipd.it/deeplearning/event/special-session-on-deep-learning-for-structured-and-multimedia-information-wcci-ijcnn-2018/
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20180708
DTEND;VALUE=DATE:20180714
DTSTAMP:20260417T190536
CREATED:20200228T104716Z
LAST-MODIFIED:20200228T104716Z
UID:193-1531008000-1531526399@web.math.unipd.it
SUMMARY:Special Session on Empowering Deep Learning Models @ WCCI (IJCNN) 2018.
DESCRIPTION:Chairs: Nicolò Navarin nnavarin@math.unipd.it\, Luca Oneto\, Luca Pasa and Alessandro Sperduti. \nDescription: In recent years\, Deep Learning has become the go-to solution for a broad range of applications\, often outperforming the state of the art. Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state of the art in speech recognition\, computer vision\, drug discovery\, genomics and many other fields. \nScope and Topics: The goal of this special session is to provide a forum for focused discussions on new extensions of deep learning models and techniques\, and to gain a deeper understanding of the difficulties and limitations associated with state-of-the-art approaches and algorithms. Practitioners should provide practical insights to the theoreticians\, who\, in turn\, should supply theoretical insights and guarantees\, further strengthening and sharpening practical intuitions and wisdom. \nExamples of these possible extensions are: \n\nMultimodal and Multitask Deep Learning\nDeep Transfer Learning\nDeep Recurrent and Recursive Neural Networks\nDeep Learning on Structured Data\nInterpretability of Deep Learning\nPrivate and Federated Deep Learning\nGenerative and Adversarial Deep Learning\nRandomized Deep Learning (Deep ELM\, Deep ESN\, Deep Reservoir Computing)\n\nThe focus of this special session is to attract both solid contributions and preliminary results which show the potential and the limitations of new ideas\, refinements\, or cross-fertilisations between deep learning and other fields of research in solving real-world problems. Both theoretical and practical results (e.g. Social Data Analysis\, Speech\, Natural Language Processing\, Cybersecurity) are welcome in our special session. This special session is supported by the IEEE Task Force on Deep Learning. 
\nImportant dates: \n\nPaper Submission Deadline: 15 January 2018\nPaper Acceptance Notification Date: 15 March 2018\nFinal Paper Submission and Early Registration Deadline: 1st May 2018\nIEEE WCCI 2018: 08-13 July 2018\n\nConference Web Site
URL:https://web.math.unipd.it/deeplearning/event/special-session-on-empowering-deep-learning-models-wcci-ijcnn-2018/
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20171127
DTEND;VALUE=DATE:20171202
DTSTAMP:20260417T190536
CREATED:20200228T105012Z
LAST-MODIFIED:20200228T105012Z
UID:200-1511740800-1512172799@web.math.unipd.it
SUMMARY:2017 IEEE Symposium on Deep Learning (IEEE DL'17) @ SSCI 2017.
DESCRIPTION:Description: Deep Learning (DL) is growing in popularity because it exploits rather well the unreasonable effectiveness of data to solve complex problems in machine learning. In fact\, multi-scale machine perception tasks such as object and speech recognition using DL have recently outperformed systems that had been under development for many years. The principles of DL\, and its ability to capture multi-scale representations\, are very general\, and the technology can be applied to many other problem domains\, which makes it quite attractive. Sponsored by the IEEE Computational Intelligence Society\, this event will attract top scientists\, researchers\, professionals\, practitioners and students from around the world. \nThe goal of the IEEE Symposium on DL is to provide a forum for interactions between researchers and practitioners in DL as well as in Artificial Neural Networks\, Bayesian Learning\, Generative and Predictive Modeling\, Optimization\, Cognitive Architectures and Machine Learning with an interest in DL. We are interested in discussing new DL advances and the challenges ahead\, and in brainstorming about new solutions and directions. We also seek applications from large engineering firms dedicated to construction and services in the energy\, autonomous transportation\, communications\, web\, marketing\, medical and financial services industries\, and scientific fields that require big data analytics. 
\nTopics of IEEE DL’17 include but are not limited to: \n\nUnsupervised\, semi-supervised\, and supervised learning\nDeep reinforcement learning (deep value function estimation\, policy learning and stochastic control)\nMemory Networks and differentiable programming\nImplementation issues\, both software and hardware platforms\nApplications in vision\, audio\, speech\, natural language processing\, robotics\, navigation\, control\, game AI\, cognitive architectures\, etc.\nDimensionality expansion and sparse modeling\nLearning representations from large-scale data\nMulti-task learning\nLearning from multiple modalities\nWeakly supervised learning\nMetric learning and kernel learning\nHierarchical models\nParallelisation in DL\nNon-Iterative DL\nRecursive DL\nIncremental DL\nEvolving DL\nFast DL\n\nSymposium Web Site
URL:https://web.math.unipd.it/deeplearning/event/2017-ieee-symposium-on-deep-learning-ieee-dl17-ssci-2017/
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20160724
DTEND;VALUE=DATE:20160730
DTSTAMP:20260417T190536
CREATED:20200228T105509Z
LAST-MODIFIED:20200228T105509Z
UID:210-1469318400-1469836799@web.math.unipd.it
SUMMARY:Special Session on Deep Learning for Brain-Like Computing and Pattern Recognition @ IJCNN 2016.
DESCRIPTION:Description: Deep learning is a topic of broad interest\, both to researchers who develop new deep architectures and learning algorithms and to practitioners who apply deep learning models to a wide range of applications\, from image classification to video tracking\, etc. Brain-like computing combines computational techniques with cognitive ideas\, principles and models inspired by the brain for building information systems used in humans’ everyday lives. Pattern recognition is a conventional area of artificial intelligence which focuses on the recognition of patterns and regularities in data. \nRecently\, there has been very rapid and impressive progress in these three areas\, in terms of both theories and applications\, but many challenges remain. This special session aims at bringing together researchers in machine learning and related areas to discuss the utility of deep learning for brain-like computing and pattern recognition\, the advances made\, the challenges we face\, and to brainstorm about new solutions and directions. \nTopics of interest to the special session include\, but are not limited to: \n\nunsupervised\, semi-supervised\, and supervised deep learning;\nactive learning\, transfer learning and multi-task learning;\ndimensionality reduction\, metric learning and kernel learning;\nsparse modeling;\nensemble learning;\nhierarchical architectures;\noptimization for deep models;\nintelligent data analysis and recommendation systems;\nimplementation issues\, parallelization\, software platforms\, hardware for deep learning and big data analysis;\napplications in video\, image\, texture\, text processing\, neuroscience\, medical imaging or any other field.\n\nSpecial Session Web Site
URL:https://web.math.unipd.it/deeplearning/event/special-session-on-deep-learning-for-brain-like-computing-and-pattern-recognition-ijcnn-2016/
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20160724
DTEND;VALUE=DATE:20160730
DTSTAMP:20260417T190536
CREATED:20200228T105429Z
LAST-MODIFIED:20200228T105429Z
UID:208-1469318400-1469836799@web.math.unipd.it
SUMMARY:Special Session on Deep Learning for Big Multimedia Understanding @ IJCNN 2016.
DESCRIPTION:Description: Conventional multimedia understanding is usually built on top of handcrafted features\, which are often too restrictive to capture complex multimedia content. Recent progress in deep learning opens an exciting new era\, placing multimedia understanding on a more rigorous foundation with automatically learned representations to model multimodal data and cross-media interactions. Existing studies have revealed promising results that have greatly advanced state-of-the-art performance in a series of multimedia research areas\, from multimedia content analysis\, to modeling the interactions between multimodal data\, to multimedia content recommendation systems\, to name a few. \nThis special session aims to provide a forum for the presentation of recent advances in deep learning research that directly concern the multimedia community. For multimedia research\, it is especially important to develop deep learning methods that capture the dependencies between different genres of data\, building joint deep representations for diverse modalities. \nTopics of interest to the special session include\, but are not limited to: \n\nNovel deep network architectures for multimodal data representation;\nDeep learning for new multimedia applications;\nEfficient training and inference methods for multimedia deep networks;\nEmerging applications of deep learning in multimedia search\, retrieval and management;\nDeep learning for multimedia content analysis and recommendation;\nDeep learning for cross-media analysis\, knowledge transfer and information sharing;\nDistributed computing\, GPUs and new hardware for deep learning in multimedia research;\nOther deep learning topics for multimedia computing\, involving at least two modalities.\n\nOrganized by Dr. Jinhui Tang\, Nanjing University of Science and Technology\, China\, and Dr. Zechao Li\, Nanjing University of Science and Technology\, China.\nSpecial Session Web Site
URL:https://web.math.unipd.it/deeplearning/event/special-session-on-deep-learning-for-big-multimedia-understanding-ijcnn-2016/
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20160724
DTEND;VALUE=DATE:20160730
DTSTAMP:20260417T190536
CREATED:20200228T105338Z
LAST-MODIFIED:20200228T105338Z
UID:206-1469318400-1469836799@web.math.unipd.it
SUMMARY:Special Session on Theoretical Foundations of Deep Learning Models and Algorithms @ IJCNN 2016.
DESCRIPTION:Description: Deep learning models and techniques are increasingly becoming the computational tool of choice when facing difficult applicative problems\, such as speech and image understanding. The reason for this huge interest in deep learning is that its adoption leads to human (and\, in some cases\, super-human) performance. These successes\, however\, have mainly been obtained on an empirical basis\, often thanks to the computational power provided by parallel computing facilities such as GPUs or CPU clusters. \nAlthough some recent works have addressed deep learning from a theoretical perspective\, there is still a limited understanding of why deep architectures work so well and of how to design computationally efficient and effective training algorithms. This special session aims to gather together leading scientists in deep learning and related areas within computational intelligence\, neuroscience\, machine learning\, artificial intelligence\, mathematics\, and statistics\, interested in all aspects of deep architectures and deep learning\, with a particular emphasis on understanding fundamental principles. 
\nTopics of interest to the special session include\, but are not limited to: \n\nTheoretical results on representation and learning in natural or artificial deep architectures;\nTheoretical and/or empirical analysis of specific natural or artificial deep architectures or algorithms;\nInnovative deep architectures/algorithms for data representation and analysis\, including both supervised methods like deep convolution networks and unsupervised ones like stacked auto-encoders and deep Boltzmann machines;\nDesign and/or analysis of recurrent and recursive architectures for processing of sequences and more general data structures;\nApplications of deep learning in data representation and analysis\, including recognition\, understanding\, detection\, segmentation\, retrieval\, restoration\, super-resolution\, and compression;\nDeep learning algorithms that efficiently handle large-scale data.\n\nAlessandro Sperduti\, Univ. Padova (Italy)\, Jose C. Principe\, University of Florida (USA)\, Plamen Angelov\, Lancaster University (UK).\nSpecial Session Web Site
URL:https://web.math.unipd.it/deeplearning/event/special-session-on-theoretical-foundations-of-deep-learning-models-and-algorithms-ijcnn-2016/
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20160724
DTEND;VALUE=DATE:20160730
DTSTAMP:20260417T190536
CREATED:20200228T105249Z
LAST-MODIFIED:20200228T105249Z
UID:204-1469318400-1469836799@web.math.unipd.it
SUMMARY:Special Session on Deep Learning\, Medical Imaging\, and Translational Medicine @ IJCNN 2016.
DESCRIPTION:Description: Deep learning has demonstrated its capability for many vision problems\, such as face detection and recognition\, image classification\, etc. It is expected that this technique can benefit the area of medical image analysis\, as well as imaging-based translational medicine. Though a few pioneering works can be found in the literature\, there are still a lot of unresolved issues in applying deep learning to medical images. \nThe goal of this special session is to present works that focus on the design and use of deep learning in medical image analysis as well as in imaging-based translational medical studies. This special session is going to set the trends and identify the challenges of the use of deep learning methods in the field of medical imaging. Meanwhile\, it is expected to increase the connections between software developers\, specialist researchers and applied end-users from diverse fields. \nTopics of interest to the special session include\, but are not limited to: \n\nImage descriptors and feature extraction;\nImage super-resolution;\nImage reconstruction;\nImage registration;\nImage segmentation and labeling;\nComputer-assisted lesion detection;\nComputer-assisted diagnosis;\nDeep learning model selection;\nMeta-heuristic techniques for fine-tuning parameters in deep learning-based architectures;\nOther related translational medical applications.\n\nOrganized by Qian Wang\, Jun Shi\, Shihui Ying\, Manhua Liu and Yonghong Shi\nSpecial Session Web Site
URL:https://web.math.unipd.it/deeplearning/event/special-session-on-deep-learning-medical-imaging-and-translational-medicine-ijcnn-2016/
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20160427
DTEND;VALUE=DATE:20160430
DTSTAMP:20260417T190536
CREATED:20200228T105144Z
LAST-MODIFIED:20200228T105144Z
UID:202-1461715200-1461974399@web.math.unipd.it
SUMMARY:Special Session on Deep Learning @ ESANN 2016.
DESCRIPTION:Description: Deep learning models and techniques are nowadays the leading approaches for facing complex machine learning and pattern recognition problems\, especially perceptual tasks such as speech and image understanding. The adoption of deep architectures\, comprising multiple\, adaptable\, processing layers\, has recently allowed significant improvements in performance for these types of tasks. Both unsupervised and supervised approaches for training deep architectures have been empirically explored\, also thanks to the adoption of parallel computing facilities such as GPUs or CPU clusters. Despite that\, there is a limited understanding of why deep architectures work so well and of how to design computationally efficient and effective training algorithms. A natural source of inspiration for a better understanding of these issues is the study of the human brain\, where deep structures are now well recognized and pervasive (e.g. human visual recognition requires the activation of a hierarchy of processing stages and pathways). \nThis special session focuses on all aspects of deep architectures and deep learning\, with a particular emphasis on understanding fundamental principles. Because of that\, it aims to bring together leading scientists in deep learning and related areas within neuroscience\, machine learning\, artificial intelligence\, mathematics\, and statistics. 
\nTopics of interest to the special session include\, but are not limited to: \n\nTheoretical results on representation and learning in natural or artificial deep architectures;\nTheoretical and/or empirical analysis of specific natural or artificial deep architectures or algorithms;\nInnovative deep architectures/algorithms for data representation and analysis\, including both supervised methods like deep convolution networks and unsupervised ones like stacked auto-encoders and deep Boltzmann machines;\nDesign and/or analysis of recurrent and recursive architectures for processing of sequences and more general data structures;\nApplications of deep learning in data representation and analysis\, including recognition\, understanding\, detection\, segmentation\, retrieval\, restoration\, super-resolution\, and compression;\nDeep learning algorithms that efficiently handle large-scale data;\nDeep learning software and hardware architectures for applications.\n\nSpecial Session on Deep Learning @ ESANN 2016
URL:https://web.math.unipd.it/deeplearning/event/special-session-on-deep-learning-esann-2016/
END:VEVENT
END:VCALENDAR