ASTROINFORMATICS-2023 CONFERENCE

1–6 OCTOBER 2023 – INAF NATIONAL AUDITORIUM, Observatory of Capodimonte, Naples, Italy

Important Warning

We have been informed that scammers are contacting invited speakers and SOC members on behalf of the organization, asking for details about accommodation and proposing reservations. Do not reply to these emails; we will never inquire about this.

ABOUT THE CONFERENCE

Astroinformatics 2023 aspires to continue the successful series of meetings that over the last decade have attracted researchers engaged in the processing of astronomical data using modern computational methods. The scientific exchange between the astronomical and computational worlds is, as always, the main focus of the event. This year, the workshop will focus on the new scenarios opened by emerging deep learning and AI methodologies. During the five-day meeting, specific sessions will be devoted to:

  • Data challenges from ongoing and future projects
  • Generative AI and Explainable AI in Astrophysics
  • Novel AI applications
  • The evolving computing landscape (HPC and quantum)
  • AI-assisted discovery of analytical relations in the data
  • Methodological transfer.

The meeting is structured around invited talks with some space for contributed talks and ample space for posters. Panel discussions on selected topics will be organised at the end of each session. The meeting proceedings will be published in a special issue of Frontiers.

REGISTRATION

Further registrations are possible only on-site.

THE CONFERENCE SCHEDULE

October 1

18:30 – 20:00 Welcome cocktail and pre-registration

October 2

9:15 – 13:30 Morning Session
14:30 – 18:00 Afternoon Session

October 3

9:30 – 13:00 Morning Session
14:00 – 17:50 Afternoon Session

October 4

9:30 – 13:30 Morning Session
14:30 – 15:10 Afternoon Session / Social Dinner

October 5

9:30 – 13:10 Morning Session
14:00 – 18:30 Afternoon Session

October 6

9:30 – 13:30 Morning Session

6 Days · 20 Speakers · 2 Workshops · 30 Hours

Keynote Speaker

Yann LeCun
Chief AI scientist at Facebook & Silver Professor at the Courant Institute, New York University

Yann is a Turing Award-winning French computer scientist working primarily in the fields of machine learning, computer vision, mobile robotics, and computational neuroscience. He is the Silver Professor of the Courant Institute of Mathematical Sciences at New York University and Vice-President, Chief AI Scientist at Meta.

Invited Speakers

Giovanni Acampora
University of Naples Federico II

Quantum Computational Intelligence

Abstract: The world of computing is shifting towards new paradigms able to provide better performance in solving hard problems than classical computation. In this scenario, quantum computing is assuming a key role thanks to the recent technological advances achieved by several big companies in developing computational devices based on fundamental principles of quantum mechanics: superposition, entanglement and interference. These computers will be able to yield performance never seen before in several application domains, and the area of artificial intelligence may be the one most affected by this revolution. Indeed, on the one hand, the intrinsic parallelism provided by quantum computers could support the design of efficient algorithms for artificial intelligence such as, for example, the training algorithms of machine learning models and bio-inspired optimization algorithms; on the other hand, artificial intelligence techniques could be used to reduce the effect of quantum decoherence in quantum computing and make this type of computation more reliable. This talk aims to introduce the audience to this new research area and pave the way towards the design of innovative computing infrastructure where both quantum computing and artificial intelligence play a key role in surpassing the performance of conventional approaches.

Short Bio: Giovanni Acampora is Professor of Quantum Machine Learning at the University of Naples Federico II and coordinator of the Ph.D. program in Computational Intelligence. He introduced the Fuzzy Markup Language (FML), which became the first IEEE standard in fuzzy logic (IEEE 1855) and earned him the IEEE-SA Emerging Technology Award. He is Editor-in-Chief of the journal Quantum Machine Intelligence, and in 2019 he received the Canada-Italy Innovation Award.

Nikos Gianniotis
Heidelberg Institute for Theoretical Studies

Probabilistic Cross-Correlation for Delay Estimation

Abstract: The Interpolated Cross-Correlation Function (ICCF) has been the workhorse of astronomers when estimating the delay between pairs of lightcurves originating from AGN. We present a probabilistic reformulation of ICCF that enjoys several benefits, such as accounting for measurement error, out-of-sample predictions and, most importantly, the capability of delivering a posterior distribution of the delay. Our reformulation views the observed lightcurves in each band as the manifestation of a common latent signal that we model as a sample from a Gaussian process. We demonstrate the advantages of the probabilistic cross-correlation arising from its probabilistic grounding on a number of AGN objects.
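
To make the construction concrete, here is a rough sketch of the underlying idea (my illustration under simplified assumptions, not the speakers' implementation): two lightcurves are treated as noisy observations of one latent Gaussian-process signal, with the second band shifted by a candidate delay, and a delay posterior is obtained by scanning the GP marginal likelihood over a grid. The RBF kernel, its hyperparameters, and the toy data are all assumptions.

```python
# Sketch: delay estimation via a shared latent Gaussian process (illustrative only).
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def rbf_kernel(t1, t2, amp=1.0, scale=30.0):
    """Squared-exponential covariance between two sets of times (days)."""
    d = t1[:, None] - t2[None, :]
    return amp**2 * np.exp(-0.5 * (d / scale) ** 2)

def log_marginal(delay, t_a, y_a, s_a, t_b, y_b, s_b):
    """GP log marginal likelihood assuming band B lags band A by `delay`."""
    t = np.concatenate([t_a, t_b - delay])   # shift band B onto the latent clock
    y = np.concatenate([y_a, y_b])
    noise = np.concatenate([s_a, s_b]) ** 2  # measurement errors enter the diagonal
    K = rbf_kernel(t, t) + np.diag(noise)
    L, lower = cho_factor(K, lower=True)
    alpha = cho_solve((L, lower), y)
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * len(y) * np.log(2 * np.pi)

# Toy data: one latent sinusoid observed in two bands, band B delayed by 12 days.
rng = np.random.default_rng(0)
t_a, t_b = np.sort(rng.uniform(0, 200, 40)), np.sort(rng.uniform(0, 200, 40))
s_a, s_b = 0.1 * np.ones(40), 0.1 * np.ones(40)
y_a = np.sin(t_a / 20.0) + rng.normal(0, s_a)
y_b = np.sin((t_b - 12.0) / 20.0) + rng.normal(0, s_b)

delay_grid = np.linspace(-30, 30, 121)
logp = np.array([log_marginal(d, t_a, y_a, s_a, t_b, y_b, s_b) for d in delay_grid])
post = np.exp(logp - logp.max()); post /= post.sum()  # flat-prior posterior on the grid
print("posterior mean delay ≈", (delay_grid * post).sum())
```

Because the measurement errors sit on the covariance diagonal, the resulting posterior automatically widens for noisier data, which is one of the benefits the abstract highlights.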

Short Bio: TBA

Fabrizia Guglielmetti
European Southern Observatory - ESO

A BRAIN study to tackle image analysis with artificial intelligence in the ALMA2030 era

Abstract: BRAIN, an ESO internal ALMA development study, addresses the ill-posed inverse problem of image analysis by employing astrostatistics and astroinformatics. These emerging fields of research offer interdisciplinary approaches at the intersection of observational astronomy, statistics, algorithm development, and data science. In this study, we provide evidence of the benefits of employing these approaches in ALMA image analysis for operational and scientific purposes. We show the potential of two techniques (RESOLVE and DeepFocus) applied to ALMA calibrated science visibilities. Both provide significant advantages, with the potential to improve the quality and completeness of the data products and the overall processing time. Both approaches also demonstrate a logical pathway to address the incoming revolution in data analysis dictated by ALMA2030. Moreover, we bring the community additional products through a new package (ALMASim) to promote advancements in these fields, providing a refined ALMA simulator usable by a large community for training and/or testing new algorithms.

Short Bio: As a scientist at the ALMA Regional Centre at the European Southern Observatory (ESO HQ, Garching), she serves as the European Data Reduction Manager and is principal investigator of the ESO internal ALMA development study BRAIN, aimed at enhancing CASA imaging procedures. Her responsibilities encompass quality assurance, pipeline testing, and subsystem development to enhance the operational workflow used by data reducers across the ALMA partnership. Fabrizia's interests centre on astrostatistics and astroinformatics and multiwavelength research, along with creating astrometric catalogues. She actively contributes to popularizing Bayesian techniques (see the 'Bayes Forum' on the Garching research campus), which offer innovative solutions to the challenges posed by big data in astronomy. Through advanced algorithms that optimize decisions based on well-informed assumptions, she addresses these challenges with refined machine learning techniques. Her focus is to introduce these innovative techniques into the current ALMA data reduction workflow.

Ashish Mahabal
California Institute of Technology - Caltech

Anomaly detection: overview and application to ZTF

Abstract: Optical astronomy has become increasingly data-rich, with billions of sources and hundreds of observations per source. Many methods are being planned to classify objects in data-driven ways, be they in the Solar System, variable stars in our Galaxy, or extra-galactic objects. While current surveys and experience over a few decades have prepared us for well-understood classes, data covering wider parameter spaces are likely to harbor classes and phenomena not encountered before. There will be in-class extreme outliers as well as totally new phenomena. Identifying all kinds of anomalies is crucial to advance our understanding of the cosmos. The most exciting anomalies would be those that reveal our biases and selection effects, and lead us to entire populations unknown so far. We are also prepared to find a lot of artifacts in the process. A bit like bugs, these could reveal shortcomings in the processing pipelines, and an opportunity for improvement. We will provide a broad summary of anomaly detection and present our work on finding anomalies in ZTF data, starting from light curve features and methods like HDBSCAN and isolation forest to look for outliers. We will finish by showing how this connects with the bigger picture and can be generalized for other surveys.
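
For readers unfamiliar with the two methods named above, here is a minimal sketch of the general pattern (illustrative only, not the ZTF pipeline): score objects with an isolation forest, cluster them with HDBSCAN, and flag low-score or unclustered objects as anomaly candidates. The feature matrix here is random placeholder data standing in for light-curve features.

```python
# Sketch: outlier hunting on light-curve features (illustrative, not the ZTF pipeline).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler
import hdbscan  # pip install hdbscan

rng = np.random.default_rng(42)
# Placeholder features (e.g. amplitude, period, skewness, ...), one row per object.
X = rng.normal(size=(5000, 8))
X[:10] += 6.0  # inject a handful of obvious outliers

Xs = StandardScaler().fit_transform(X)

# Isolation forest: lower scores = more easily isolated = more anomalous.
iso = IsolationForest(n_estimators=300, random_state=0).fit(Xs)
iso_score = iso.score_samples(Xs)

# HDBSCAN: points labelled -1 fall in no dense cluster and are outlier candidates.
labels = hdbscan.HDBSCAN(min_cluster_size=50).fit_predict(Xs)

candidates = np.where((iso_score < np.quantile(iso_score, 0.01)) | (labels == -1))[0]
print(f"{len(candidates)} anomaly candidates flagged for human inspection")
```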

Short Bio: Ashish Mahabal is an astronomer (Division of Physics, Mathematics, and Astronomy) and Lead Computational and Data Scientist (Center for Data Driven Discovery) at the California Institute of Technology. His interests include large sky surveys, classification, deep learning, and methodology transfer to other complex-data fields like medicine. He leads the machine learning effort for the Zwicky Transient Facility, a new large survey covering the entire Northern Sky every few nights. He also works with the Data Science group at the Jet Propulsion Laboratory and is part of the Early Detection Research Network (EDRN) for cancer, and MCL.

Kai Polsterer
Heidelberg Institute for Theoretical Studies

From Photometric Redshifts to Improved Weather Forecasts: an interdisciplinary view on machine learning in astronomy

Abstract: The amount, size, and complexity of astronomical data sets have grown rapidly over the last decades. Now, with new technologies and dedicated survey telescopes, the databases are growing even faster. Besides dealing with poly-structured and complex data, sparse data has become a field of growing scientific interest. By applying technologies from the fields of computer science, mathematics, and statistics, astronomical data can be accessed and analyzed more efficiently.
A specific field of research in Astroinformatics is the estimation of the redshift of extra-galactic sources, a measure of their distance, using just sparse photometric observations. Observing the full spectroscopic information that would be necessary to measure the redshift directly would be too time-consuming. Therefore, building accurate statistical models is a mandatory step, especially when it comes to reflecting the uncertainty of the estimates. Statistics, and especially weather forecasting, has introduced and utilized proper scoring rules, in particular the continuous ranked probability score, to characterize the calibration as well as the sharpness of predicted probability density functions.
This talk presents what we achieved when using proper scoring rules to train deep neural networks and to evaluate the model estimates. We present how this work led from well-calibrated redshift estimates to an improvement in the statistical post-processing of weather forecast simulations. The presented work is an example of interdisciplinarity in data science and of how methods can bridge different fields of application.
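
To make the scoring-rule idea concrete, here is a small sketch (my illustration, not the speaker's code) of the continuous ranked probability score evaluated on a sample-based predictive distribution; the toy "redshift PDFs" below are assumptions.

```python
# Sketch: continuous ranked probability score (CRPS) for a sample-based forecast.
import numpy as np

def crps_from_samples(samples, obs):
    """CRPS estimate from Monte Carlo draws of the predictive distribution.

    Uses the identity CRPS(F, y) = E|X - y| - 0.5 * E|X - X'|,
    with X, X' independent draws from the forecast F.
    """
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - obs))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

rng = np.random.default_rng(1)
z_true = 0.42
sharp_calibrated = rng.normal(z_true, 0.02, size=2000)  # sharp and centred
broad_calibrated = rng.normal(z_true, 0.20, size=2000)  # calibrated but vague
print(crps_from_samples(sharp_calibrated, z_true))  # smaller (better)
print(crps_from_samples(broad_calibrated, z_true))  # larger (worse)
```

Averaged over a test set, this score rewards exactly the combination the abstract mentions: calibration and sharpness together, which is why it also serves as a training loss.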

Short Bio: Kai Polsterer is the Astroinformatics Group leader and Deputy Scientific Director at the Heidelberg Institute for Theoretical Studies, operated by the Klaus Tschira Foundation. He is one of the European leaders in the field of astroinformatics. His contributions span a wide range of fields, from astrophysics to weather forecasting to medicine.

Regina Sarmiento
Instituto de Astrofísica de Canarias

Contrastive learning applied to astrophysics

Abstract: Reliable tools to extract patterns from high-dimensional spaces are becoming more necessary as astronomical datasets increase both in volume and complexity. Contrastive learning is a self-supervised machine learning algorithm that extracts informative measurements from multi-dimensional datasets; it has become increasingly popular in the computer vision and machine learning communities in recent years. To do so, it maximizes the agreement between the information extracted from augmented versions of the same input data, making the final representation invariant to the applied transformations. Contrastive learning is particularly useful in astronomy for removing known instrumental effects and for performing supervised classifications and regressions with a limited amount of available labels. I will summarize the main concepts behind contrastive learning and review the first promising applications to astronomy. I also include some practical recommendations on which applications are particularly attractive for contrastive learning and present some examples on integral field spectroscopic data.
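
The "maximize agreement between augmented views" step is typically implemented with a loss such as SimCLR's NT-Xent. The sketch below is a generic illustration of that loss (not the speaker's code); the encoder and augmentation pipeline are left abstract.

```python
# Sketch: NT-Xent contrastive loss for paired augmented views (SimCLR-style).
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two augmentations of the same N inputs."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.T / temperature                          # cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-pairs
    n = z1.shape[0]
    # The positive for row i is its partner view: i+n for i<n, i-n otherwise.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Toy usage: embeddings that agree across views give a lower loss than random ones.
z = torch.randn(128, 64)
loss_aligned = nt_xent(z + 0.01 * torch.randn_like(z), z)
loss_random = nt_xent(torch.randn(128, 64), torch.randn(128, 64))
print(loss_aligned.item(), "<", loss_random.item())
```

In the instrumental-effects use case mentioned above, the "augmentations" would include the nuisance transformations one wants the representation to ignore.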

Short Bio: Born in Argentina, she is finishing her PhD on Galaxy Evolution and Deep Learning at the Institute for Astrophysics of the Canaries (IAC, Spain). Her work combines integral field spectroscopic (IFS) data and cosmological simulations with Deep Learning techniques to study the galaxy assembly history.

Michael Wilkinson
University of Groningen - RUG

Connected Morphological Operators in Astronomy: Tools for Object Detection and Pattern Analysis on Vast Data.

Abstract: Connected morphological filters have seen rapid development in theory, algorithms, and applications in many fields of computer vision. More recently, they have found use in astronomical applications, most notably in object detection and pattern analysis. Because these operators work on connected regions in the image in a data-driven way, rather than applying some arbitrary kernel to all pixels equally, parallel algorithms were initially not available. In the past decade this situation has been remedied, and parallel and distributed algorithms capable of handling giga- and tera-scale data sets are available. In this talk I will give an overview of existing tools and algorithms, and of new developments in multi-band object detection, dealing with huge data cubes from e.g. LOFAR, exploratory data analysis, and combining these methods with machine-learning tools.
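
To make "connected operator" concrete: an area opening removes bright connected components smaller than a given area while leaving every other pixel exactly as it was, in contrast to a kernel-based opening that alters the whole image. A minimal sketch with scikit-image on a toy image (my illustration, not the speaker's tools):

```python
# Sketch: a connected filter (area opening) on a toy "sky" image.
import numpy as np
from skimage import morphology

image = np.zeros((256, 256))
image[100:140, 100:140] = 50.0   # extended source, 1600 pixels
image[20:22, 20:22] = 200.0      # tiny 4-pixel artifact

# Remove bright connected components smaller than 16 pixels; the extended
# source survives untouched, pixel for pixel -- no kernel, no blurring.
filtered = morphology.area_opening(image, area_threshold=16)

print("artifact removed:", filtered[20:22, 20:22].max() == 0)
print("source untouched:", np.array_equal(filtered[100:140, 100:140],
                                          image[100:140, 100:140]))
```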

Short Bio: Michael Wilkinson obtained an MSc in astronomy from the Kapteyn Astronomical Institute, University of Groningen in 1993, after which he worked on image analysis of intestinal bacteria at the Department of  Medical Microbiology, University of Groningen, obtaining a PhD at the Institute of Mathematics and Computing Science, also in Groningen, in 1995. He was appointed as researcher at the Centre for High Performance Computing in Groningen working on simulating the intestinal microbial ecosystem on parallel computers. After this he worked as a researcher at the Johann Bernoulli Institute for Mathematics and Computer Science on image analysis of diatoms. He is currently associate professor at the Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, working on morphological image analysis and especially connected morphology and models of perceptual grouping. An important research focus is on handling giga- and tera-scale images in remote sensing, astronomy and other digital imaging modalities.

Guillermo Cabrera
University of Concepción

Unleashing the Power of Transformers for the Analysis of Cosmic Streams

Abstract: Transformers are deep learning architectures that have been shown to reach state-of-the-art performance across various domains. They originally gained attention thanks to their performance in natural language processing tasks. More recently, they have been successfully applied to tasks involving images, tabular data, and time series, among others. In this talk we will review recent advancements in transformers applied to the characterization of astronomical light curves, from task-specific models to foundation models. Transformers have become the new state of the art and will play a key role in analysing cosmic streams in real time from current and next-generation time-domain instruments such as the Vera C. Rubin Observatory and its Legacy Survey of Space and Time (LSST).
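
As a flavour of what this can look like in practice, here is a generic sketch under my own assumptions (not one of the models discussed in the talk): each (time, flux) pair becomes a token, and self-attention copes with the irregular sampling that makes convolutional or recurrent models awkward for light curves.

```python
# Sketch: a minimal transformer encoder over irregularly sampled light curves.
import torch
import torch.nn as nn

class LightCurveTransformer(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2, n_classes=5):
        super().__init__()
        self.embed = nn.Linear(2, d_model)  # each token is a (time, flux) pair
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x, pad_mask):
        """x: (B, T, 2) with columns (time, flux); pad_mask: (B, T) True where padded."""
        h = self.encoder(self.embed(x), src_key_padding_mask=pad_mask)
        h = h.masked_fill(pad_mask[..., None], 0.0)
        pooled = h.sum(1) / (~pad_mask).sum(1, keepdim=True)  # mean over real tokens
        return self.head(pooled)

model = LightCurveTransformer()
x = torch.randn(8, 120, 2)                    # 8 light curves, up to 120 observations
pad = torch.zeros(8, 120, dtype=torch.bool)   # no padding in this toy batch
print(model(x, pad).shape)                    # -> torch.Size([8, 5])
```

A foundation-model variant would pretrain such an encoder on unlabelled streams and swap the classification head per downstream task.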

Short Bio: Associate Professor at the Department of Computer Science, University of Concepción. His research interests include machine learning, computer vision, data science, astroinformatics, and bioinformatics. His work focuses on developing new algorithms for massive observational data from a wide variety of astronomical instruments such as ALMA, SDSS, HST, and the LSST, among others. He is a founding member of the Astroinformatics Laboratory at the Center for Mathematical Modeling and is currently a member of the Millennium Institute of Astrophysics.

Pablo Gomez
European Space Agency - ESA

Solving Large-scale Data Challenges with ESA Datalabs

Abstract: Current and upcoming space science missions will produce petascale data in the coming years. This requires a rethinking of data distribution and processing practices. For example, the Euclid mission will send more than 100 GB of compressed data to Earth every day. Analysis and processing of data on this scale require specialized infrastructure and toolchains. Further, providing users with this data locally is not practical due to bandwidth and storage constraints. Thus, a paradigm shift is required: bringing users' code to the data and providing a computational infrastructure and toolchain around the data. The ESA Datalabs platform is specifically focused on fulfilling this need. It provides a centralized platform with access to data from various missions, including the James Webb Space Telescope, Gaia, and others. Pre-configured environments with the necessary toolchains and standard software tools such as JupyterLab are provided and enable data access with minimal overhead. And, with the built-in Science Application Store (SCIAPPS), a streamlined environment allows rapid deployment of desired processing or science exploitation pipelines. In this manner, ESA Datalabs provides an accessible and potent framework for high-performance computing and machine learning applications. While users may upload data, there is no need to download data, thus mitigating the bandwidth burden. As the computational load is handled within the computational infrastructure of ESA Datalabs, high scalability is achieved, and resources can be requisitioned as needed. Finally, the platform-centric approach facilitates direct collaboration on code and data. The platform is already available to several hundred users and regularly showcased in dedicated workshops, and interested users may request access online.

Short Bio: TBA

Johan H. Knapen
Instituto de Astrofísica de Canarias - IAC

Training on the interface of astronomy and computer science

Abstract: Modern developments in fields like astronomy imply the need for advanced computational approaches to handle large data sets. However, most recent and current PhDs in astronomy have received little or no formal training in important areas including computer science, computational methods, software and algorithm design, or project management.
We will report on a number of scientific advances at the interface of extragalactic astronomy and computer science that have resulted from our EU-funded Initial Training Network SUNDIAL, in which 15 PhD candidates were trained. We will then discuss how these results will be built upon and expanded in a newly approved Marie Skłodowska-Curie Doctoral Network called EDUCADO (Exploring the Deep Universe by Computational Analysis of Data from Observations). We will train 10 doctoral candidates in the development of a variety of high-quality methods needed to address the formation of the faintest observed structures in the nearby Universe, including novel object detection algorithms and object recognition and parameter distribution analyses of complex datasets. We aim to detect unprecedented numbers of the faintest observable galaxies from new large-area surveys. We will study the morphology, populations, and distribution of large samples of various classes of dwarf galaxies, compare dwarf galaxy populations and properties across different environments, and confront the results with cosmological models of galaxy formation and evolution. Finally, we will perform detailed simulations and observations of the Milky Way and the Local Group to compare with dwarf galaxies in other environments. We aim for our interdisciplinary training tools, methods, and materials to become publicly available.

Short Bio: TBA

Nicola Napolitano
University of Naples Federico II

Machine Learning the Universe with upcoming Large Sky Surveys

Abstract: The emerging “tensions” in the measurement of some critical cosmological parameters, and the discrepancies between observations of galaxies and cosmological simulations, suggest that the “consensus” cosmological model, founded on the existence of two dark components, Dark Matter (DM) and Dark Energy (DE), may be either incomplete or incorrect.
The solutions to these “cracks” may reside in understanding the connection between the dark components, especially DM, and the baryons, i.e. the regular matter (stars and gas) that constitutes the visible part of galaxies. However, a more complex set of cosmological parameters, or some missing physics, including the nature of dark matter (e.g. some form of warm or self-interacting DM), cannot be excluded. I introduce a project aimed at exploiting data from the major next-generation imaging and spectroscopic surveys, using Machine Learning (ML) tools to (1) extract parameters quickly and accurately for samples of billions of galaxies, and (2) use these huge datasets to constrain cosmology and galaxy formation models. For this last objective, I will present a preliminary application to galaxy cluster simulation data.

Short Bio: Professor of general astronomy and galaxy physics at the School of Physics and Astronomy of Sun Yat-sen University (China). His research interests are galaxy dynamics and dark matter, extragalactic astronomy, cosmological simulations, large sky surveys, and machine learning tools applied to astrophysics. He is a member of the Chinese Space Station Telescope (CSST) Science Center for the Guangdong-Hong Kong-Macau Greater Bay Area, where he also leads a group working on machine learning applications to galaxy science and dark matter, including galaxy structural parameters, photometric redshifts, searches for strong gravitational lenses in images and spectra, and dark matter modeling. He is PI of the Fornax VLT Spectroscopic Survey (FVSS) and a core team member of the KiDS survey.

Pavlos Protopapas
Harvard University

Physics-Informed Neural Networks (PINNs): Solving Differential Equations Using Neural Networks

Abstract: In this talk, I will review how we can use neural networks to solve differential equations for initial value and boundary value problems. There are many exciting directions in this line of work, but two main criticisms are the computational complexity compared to traditional methods and the lack of error bounds. I will present transfer learning methods that can be used to speed up computation, and new work on estimating the error bounds of the neural network approach to solving (partial) differential equations.
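
For readers new to PINNs, a minimal sketch of the core idea (my illustration, not the speaker's code): train a small network u(t) whose automatic-differentiation derivative satisfies the ODE and initial condition, here u' = -u with u(0) = 1, whose exact solution is exp(-t).

```python
# Sketch: a tiny physics-informed neural network for u'(t) = -u(t), u(0) = 1.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    t = torch.rand(256, 1, requires_grad=True) * 5.0  # collocation points in [0, 5]
    u = net(t)
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    residual = du + u                                 # enforce u' = -u
    ic = net(torch.zeros(1, 1)) - 1.0                 # enforce u(0) = 1
    loss = (residual**2).mean() + (ic**2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

t_test = torch.tensor([[1.0]])
print(net(t_test).item(), "vs exact", torch.exp(-t_test).item())  # both ≈ 0.3679
```

The transfer-learning speed-ups mentioned in the abstract amount to reusing such a trained network as the starting point for related equations or conditions rather than training from scratch.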

Short Bio: TBA

Pranav Sharma
United Nations International Computing Centre - UNICC

Navigating Global Policy and Diplomacy through Astroinformatics: Insights from Science20

Abstract: In the ever-evolving nexus of global policy and technical innovation, the burgeoning field of astroinformatics emerges as a beacon of collaborative potential. As we stand at the intersection of scientific discovery, policy formulation, and international diplomacy, the Science20 (S20) platform serves as a bridge uniting technical expertise from around the world with policy discourse. Astroinformatics, at its core, encapsulates the convergence of astronomy, data science, and cutting-edge technologies. Within this context, the recent Science20 engagement titled “Astroinformatics for Sustainable Development” stood as a pivotal milestone. This symposium, held virtually on July 6th and 7th, 2023, delved into the global panorama of astroinformatics and its profound impact on policy and diplomacy. In this talk, I will present the policy discourse that emerged from the Science20 platform, highlighting the various views and translatable policy and science diplomacy frameworks.

Short Bio: Pranav Sharma is an astronomer and science historian known for his work on the history of the Indian space program. He curated the Space Museum at the B. M. Birla Science Centre (Hyderabad, India) and led several exhibitions on Indian space history in collaboration with ISRO, CNES, ESA, and the European Union Institute. He was in charge of the history of the Indo-French scientific partnership project supported by the Embassy of France in India. He is the Co-Curator of the History of Data-Driven Astronomy Project. He also serves as Policy and Diplomacy Advisor to the United Nations International Computing Centre, Advisor to the France India Foundation, Scientific Advisor to Arc Ventures, and Member Secretary (Policy, Transdisciplinary Disruptive Science, and Communications) for G20-Science20. He has co-authored the book Essential Astrophysics: Interstellar Medium to Stellar Remnants (CRC Press, 2019).

Ashley Villar
Harvard University

Rapid Inference for Extragalactic Transients in the Era of LSST

Abstract: Soon, the Legacy Survey of Space and Time (LSST) will commence and drive the discovery rate of extragalactic transients to millions per year. Much work has been done to identify these transients, to classify them in real time, and to aid researchers in deciding how to optimize observational resources to characterize these events. Here, I will discuss recent developments in inference (i.e., extracting astrophysical information) for such events. I will focus on the use of emulators to approximate complex physical simulations, and on simulation-based inference techniques to accelerate Bayesian inference.
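
As a toy illustration of simulation-based inference in its simplest, rejection-ABC form (the talk will likely cover far more efficient neural variants), assuming a hypothetical one-parameter transient model:

```python
# Sketch: rejection ABC, the simplest form of simulation-based inference.
import numpy as np

rng = np.random.default_rng(3)

def simulate(theta, t):
    """Toy transient: exponential decay with rate theta, plus noise."""
    return np.exp(-theta * t) + rng.normal(0, 0.05, size=t.shape)

t = np.linspace(0, 5, 30)
observed = simulate(0.8, t)  # pretend this is the real light curve

# Draw parameters from the prior and keep those whose simulations match the data.
prior_draws = rng.uniform(0.1, 2.0, size=20000)
distances = np.array([np.mean((simulate(th, t) - observed) ** 2)
                      for th in prior_draws])
posterior = prior_draws[distances < np.quantile(distances, 0.01)]
print("posterior mean ≈", posterior.mean(), "(true value 0.8)")
```

Replacing `simulate` with an emulator of an expensive physical simulation is precisely what makes this style of inference fast enough for millions of events per year.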

Short Bio: Ashley Villar is Assistant Professor of Astronomy at Harvard University. Previously, she was the inaugural Mercedes Richards Career Development Assistant Professor at the Pennsylvania State University. Her main interests are in using data-driven methods and machine learning to study the eruptions, mergers and explosions of stars. She is especially interested in utilizing multiband light curves to understand the underlying physics of optical transients. 

Adriano Fontana
National Institute for Astrophysics - INAF

From JWST to Euclid: new algorithms  for the study of the first galaxies

Abstract: Space missions like JWST and Euclid have just been launched and are already revolutionising our understanding of many physical phenomena in the local and the distant Universe.

In this talk I will concentrate on the field where the synergy between these two missions is the strongest: the search and study of the evolution of galaxies, starting from the first that appeared shortly after the Big Bang.

After a short introduction of the current status of this research field, I will discuss the challenges in the interpretation and analysis of the data that are obtained with these new facilities.

The large amount of data demands automated procedures for its analysis, and avoiding the subtle systematics that may easily affect or bias the results requires the development of AI and other advanced techniques.

I will briefly describe a few test cases where the adoption of advanced algorithms may improve the reliability and accuracy of the analysis.

Short Bio: Research Director at INAF, president of the LBT Corporation, and a world authority in the field of high-redshift galaxy evolution. He has had leading responsibilities in most current and future survey projects, such as K20, GOODS, CANDELS, VUDS, VANDELS, Frontier Fields, EUCLID, etc. In recent years his group has been proposing innovative AI-based approaches to the study of the high-redshift universe. He currently leads the €70M STILES project, which will drive Italian astronomy in the next generation of observing and computing facilities.

Matthew Graham
California Institute of Technology - Caltech

Developments in Fast Machine Learning for Science 

Abstract: For over two decades, we’ve heard about the challenges that the forever-imminent data tsunami is going to bring to astronomy, and yet we still spend most of our computing lives firmly in the land of CPUs, with occasional forays into the GPU realm. There are, however, now genuine needs across the entire science spectrum for high-throughput, low-latency inferencing driven by real-time responses that our current systems cannot deliver. Novel options in both hardware components and algorithmic methodologies are required to deliver production-ready solutions that can cope. In this talk, I will review current developments in the field of fast machine learning and, in particular, those that are relevant to multimessenger astronomy, which is at the forefront of this work in our domain.

Short Bio: He is Research Professor of Astronomy at the California Institute of Technology and the Project Scientist for the Zwicky Transient Facility (ZTF), the first of a next generation of time-domain sky surveys, producing hundreds of thousands of public transient alerts per night. Previously he has worked on the Catalina Real-time Transient Survey (CRTS), a still unmatched data set in terms of temporal baseline coverage; the NOAO DataLab; the Virtual Observatory; and the Palomar-Quest Digital Sky Survey. He is a member of the NSF-funded HDR Institute for Accelerated AI Algorithms for Data-Driven Discovery (A3D3), which aims to provide real-time AI at scale in high energy physics, multimessenger astronomy, and neuroscience. In this field, his particular interest is in low-latency inferencing from astronomical alert streams with commodity hardware accelerators and in using reinforcement learning to optimize follow-up strategies. His main research interests are the application of machine learning and advanced statistical methodologies to astrophysical problems, particularly the variability of quasars and other stochastic time series.

Michelle Lochner
University of the Western Cape

Enabling New Discoveries with Machine Learning

Abstract: The next generation of telescopes such as the SKA and the Vera C. Rubin Observatory will produce enormous data sets, far too large for traditional analysis techniques. Machine learning has proven invaluable in handling massive data volumes and automating many tasks traditionally done by human scientists. In this talk, I will explore the use of machine learning for automating the discovery and follow-up of interesting astronomical phenomena. I will share an exciting recent MeerKAT discovery made with machine learning and discuss how the human-machine interface will play a critical role in maximising scientific discovery with automated tools.

Short Bio: Senior Lecturer with a joint position between the University of the Western Cape and the South African Radio Astronomy Observatory (formerly SKA South Africa). Her focus is on cosmology and trying to get the best out of combining optical and radio telescopes like the Vera C. Rubin Observatory, in Chile, as well as the Square Kilometre Array and its precursor, MeerKAT, in South Africa. She works on developing new statistical techniques and machine learning techniques with special focus on anomaly detection for scientific discovery. She is also the founder and director of an international mentoring programme for women and gender minorities in physics called the Supernova Foundation.

Michelle Ntampaka
Space Telescope Science Institute

The Importance of Being Interpretable: ML as a Partner in Cosmological Discovery

Abstract: ML could play a crucial role in the next decade of cosmology, leading to transformative discoveries from astronomy’s rich, upcoming survey data.  While ML has historically been touted as a black box that can generate order-of-magnitude improvements at the cost of interpretability, this does not need to be the case – modern techniques are making it possible to develop ML tools that improve results while still being understandable and leading to physical discoveries. In this talk, I will describe understandable models for interpreting cosmological large scale structure. I will show examples of how machine learning can be used, not just as a tool for getting “better” results at the expense of understanding, but as a partner that can point us toward physical discovery.

Short Bio: Dr. Michelle Ntampaka is the Deputy Head of the Data Science Mission Office at Space Telescope Science Institute.  Her first career was in physics education, and she returned to graduate school after a decade of classroom teaching.  She has run multiple teacher-training programs in Africa, teaching Rwandese high school educators how to use inexpensive, commonly available items as lecture demonstrations to enhance student learning.  After completing her Ph.D. at Carnegie Mellon University, she was a member of the inaugural cohort of Harvard Data Science Fellows.  Dr. Ntampaka’s research focuses on ways to use machine learning as a powerful tool for cosmological discovery. 

Giuseppe Riccio
National Institute for Astrophysics - INAF

An advanced infrastructure for scientific and instrumentation data analysis in Astronomy

Abstract: In the last decade, Astronomy has entered the big data era, with the realization of panchromatic surveys, using ground-based and space-borne instruments, characterized by a wide field of view combined with very high spatial resolution, capable of acquiring huge quantities of deep data of exceptional quality.
This poses two needs: (i) integrating advanced data-driven science methodologies for the automatic exploration of huge data archives, exploiting resources and solutions proposed by Astroinformatics; (ii) having efficient short- and long-term monitoring and diagnostics systems to keep the quality of the observations under control, detecting and limiting anomalies and malfunctions and facilitating rapid and effective corrections, in order to guarantee the correct maintenance of all components and the good health of scientific data over time. This is a crucial requirement for space-borne observation systems in particular, both in logistical and economic terms.
We present AIDA (Advanced Infrastructure for Data Analysis), a portable and modular web application designed to provide an efficient and intuitive software framework to support monitoring of data acquisition systems over time, diagnostics, and both scientific and engineering data quality analysis, especially suitable for astronomical instruments. Its versatility makes it possible to extend its functionalities by integrating and customizing monitoring and diagnostics systems, as well as scientific data analysis solutions, including machine/deep learning and data mining techniques and methods.
Thanks to these properties, a specific version of AIDA is already the official monitoring and analysis tool for the ESA Euclid space mission, and another is going to be used for the commissioning of the Vera C. Rubin Observatory.

Short Bio: TBA

Francesco Tacchino
IBM Research Zurich

Quantum computing for natural sciences and machine learning applications

Abstract: Over the last few decades, quantum information processing has emerged as a gateway towards new, powerful approaches to scientific computing. Quantum technologies are nowadays experiencing a rapid development and could lead to effective solutions in different domains including physics, chemistry, and life sciences, as well as optimization, artificial intelligence, and finance. To achieve this goal, noise-resilient quantum algorithms together with error mitigation schemes have been proposed and implemented in hybrid workflows with the aim of improving the synergies between quantum and classical computational platforms. In this talk, I will review the state-of-the-art and recent progress in the field, both in terms of hardware and software, and present a broad spectrum of potential applications, with a focus on natural sciences and machine learning.

Short Bio: TBA

Gennaro Zanfardino
Virtualitics

High-Dimensional Data Visualization and AI-Enhanced Exploratory Data Analysis in a Commercial Software Solution

Abstract: The talk will cover innovative techniques for exploring and analyzing high-dimensional data within the context of a commercial visualization software product. I will start with the challenges of rendering network graphs, scatter plots, and a multitude of other data representations on both desktop and VR platforms, while aiming to offer ergonomic user interaction and visualization.

I will continue by introducing the AI-driven analytical tools that the proposed solution provides for identifying and analyzing complex network structures and data patterns. The presented methods are versatile and capable of automatically extracting insights from diverse multi-dimensional datasets—ranging from categorical and numerical data to natural language data—across a wide spectrum of domains; the versatility comes with inevitable trade-offs that I hope will ignite discussions on how to effectively mitigate these challenges, especially at scale.

Short Bio: He is currently serving as a Senior Machine Learning Contractor at Virtualitics, Inc., a role he has held since 2021, during which time he earned his M.S. degree (summa cum laude) in Computer Science from the University of Salerno. He briefly held a research associate position at the University of Portsmouth in 2019 and is currently pursuing his Ph.D. in the field of digital twins for territory management and disaster response at the University of L’Aquila.

Sponsored by:
