Program & Abstracts:

Speakers & Overview:




Susanne MICHL

Charité, University Medical Center Berlin, GERMANY

Susanne Michl is professor of Medical Ethics and Medical Humanities at the Charité – University Medical Center Berlin. From 2007 to 2017 she was a lecturer in medical history and ethics at several Institutes for Medical History and Ethics in Germany (Tübingen, Greifswald, Göttingen and Mainz). As a certified ethics consultant, she was also a member of several clinical ethics committees. At Greifswald she joined the GANI_MED consortium (Greifswald Approach to Individualized Medicine). Her research interests include the history of pharmacogenetics, social and historical studies of personalized medicine, and clinical ethics.

Abstract – Historical Perspectives of Individualized Medicine

Since the late 1990s, the term “Personalized Medicine” has been used to enable collaborations between different stakeholders in and outside research units. It has since become a flourishing field under various names such as Individualized Medicine, Precision Medicine or 4P Medicine. As a concept, it constitutes an imaginary framework of expectations and claims for a better, patient-centered and more efficient health care system. In my talk I would like to shed light on past framings of individualized treatment and research, the role of pharmacogenetics, and some key figures such as Archibald E. Garrod and Werner Kalow. Instead of tracing the history of Personalized Medicine back to a supposed beginning, I want to consider research concepts centered upon the key categories of “(bio-)chemical individuality” and “human variability” as a cultural framework of visions, expectations and normative claims.

Catherine BOURGAIN

INSERM, Cermes3, Paris, FRANCE

Catherine Bourgain has been a researcher in human genetics and statistics at the Institut National de la Santé et de la Recherche Médicale (INSERM) since 2002, and is a member of the Inserm ethics committee. In September 2012 she joined Cermes3 (Centre de Recherche Médecine, Sciences, Santé, Santé mentale, Société). Her research focuses on the repositioning of scientific, medical, ethical and economic issues induced by high-throughput genomics technologies in the context of the development of personalized medicine. She is particularly interested in the evolution of the links between research and clinical practices. In addition, she is the vice-president of the Fondation Sciences Citoyennes, which aims to bring science and innovation into the field of democratic debate, and which is particularly interested in synthetic biology.

Abstract – Societal and Ethical Challenges of Precision Medicine

Precision medicine has the ambition to promote a new medical practice: medicine made more efficient through the systematic use of detailed descriptions of the molecular identity of individuals. These promises of a medicine of the future, made more precise and adapted to everyone through scientific and technological progress, have important effects. Public policies combining support for certain types of biomedical research and innovation with new drug regulations are being implemented in their name. This concerns in particular the significant public and private investments required to develop the major genome sequencing programs. A variety of clinical practices already fit into this precision medicine category. They all mobilize different technologies to make a diagnosis, decide on a treatment, specify a disease prognosis or predict its occurrence. While genomics is essential, other forms of molecular description are also involved: transcriptomics, proteomics, epigenomics. In practice, the clinical applications of precision medicine differ significantly from the discourses and promises mentioned. Most of them existed before the introduction of the term and have been transformed by the new technological developments. In this presentation we will discuss this gap between the discourses and practices of precision medicine and evaluate the ethical and social issues at stake.


Joseph NADEAU

Pacific Northwest Research Institute, Seattle, USA

Dr. Nadeau is internationally recognized for his work on mouse models of human disease. He has been a pioneer in comparative genomics, genetics and systems studies of mouse models of human disease, with an emphasis on transgenerational epigenetic effects on metabolism and cancer, and most recently on phenotypic ‘noise’ in metabolic systems. His work is contributing to a revolution in the ways that we understand inheritance of phenotypic variation and disease susceptibility. He is currently Principal Scientist at the Pacific Northwest Research Institute (Seattle). He is the former chair and James H. Jewell Professor of Genetics at Case Western Reserve University School of Medicine. He is the founding director of the Mouse Genome Informatics Project and the Mouse Genome Database, founding director of the Ohio GI Cancer Consortium, and founding editor of two research journals, one of which (Systems Biology and Medicine) won the American Publishers Awards for Professional & Scholarly Excellence (PROSE) top award for outstanding scholarly work in all disciplines of the arts and sciences. The Smithsonian Museum deposited in its permanent collection a copy of the MGI-MGD software as an example of Innovation in Information Technology. Among his recognitions are a National Institutes of Health Pioneer Award and election as Fellow of the American Association for the Advancement of Science.

Abstract - The continuum from Mendelian to complex diseases

Precision medicine promises to revolutionize healthcare by using DNA content to personalize diagnosis and management of health and disease. While DNA information may suffice for some simple genetic conditions, the underlying premise is currently unreliable in most circumstances. Each person is a singular story, based on unique inherited factors, environmental exposures and lifestyle choices. For most genetic conditions, modifier genes, gene-gene, gene-environment and gene-age interactions, transgenerational epigenetic inheritance, and ‘noisy’ genes (variance differences without mean effects) complicate reliable mapping of genotype to phenotype. But we may be at a watershed where a deeper understanding of inheritance and systems genetics will emerge from high-throughput, low-cost genome sequencing, phenotyping, genetic engineering and computing. I will focus on evidence for these genetic phenomena that complicate inheritance and discuss the ways that they can be used to improve precision medicine.

Abdel B. HALIM

Biomarkers and Companion Diagnostics Taiho Oncology, NJ, USA

Dr. Abdel Halim is an internationally recognized key opinion leader with 25+ years of experience in different aspects of biomarkers, precision medicine and IVD, from strategic planning to actualization. He is Vice President of Biomarkers and Companion Diagnostics at Taiho Oncology (Otsuka holding). Before Taiho, Dr. Halim held multiple leadership positions in the pharmaceutical and diagnostic industries. He oversaw the development and validation of assays for several hundred biomarkers on different platforms and their application in 200+ Phase I–IV clinical trials and in patient management. Abdel has led 7 CDx programs and has a track record of 5 FDA 510(k) approvals of high-complexity IVDs and 2 drug approvals. He has served on 20+ governmental and public expert panels and advisory boards in the US, Canada and the EU, and on 25+ committees establishing guidelines to promote quality in the clinical laboratory and diagnostic industries. Abdel has 70+ peer-reviewed publications and 100+ presentations, including 40+ invited and keynote speeches at national and international meetings.

Abstract - The Role of Biomarkers in Precision Medicine: Opportunities and challenges

The incredibly high failure rate in the pharmaceutical industry has positioned biomarkers and precision medicine on the front line as optimistic rescuers. Successful development and implementation of biomarker and companion diagnostic strategies can mark the difference between winners and losers in this crowded space. To achieve this ambitious goal, some prerequisites must be fulfilled, principally embracing an effective biomarker strategy as early as possible during drug development and implementing the right processes. This presentation will highlight the following points:
• Where we stand with the initiative two decades after the first CDx approval
• Attributes of robust and successful drug–diagnostic co-development
• Some critical but overlooked challenges facing CDx: a case study demonstration


Olli CARPÉN

Helsinki University, Helsinki, FINLAND

Olli Carpén has been professor of pathology at the University of Helsinki and scientific director of Helsinki Biobank since 2015. He studied medicine at the University of Helsinki and continued his studies at Harvard Medical School. After returning to Finland, he specialized in pathology. From 2004 he served as professor of pathology, and from 2013 as professor of biobank research, at the University of Turku. He has been instrumental in creating the Finnish biobank network and providing expertise within the international biobanking community. He is passionate about implementing biobanking in health care and personalised medicine research. His additional research interest is cancer, especially mechanisms of chemoresistance and the discovery of prognostic and predictive biomarkers.

Abstract - Biobanking in Precision Medicine

Precision medicine will remain an empty concept unless sufficiently large-scale disease-related sample sets are collected and analysed across the world, and the information is fluently translated to the health care system. In particular, longitudinal phenotype information obtained from electronic hospital records, when combined with biological specimens, will provide an essential toolkit for understanding individual variation within disease entities. These databases, in combination with artificial intelligence tools and user interfaces, allow comparison of an individual’s disease profile to a reference group and provide evidence-based predictions of disease outcome and optimal treatments. To achieve this goal, hospital-integrated biobanks connected with comprehensive electronic health records provide an elementary platform. I will describe the Finnish nationwide hospital biobank network’s “consent all comers” approach for bringing tools for early recognition, successful targeted treatments, and effective preventive strategies to a variety of diseases. Finally, I will provide examples of the possibilities, challenges and achievements along our road towards a precision medicine ecosystem.


Nicolas VUILLEUMIER

Geneva University Hospital, Geneva, SWITZERLAND

Nicolas Vuilleumier is a Professor at the Faculty of Medicine UNIGE and Head of the Laboratory Medicine Department at HUG. He obtained a medical degree in 1999 at the University of Geneva. He then completed a specialization in internal medicine and a second in laboratory medicine, and also obtained a doctorate in medicine and a master’s degree in medical biology. After a two-year post-doctoral training at the Karolinska Institute (Sweden) from 2006 to 2008, he returned to Geneva, where he continued his research on the role of humoral autoimmunity in atherogenesis, in parallel with his clinical activities in the Laboratory Medicine Department of HUG, and teaching. More recently, he has been working on the unification of biobanks and biospecimens as part of specialized medicine. His work has been supported by the SNSF since 2011, and he won the Leenaards Prize in 2013. At the hospital level, he held the position of Chief Medical Officer of the HUG Laboratory Medicine Department ad interim from 2013, before being appointed to the position in 2018. He is the acting president of the Swiss Society of Clinical Chemistry.

Abstract - Laboratory Medicine as a Key Driver in Precision Medicine

Since the mid-20th century, laboratory medicine has played an ever-growing role in providing, through biomarker results, critical medical information allowing physicians to assess patients’ diagnosis, prognosis and therapeutic response, as well as producing meaningful preventive or predictive information directly impacting patients’ routine management. Being genuinely biomarker-driven, the precision medicine paradigm, dedicated to improving diagnostic accuracy by considering both patient and disease characteristics to optimize therapeutic response and safety, has the same goal as routine laboratory medicine, which can be summarized as: “get the right results/diagnosis for the right patient, at the right time, to provide the right treatment”. The major difference resides in the volume and complexity of biomarker-derived data that need to be integrated in precision medicine. Among the numerous challenges precision medicine will face, building accurate medical decision tools upon the intrinsic and cumulative inaccuracy and imprecision plaguing any biomarker result could even be perceived as a paradox, especially because biomarker-derived data are believed to be the oil of precision medicine. As “n=1” trials are likely to become the standard of precision medicine studies, defining the amount of change required for a given algorithm to drive a clinical/therapeutic action for a given patient will become even more crucial. Deviations from baseline individual values are likely to become more meaningful than deviations from a population-based reference interval or cut-off points. Clinical integration and viability of such a paradigm shift will require clear-cut operating instructions based upon knowledge of the intrinsic total variability of these algorithms.
This presentation will show how laboratory medicine’s expertise in quantifying and minimizing biomarker result inaccuracy through continuous operational optimization (process standardization/harmonization, automation/consolidation, quality assurance policies) could be a key structuring player in transforming raw biomarker-derived data into high-quality fuel for the rocket of precision medicine, thereby maximizing the chances of turning the precision medicine initiative into a successful moon shot.
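The idea that a change from an individual's own baseline can be more informative than a population cut-off has a classical quantitative counterpart in laboratory medicine: the reference change value (RCV), which combines analytical and within-subject biological variation. A minimal sketch follows; the CV figures used below are illustrative assumptions, not values from the talk.

```python
import math

def reference_change_value(cv_analytical, cv_intraindividual, z=1.96):
    """Reference change value (RCV), in percent: the smallest difference
    between two serial results from the same patient that exceeds combined
    analytical and within-subject biological variation at the chosen
    confidence level (z = 1.96 ~ 95%, two-sided)."""
    return math.sqrt(2) * z * math.sqrt(cv_analytical**2 + cv_intraindividual**2)

# Illustrative CVs in percent (assumed values for demonstration only).
rcv = reference_change_value(cv_analytical=3.0, cv_intraindividual=6.0)
print(round(rcv, 1))  # ~18.6: a serial change larger than ~18.6% is significant
```

Any smaller fluctuation between two serial results is indistinguishable from measurement and biological noise, which is exactly the "intrinsic total variability" the abstract argues decision algorithms must account for.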


Jürgen BROCKMÖLLER

Göttingen University, Göttingen, GERMANY

Jürgen Brockmöller was born in 1958 in Bonn, Germany. After his medical studies and licensure as a physician, he worked from 1993 to 1997 as a PhD student at the Max Planck Institute of Molecular Genetics in Berlin on the structure of ribosomal proteins. From 1987 to 1993, he was a postdoctoral fellow at the Institute of Clinical Pharmacology at the University Medical Center Benjamin Franklin in Berlin. He is board-certified in Clinical Pharmacology, and his research has focused on pharmacogenetics and pharmacogenomics with emphasis on pharmacokinetics, drug transport and drug metabolism. From 1993 to 2000, Dr. Brockmöller was a senior lecturer at the University Medical Center Charité in Berlin, and since 2000 he has been director of the Institute of Clinical Pharmacology at the Georg August University Göttingen, Germany. He is chairman of the Ethics Committee at the Medical Faculty of the Georg-August-University Göttingen. From 2005 to 2015, he chaired a postgraduate training and research initiative on pharmacogenetics in oncology supported by the German Research Foundation. He has served as a scientific advisor and as a member of scientific advisory boards and safety committees for several pharmaceutical companies. The department of clinical pharmacology under his directorship is approved for postgraduate training in clinical pharmacology. Dr. Brockmöller has contributed to more than 260 peer-reviewed scientific papers in the fields of molecular genetic epidemiology, pharmacoepidemiology, molecular and functional genetics, and molecular and clinical studies on genetically polymorphic drug membrane transport and drug metabolism. The present research priority of the Institute of Clinical Pharmacology of the University Medicine Göttingen is functional genomics, with a focus on drug membrane transport and drug biotransformation.
With both our scientific projects and our daily activities, we want to contribute to the clinical application of pharmacogenetic diagnostics in individualized medicine.

Abstract - The Role of Drug Monitoring in Precision Medicine

Therapeutic drug monitoring (TDM) is one of the oldest approaches in precision medicine. This presentation aims to give an overview of the achievements, limitations and future perspectives of TDM. What is TDM? In its narrow definition, it is drug concentration monitoring (TDcM) in body fluids, with therapy (drug dose) adjustments made accordingly. In its broad definition, it is any therapeutic drug effect monitoring (TDeM), including all indicators of therapeutic and adverse drug effects to be considered for an individually optimized drug therapy. TDcM started to become part of practical medicine around 1970. As illustrated with immunosuppressant drugs, anti-infective drugs and psychotropic drugs, it is immediately evident that TDcM can identify massive overdosing or underdosing, for instance due to dosing errors, drug-drug interactions, genomic variation, or noncompliance. However, TDcM is apparently of only limited value in predicting nonresponse to drug treatment. For instance, roughly 50% of patients do not sufficiently respond to psychotropic drugs despite drug blood concentrations mostly within the ranges considered to be effective. Reasons behind this may include still not fully understood disease heterogeneity and wide variation in the target pathways. One additional reason for these limitations of TDcM may be that we measure in the wrong place. For instance, immunosuppressant drugs should be measured within the immune cells, antibiotics at the centre of tissue infections, and psychotropics within the brain. In our study on personalized immunosuppression, we compared intracellular TDcM with conventional TDcM in 160 patients after liver transplantation. Disappointingly, both intracellular and conventional TDcM were not very good at predicting rejections (therapy failure) and infections (adverse drug effects).
The situation is similar in other areas of TDcM: it is apparently difficult to prove the value of a basically very convincing concept in clinical trials. This is due to different factors, including statistical power issues and complex study design issues, among them the feasibility of frequent enough monitoring in long-term drug treatments. In our study on personalized immunosuppression we also investigated graft-derived cell-free DNA as a noninvasive early marker of rejection and graft damage. Cell-free DNA-based biomarkers may indeed be superior to conventional markers in organ transplantation, but even greater are the results and expectations for cell-free DNA-based biomarkers for drug monitoring in cancer therapy. One of the big hopes and reasons behind TDM is the burden from adverse drug effects. Death from adverse drug reactions may still rank in sixth place among all causes of death, following heart disease, cancer, stroke, lung disease and accidents. Apparently, TDeM would be the most appropriate approach to reduce that burden. Approaches include clinical biochemistry and hematology monitoring; blood pressure, heart rate and ECG monitoring; and even monitoring for disturbances in coordination and cognitive abilities. Although these simple biomarkers of adverse drug effects are generally available (many of them at a low to negligible price), people die from adverse drug effects because physicians sometimes simply forget to look at and consider these markers. Several such markers are already available in smartwatches, and in many types of diseases this will become the drug monitoring of the future. For both TDeM and TDcM, any presentation has to include thinking about the appropriate informatics infrastructure and quantitative pharmacology approaches.
The most ingenious principles of TDcM-based dose adjustment were already developed by Lewis Sheiner and colleagues 40 years ago, but most TDcM is still applied and interpreted in a very conventional fashion. The increasing availability of hospital information systems may indeed bring many new perspectives for TDcM and TDeM.
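The "very conventional fashion" of TDcM interpretation mentioned above can be sketched in a few lines: under linear (dose-proportional) pharmacokinetics, steady-state concentration scales with dose, so the dose is rescaled toward the target concentration. This is the naive rule, not the Bayesian forecasting approach pioneered by Sheiner and colleagues, and all numbers below are hypothetical.

```python
def adjust_dose(current_dose_mg, measured_conc, target_conc):
    """Simplest TDcM dose adjustment: assuming linear pharmacokinetics,
    steady-state concentration is proportional to dose, so the new dose
    is the old dose rescaled by target/measured concentration."""
    if measured_conc <= 0:
        raise ValueError("measured concentration must be positive")
    return current_dose_mg * target_conc / measured_conc

# Hypothetical example: 200 mg/day gives a trough of 4 mg/L; target is 6 mg/L.
print(adjust_dose(200, measured_conc=4.0, target_conc=6.0))  # 300.0 (mg/day)
```

Bayesian approaches improve on this by weighing the measured concentration against population pharmacokinetic priors, which is why sparse or noisy samples do not produce the wild dose swings this naive rule would suggest.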

Abdellah TEBANI

Science for Life Laboratory, KTH, Stockholm, SWEDEN
Rouen University, Rouen, FRANCE

Abdellah TEBANI, PharmD, PhD, trained as a clinical chemist with a fellowship in Inborn Errors of Metabolism at Robert Debré Hospital (APHP), earning an advanced degree in Medical Biology from Paris-Sud University, France. Meanwhile, he obtained a Master's degree in Analytical Development Strategies and Chemometrics from the School of Pharmacy, Paris-Sud University, and then a PhD in Medical Sciences (clinical omics) from Normandie University. As a clinical chemist at the Department of Metabolic Biochemistry at Rouen University Hospital, he developed and implemented high-throughput metabolomics strategies for the diagnosis of Inborn Errors of Metabolism using machine learning techniques and mass spectrometry along with other omics. As an instructor in Medical Biochemistry at Rouen School of Medicine, he is also actively involved in education, outreach and teaching. Combining his medical background and his passion for data analytics, he focuses on designing and delivering courses to undergraduate students and medical residents covering biochemistry, clinical chemistry, chemometrics, data analytics, systems biology and omics-based tools in research and clinical settings. He is now a postdoctoral fellow at SciLifeLab in Stockholm (Sweden), working on integrative omics and Precision Medicine. Abdellah TEBANI is passionate about innovations in artificial intelligence technologies, digital health and data sciences, and defines himself as a Precision Medicine enthusiast.

Abstract - Data Sciences and Predictive Analytics in Precision Medicine

Everyone likes the idea of gazing into a crystal ball to learn what will happen in the future. The history of medicine in the Middle Ages teaches us that healthcare was once centered on mystical seers who delivered medical advice. With the increase in health data, health professionals have new kinds of technology to collect, analyze, and use health information. Modern science has, luckily, replaced the crystal ball. Science can look into the future only to the extent that similar events have happened in the past. Gaining new insights from old data requires the complicated analysis of many interacting factors in medicine and health care, far beyond human cognitive abilities, but current computers and the large amount of available data (big data) can do it almost effortlessly. Data science is an interdisciplinary field about processes and systems to extract knowledge or insights from data in various forms, either structured or unstructured, using computing, data mining, statistics, machine learning or other artificial intelligence techniques. It involves developing techniques to efficiently process and analyze data to produce summative results that can then be used to improve health outcomes. The purpose of data analysis is no longer simply answering existing questions, but also unveiling novel ones and generating new hypotheses. Predictive analytics is the area of data mining concerned with forecasting probabilities and trends, enabled by high-quality labeled big data, enhanced computing power and extended storage capacities. These data-driven innovations deeply impact clinicians, predominantly through the automation of pattern-based tasks such as rapid, accurate image interpretation. Improving hospital workflows has the potential to enhance safety by reducing medical errors and optimizing clinical pathways for better quality and cost effectiveness. These technologies may also allow patients to participate better in promoting their own health.
The real-world clinical implementation of these technologies has not yet become mainstream, and more prospective clinical validation through randomized clinical trials is needed. Limitations including the talent gap, data standardization, privacy, safety and, to some extent, a lack of transparency in the algorithms are the key practical issues for both patients and health professionals. Clinicians and healthcare providers are thrilled and see the potential that these deep changes can bring to healthcare practice. However, many are still frustrated and concerned that the patient–doctor relationship could be eroded. Thus, literacy in all these data-related aspects should be promoted in medical education, biomedical research, and public health training. This will prepare well-informed and strongly trained professionals who are poised to evolve in a competitive, data-rich healthcare ecosystem. This talk will present the basics of predictive analytics tools and their current and potential applications. It will also dissect major flaws and challenges for effective implementation.
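As a concrete, minimal instance of the predictive-analytics models discussed above, the toy sketch below trains a logistic-regression classifier by gradient descent on hypothetical one-feature "risk" data. It is pure Python with no clinical meaning; real clinical models use validated pipelines, many features, and proper train/test separation.

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Minimal logistic-regression trainer (stochastic gradient descent),
    a toy stand-in for predictive-analytics model fitting."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            err = p - yi                     # gradient of log-loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy "risk" data: label 1 when the single feature exceeds ~0.5.
X = [[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(X, y)
print(predict(w, b, [0.9]) > 0.5, predict(w, b, [0.1]) < 0.5)  # True True
```

The "lack of transparency" concern raised above is visible even here: the fitted weights, not an explicit rule, carry the decision logic, and explaining them to a clinician is a separate task from fitting them.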

Philippe LAMBIN

Maastricht University, Maastricht, NETHERLANDS

Philippe Lambin is a clinician, radiation oncologist, ERC Advanced and ERC PoC grant laureate, co-inventor of Radiomics and Distributed Learning, and a pioneer in translational research with a focus on tumour hypoxia and immunotherapy. He has a PhD in Molecular Radiation Biology and is Professor at the University of Maastricht (Radiation Oncology). He is co-author of more than 458 peer-reviewed scientific papers (Hirsch index: 85, Google Scholar), co-inventor of more than 18 patents (filed or submitted) and (co-)promoter of more than 50 completed PhDs. He is currently involved in several successful European grants (e.g. Metoxia, Eureca, Artforce, Radiate, Quick-Concept, Requite, BD2decide, Predict), including ImmunoSABR (€6 million, a multicentric randomized trial in metastatic lung cancer comparing radiotherapy with or without immunotherapy with an immunocytokine). More recently, his interests have been directed towards hypoxia targeting and hypoxia-activated prodrugs during immunotherapy. He is one of the inventors of “Distributed Learning”, a revolutionary big data approach for health care, and of “Radiomics”.

Abstract - Radiomics: Bridging Medical Imaging and Precision Medicine

The rise of radiomics, the high-throughput mining of quantitative image features from (standard-of-care) medical imaging for knowledge extraction and application within clinical decision support systems to improve diagnostic, prognostic, and predictive accuracy, has significant and substantial implications for the medical community (1, 2, 5). Radiomic analysis exploits sophisticated image analysis tools and the exponential growth of medical imaging data to develop and validate powerful image-based signatures/models. We will describe the process of radiomics, its pitfalls, challenges, opportunities, and its capacity to improve clinical decision making (presently primarily in the care of patients with cancer; however, all imaged patients may benefit from quantitative radiology) (5, 8). The field of radiomics is emerging rapidly; however, it lacks standardized evaluation of both the scientific integrity and the clinical significance of the numerous published radiomics investigations resulting from this growth. There is a clear and present need for rigorous evaluation criteria and reporting guidelines in order for radiomics to mature as a discipline. Certain authors have proposed that radiomics could be used as a “virtual biopsy”. This could be the case in the sense that several reports have demonstrated that biological features of tumours such as EGFR mutations, HPV status and even hypoxia can be quantified by radiomics (6). There are, however, two main differences: a) radiomics is based on the whole tumour, in contrast to a biopsy most often taken randomly in a heterogeneous tumour, and b) radiomics values are continuous variables, in contrast to molecular biology assays, which are often dichotomized (e.g. mt vs wt). Interestingly, certain radiomics signatures, e.g. a proliferation radiomics signature, work as well with cone-beam CT, which opens the field of “4D-Radiomics” (4, 7).
The next step is, however, a “totalomics” approach in which radiomics signatures will be used in a multifactorial Decision Support System for both diagnostic and theragnostic questions (3, 9, 10).

1. Lambin P, et al. Radiomics: extracting more information from medical images using advanced feature analysis. Eur J Cancer. 2012;48:441-6.
2. Aerts H, et al. Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nat Commun. 2014;5.
3. Lambin P, et al. Radiomics: the bridge between medical imaging and personalized medicine. Nat Rev Clin Oncol. 2017 Dec;14(12):749-762.
4. van Timmeren JE, et al. Survival prediction of non-small cell lung cancer patients using radiomics analyses of cone-beam CT images. Radiother Oncol. 2017 May 12.
5. Lambin P, et al. Decision support systems for personalized and participative radiation oncology. Adv Drug Deliv Rev. 2016 Jan 14.
6. Grossmann P, et al. Defining the biological basis of radiomic phenotypes in lung cancer. Elife. 2017 Jul 21;6:e23421. doi: 10.7554/eLife.23421.
7. van Timmeren JE, Leijenaar RTH, van Elmpt W, Reymen B, Lambin P. Feature selection methodology for longitudinal cone-beam CT radiomics. Acta Oncol. 2017 Aug 22:1-7. doi: 10.1080/0284186X.2017.1350285.
8. Larue RTHM, Van De Voorde L, van Timmeren JE, Leijenaar RTH, Berbée M, Sosef MN, Schreurs WMJ, van Elmpt W, Lambin P. 4DCT imaging to assess radiomics feature stability: An investigation for thoracic cancers. Radiother Oncol. 2017 Aug 7. pii: S0167-8140(17)32482-9. doi: 10.1016/j.
9. Lambin P, et al. Predicting outcomes in radiation oncology: multifactorial decision support systems. Nat Rev Clin Oncol. 2013 Jan;10(1):27-40.
10. van Wijk Y, Vanneste BGL, Jochems A, Walsh S, Oberije CJ, Pinkawa M, Ramaekers BLT, Vega A, Lambin P. Development of an isotoxic decision support system integrating genetic markers of toxicity for the implantation of a rectum spacer. Acta Oncol. 2018 Jun 28:1-7.
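To make the notion of "quantitative image features" in the radiomics abstract more concrete, the sketch below computes a few first-order features (mean, standard deviation, histogram entropy) from a hypothetical list of ROI voxel intensities. Real radiomics pipelines add shape and texture features computed over 3D images with standardized, validated toolkits; this is only the simplest feature class.

```python
import math
from collections import Counter

def first_order_features(intensities, n_bins=8):
    """Toy first-order radiomics features from a tumour ROI's voxel
    intensities: mean, standard deviation and histogram entropy (bits).
    Illustrative only -- no image I/O, segmentation or standardization."""
    n = len(intensities)
    mean = sum(intensities) / n
    var = sum((v - mean) ** 2 for v in intensities) / n
    lo, hi = min(intensities), max(intensities)
    width = (hi - lo) / n_bins or 1.0          # guard against flat ROIs
    bins = Counter(min(int((v - lo) / width), n_bins - 1) for v in intensities)
    entropy = -sum((c / n) * math.log2(c / n) for c in bins.values())
    return {"mean": mean, "std": math.sqrt(var), "entropy": entropy}

# Hypothetical voxel intensities from three sub-regions of one tumour.
feats = first_order_features([10, 12, 11, 40, 42, 41, 90, 95])
```

Because features like these are continuous and computed over the whole tumour, they illustrate both contrasts the abstract draws with a biopsy: no sampling of a single random sub-region, and no forced dichotomization.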


Peter LAUSSEN

Hospital for Sick Children – University of Toronto – Toronto, CANADA

Peter Laussen is Chief of the Department of Critical Care Medicine at the Hospital for Sick Children (SickKids), Professor in Anaesthesia at the University of Toronto, and holds the David and Stacey Cynamon Chair in Critical Care Medicine. He is a Senior Associate Scientist at the Research Institute and co-chair of the Artificial Intelligence in Medicine steering committee at SickKids. He graduated from Melbourne University Medical School, Australia, in 1980, and completed fellowships in anesthesia and pediatric critical care medicine before moving to Boston Children’s Hospital, Massachusetts, in 1992. In 2002 he was appointed Chief of the Division of Cardiovascular Critical Care and to the Dolly D. Hansen Chair in Pediatric Anesthesia at Boston Children’s Hospital, and in 2008 he was appointed Professor of Anaesthesia at Harvard Medical School. In 2012, Dr. Laussen accepted his current position at SickKids. He has extensive experience with clinical research and educational activities in cardiac critical care and anaesthesia. Over the past decade he has focused on systems and human engineering applied to critical care, and the use of high- and low-frequency physiologic signals for predictive modelling in pediatric critical care; he is the lead developer of an innovative web-based data visualization platform called T3 (Tracking Trajectory Trigger tool). In 2006, Dr. Laussen co-founded the international “Risky Business” risk management, safety and quality conferences, which bring together leaders in high-risk industries to discuss ways of improving the safety and quality of health care.

Abstract - The Patient’s Journey in Intensive Care

Critical care units are dynamic, complex and resource-intense environments, where humans directly interface with technology. Decisions are time-sensitive and often occur in circumstances where clinicians need to quickly aggregate and integrate data from multiple sources. In turn there can be variability in practice and uncertainty in management, whether related to the patient's disease and diagnosis, the patient's physiologic state and unpredictable responses to management, or the ability of clinical teams to interpret data correctly. In addition, there are numerous competing pressures inherent to any ICU environment around work flow, communication, distraction and resource utilization.
The continuous physiologic data streaming from devices and monitors are time-series data, data in motion. They are characterized by huge volume and velocity, and the signals are variable in frequency and subject to considerable artifact. These data are unquestionably essential for supporting decisions at the bedside, but when it comes to modelling for the purpose of real-time (bedside) predictive analyses, the ability to capture, label and integrate them is limited. The data are messy: hard to manage, store and retrieve. The problem of volume has placed constraints on data storage and retrieval, and there are bottlenecks as the data are input/output (I/O) bound. In our department, we have built and deployed a bespoke data management platform to facilitate collection, file indexing, compression and decompression. As an indication of the scale of continuous physiologic data that is measured and can be collected, and depending on the complexity of disease and treatment, there are routinely 500+ signals per hour and between 70 and 150 million physiologic data points per day generated in the 42 beds of our paediatric intensive care unit. Over 700,000 hours and 2 trillion data points from over 4,000 patients are currently stored in the database.
Data Analysis
There are broad categories for using these data in critical care, including:
1) Describing the physiologic phenotype and individualized physiology according to age, disease, treatment and time;
2) Understanding the physiologic state, such as risk for low oxygen delivery, hemodynamic instability, effectiveness of mechanical ventilation, risk for neurologic injury, and metabolic state;
3) Developing early warning systems that assess the risk for an event within a physiologic state by recognizing patterns in the data, such as risk for sepsis or cardiac arrest;
4) Tracking the trajectory of a patient in response to treatment protocols, and directing specific interventions according to a change in the expected trajectory;
5) Developing decision support and business analytic tools to ensure the efficiency of care against validated outcome metrics, such as length of stay and risk for readmission;
6) Enhancing signal processing and waveform analysis, for example to diagnose changes in heart rhythm and uncover previously hidden signals that may be embedded within composite waveforms; and
7) Developing new insights into the underlying physiology, teasing out sub-populations of patients that may respond to a particular treatment, and developing prognostic and predictive enrichment strategies that can lead to individualized and precise critical care management.
The promise and problem with physiologic data
There is a disconnect between how we make decisions and the trust we might otherwise have in a model or algorithm. Models can be opaque, containing weighted features, components and simplifications, with blind spots related to the inputs and to the priorities and judgement of their creators. They are mathematical outputs that may not take changing conditions and behaviours into consideration. The risk, therefore, is of incorrect assumptions and spurious correlations, reinforced and contaminated by bias. Models need feedback on mistakes and on results and outcomes; they need to be explainable, scalable and context-sensitive.
It is nevertheless possible to utilize the data generated by continuous physiologic signals at the bedside to help us understand physiologic states and phenotypes in critical care. At the same time, it is important to understand that using big physiologic data to determine these states will not replace the clinician at the bedside; rather, it will augment our decision making and improve communication and information transfer.
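As a purely illustrative sketch of the pattern-recognition idea behind such early warning systems (this is not the T3 platform or any specific algorithm described above; the window size and threshold are arbitrary assumptions), a rolling-baseline deviation check over a streaming vital sign might look like this:

```python
from collections import deque

def rolling_zscore_alerts(samples, window=5, threshold=2.5):
    """Flag indices where a sample deviates strongly from its recent
    rolling baseline. Purely illustrative, not a clinical tool."""
    buf = deque(maxlen=window)  # the most recent `window` samples
    alerts = []
    for i, x in enumerate(samples):
        if len(buf) == window:
            mean = sum(buf) / window
            std = (sum((v - mean) ** 2 for v in buf) / window) ** 0.5
            # Deviation is measured against the trailing baseline only
            if std > 0 and abs(x - mean) / std > threshold:
                alerts.append(i)
        buf.append(x)
    return alerts

# A stable heart-rate stream followed by an abrupt drop:
hr = [120, 121, 119, 120, 120, 121, 120, 119, 120, 120, 60]
print(rolling_zscore_alerts(hr))  # → [10]
```

Real bedside systems must additionally cope with artifact rejection, multi-signal fusion and clinician feedback, which is exactly where the modelling challenges above arise.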


Stefan SCHULZ

Medical University of Graz, AUSTRIA, Averbis GmbH, GERMANY

Stefan Schulz holds a professorship of Medical Informatics at the Medical University of Graz, Austria. In 1990, he graduated from medical school at the University of Heidelberg, Germany, where he also earned a doctorate in theoretical medicine. After a short period of clinical practice, he specialized in biomedical and health informatics at the University of Freiburg, Germany. Since then, his R&D activities have focused on the use of semantics, knowledge representation and human language technologies. His main goal has been to contribute to the interoperability and reusability of clinical and biomedical research data, which are unstructured to a large extent. This effort is mirrored by (co-)authorship of more than 250 peer-reviewed publications, several awards, and participation in numerous national and international research consortia, e.g. currently the EU project PRECISE4Q and the COST action Gene Regulation Knowledge Commons.
Stefan Schulz has contributed to the development of clinical terminology standards such as WHO classifications and SNOMED CT, with a focus on refining their ontological foundations. He is currently leading a project on semantic clinical data enhancement for biomarker research at the Austrian Biomarker Research Centre CBmed, in which he extended the notion of “digital biomarkers” to clinical data extracts. Besides his academic position, he holds a position as director of research at the German software company Averbis GmbH, where he is involved in three large German Medical Informatics Initiative consortia dedicated to clinical data integration.

Abstract - Clinical Informatics Challenges in Precision Medicine

Information technology has transformed our digital footprints into an important commodity. Non-linear machine learning approaches are increasingly addressing the challenge presented by unstructured, noisy and incomplete data. There is evidence that data from social network posts and biosensors can be used for health-related predictions. However, we also leave digital footprints in electronic health records (EHRs), which constitute a main topic of interest in clinical informatics. Precision medicine means tailoring health care assets and services to the individual patient. This frequently requires stratifying patients by phenotypic features and lifestyle characteristics, as well as by features of health care processes such as medication or therapies. Harvesting meaningful and interoperable information from raw texts in EHRs is often unavoidable, which poses challenges to human language engineering tools and resources. Clinical language is overly compact and context-dependent, often misspelt or mistyped, and uses idiosyncratic expressions that vary between languages, dialects, professional groups and clinical disciplines. Thus, successful information extraction requires not only robust and context-sensitive methods but also sources of real clinical language, such as custom dictionaries and annotated corpora. Many of these resources are scarce for languages other than English. The target representation matters as well: ideally, it should follow standards and clearly distinguish between the meaning of a term and its context, e.g. negation, uncertainty or temporal reference. A current effort is described in which information extraction is used in the context of the Austrian biomarker research centre CBmed.
Text analytics software processes large amounts of discharge summaries and annotates them with codes of the clinical terminology standard SNOMED CT, which not only allows representing subtle distinctions in meaning but also aggregations along different semantic axes. Manual creation of dictionaries constitutes a major bottleneck, but it can increasingly be supported by deep learning approaches, e.g. for the resolution of ambiguous short forms.
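To make the negation/uncertainty context distinction concrete, a minimal rule-based sketch in the spirit of trigger-phrase algorithms can be given. The trigger lists below are tiny, hand-picked assumptions for illustration only; production systems rely on curated, language-specific lexicons, scope windows and terminology bindings such as SNOMED CT:

```python
# Hypothetical trigger lists -- real clinical NLP systems use far
# larger curated lexicons and constrain the scope of each trigger.
NEGATION_TRIGGERS = ("no ", "denies ", "without ", "ruled out ")
UNCERTAINTY_TRIGGERS = ("possible ", "suspected ", "cannot exclude ")

def classify_mention(sentence, term):
    """Classify a term mention as 'negated', 'uncertain' or 'affirmed',
    based on trigger phrases preceding it in the sentence."""
    s = sentence.lower()
    idx = s.find(term.lower())
    if idx == -1:
        return None  # term not mentioned at all
    prefix = s[:idx]
    if any(t in prefix for t in NEGATION_TRIGGERS):
        return "negated"
    if any(t in prefix for t in UNCERTAINTY_TRIGGERS):
        return "uncertain"
    return "affirmed"

print(classify_mention("Patient denies chest pain.", "chest pain"))    # → negated
print(classify_mention("Possible pneumonia on imaging.", "pneumonia"))  # → uncertain
```

Even this toy version shows why the target representation must keep meaning and context separate: the same SNOMED CT code can be attached to a finding that is present, absent or merely suspected.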


Ron WEVERS

Radboud University Medical Centre, Nijmegen, NETHERLANDS

Prof Ron Wevers holds a chair in Clinical Chemistry at the Radboud University Medical Centre in Nijmegen in the Netherlands. The focus of this chair is on the clinical chemistry of inborn errors of metabolism and, more specifically, on neurometabolism. He was a member of the Dutch Health Council (2003-2015), a member of the Praesidium committee of the Dutch Health Council (2013-2015) and a board member of the Nijmegen Clinical Genetics Centre (2005-2017). Prof Wevers was trained as a clinical chemist in Utrecht and has worked in Nijmegen since 1981. In 2001 he obtained an additional professional registration as a clinical biochemical geneticist. Until his retirement, Prof Wevers was head of the Translational Metabolic Laboratory (TML) in the Department of Laboratory Medicine in Nijmegen. The TML laboratory is a fully accredited mixed-function laboratory combining patient care and translational research. The laboratory has approximately 140 FTE, including 15 staff members, among whom are six academic chairs. Each year 150-200 peer-reviewed scientific papers come from this laboratory. Around 480 scientific papers (co)authored by Prof Wevers have appeared in peer-reviewed journals (source: PubMed); these papers have been cited 18,929 times. Prof Wevers has given 180 international lectures. His H-index is 72 (source: Google Scholar). Prof Wevers retired in August 2017.

Abstract - The genome, the metabolome and beyond

Traditionally, the diagnosis of metabolic disorders relies on targeted analyses of metabolite groups, such as amino acids and organic acids, usually in body fluids. Although more or less noninvasive, this approach delivers only a narrow view of the repertoire of metabolites and pathways present in cells and tissues. As such, it is neither comprehensive nor very cost-effective. The study of metabolism at the «-omics» level, so-called untargeted metabolomics, performs better and holds the promise of a profound impact on medical practice. The metabolome is largely determined by what has been encoded by the genome, modified by diet, environmental factors and the gut microbiome. The metabolic profile provides a quantifiable readout of the metabolic pathways. At the center of metabolomics is the concept that a person’s metabolic state provides a close representation of that individual’s overall health status. Genetic defects derange the normal physiological state and often lead to disease. A substantial part of our genome is devoted to maintaining the required metabolic pathways in our body. Metabolic diseases can present at any age and in many different clinical forms, making it difficult for health care professionals to find the correct diagnosis. Here untargeted metabolomics can help. Modern analytical techniques like NMR and LC-MS provide a holistic overview of the metabolome and can pinpoint a molecular derangement of human metabolism with unprecedented precision and accuracy. This will improve further when the intracellular metabolome (I like to call this the “forgotten metabolome”) is taken into account by applying untargeted metabolomics to cell homogenates. Untargeted metabolomics techniques will teach us as yet unknown biomarkers for many known IEMs, thus improving our understanding of their pathophysiology and providing targets for improving therapeutic strategies.
Exploring the human body fluid metabolome has already shown how little we know. Surprisingly, significant numbers of unknown “features” pop up, deriving from molecular species that cannot yet be annotated with the help of available public databases. The lecture will show approaches to unravelling the molecular identity of the metabolites behind such “unknowns”, also called Features of Unknown Significance (FUS). This is a major challenge that calls for international collaboration between metabolic labs. Answers can be provided by using the structural information from NMR spectra, MSn spectra and infrared spectra; in Nijmegen this is done through the collaboration between the Felix laboratory and the Translational Metabolic Laboratory. Solving the identity of as yet unknown disease biomarkers will provide a deeper understanding of the disease mechanism in individual patients. We have coined this novel untargeted metabolomics approach to IEMs Next Generation Metabolite Screening (NGMS). Using metabolomics in parallel with whole exome sequencing will enable a higher diagnostic yield. Glycomics and lipidomics techniques can add further essential diagnostic information.

Jean-Louis GUEANT

Lorraine University, Nancy, FRANCE

Jean-Louis Guéant, MD, DSc, is Professor of Medical Biochemistry-Molecular Biology at the University of Lorraine, Faculty of Medicine, Director of UMR-S Inserm 1256, and Head of the Department of Biochemistry-Molecular Biology-Nutrition at the University Hospital of Nancy. He followed a dual MD-PhD track at the Faculty of Medicine and the Faculty of Sciences of the University Henri Poincaré of Nancy, qualifying in Hepato-Gastroenterology in 1984 and earning the degree of Doctor of Science in Nutrition in 1986. He was recruited as Maître de Conférences – Praticien Hospitalier in 1987 and appointed Professor of Universities – Hospital Practitioner in 1990. Prior to this appointment, he held several postdoctoral positions at the State University of New York and the Minerva Institute in Helsinki (with R. Gräsbeck as mentor). From 1988 he directed one of the three research teams of Inserm Unit 308 (director JP Nicolas) and obtained the creation of an EP-CNRS team in 1996, which became EMI-Inserm in 1999, then UMR-S Inserm 724 in 2002, UMR-S Inserm 954 in 2009 and UMR-S Inserm 1256. This Inserm Unit brings together nearly 70 people, including 3 Inserm researchers and 36 academic investigators in experimental and clinical research, around a research project on the interactions between nutrition, genes and the environment. In this context, he has been principal investigator of many national contracts. In hospital, he contributed to the creation of the Reference Center for Rare Diseases of Metabolism, in which he runs a highly specialized medical consultation on rare metabolic diseases with regional, national and international recruitment of patients.

Abstract - A multi-omic approach reveals a new type of inherited error of vitamin B12 metabolism

Vitamin B12 (B12, cobalamin (Cbl)) is a water-soluble vitamin that requires complex mechanisms for its assimilation, blood transport and intracellular metabolism. In mammalian cells, only two enzymes depend on vitamin B12: L-methylmalonyl-CoA mutase in the mitochondrion and methionine synthase in the cytoplasm. Inherited defects produce a wide spectrum of clinical manifestations, including cardiometabolic decompensation, megaloblastic anemia and neurological manifestations, and a metabolic profile that includes the accumulation of homocysteine (HCy) and/or methylmalonic acid (MMA). More than a dozen genes are involved in the intracellular metabolism of B12, corresponding to several disease groups named cblA to cblJ. The most common of these diseases, called cblC, is recessive, with identical or compound mutations of the two alleles of the MMACHC gene. We found a new type of cblC, which we named epi-cblC, in an infant who died of a severe form of cblC but who paradoxically carried a mutation only in the heterozygous state. By carrying out a genome-wide methylome study, we identified an epimutation on one allele of MMACHC in three generations and in the sperm of the father of the index case. To date, most epimutations reported in humans are somatic and are erased in germ cells. The epimutation turns off the expression of the non-mutated allele of the MMACHC gene. MMACHC belongs to a trio of genes: it is flanked by two genes, CCDC163P and PRDX1, which are transcribed in opposite directions. The secondary epimutation results from a PRDX1 mutation that forces MMACHC antisense transcription and produces an H3K36me3 mark at the common promoter of CCDC163P and MMACHC. We found 7 other cases of epi-cblC in Europe and the USA, and we identified more than forty gene trios with the same configuration across the whole genome.
The 8 cases of epi-cblC illustrate the need to look for an epimutation in gene trios with a similar configuration when patients present typical manifestations of a rare recessive disease despite the presence of only a heterozygous mutation.

Mathias UHLEN

Science for Life Laboratory, KTH Royal Institute of Technology, Stockholm, SWEDEN

Mathias Uhlen is professor at the Royal Institute of Technology (KTH), Stockholm, Sweden. His research is focused on protein science, antibody engineering and precision medicine and ranges from basic research in human and microbial biology to more applied research, including clinical applications in cancer, infectious diseases, cardiovascular diseases, autoimmune diseases and neurobiology. He leads an international effort to systematically map the human proteome and create a Human Protein Atlas using antibodies and various omics technologies. This effort has so far resulted in the Tissue Atlas (2015), showing the distribution of proteins across human tissues and organs; the Cell Atlas (2016), showing the subcellular location of human proteins in single cells; and the Pathology Atlas (2017), showing how cancer patient survival is tied to RNA and protein levels. He is the President of the European Federation of Biotechnology and, from 2010 to 2015, was the founding Director of the Science for Life Laboratory (SciLifeLab), a Swedish national center for molecular bioscience.

The Human Protein Atlas
Implications for human biology, drug development and precision medicine

The Human Protein Atlas (HPA) is a Swedish-based program with the aim of mapping all the human proteins in cells, tissues and organs by integrating various omics technologies, including genomics, transcriptomics, antibody-based imaging, mass spectrometry-based proteomics and systems biology. Version 17 consists of three separate parts, each focusing on a particular aspect of the genome-wide analysis of the human proteins: (1) the Tissue Atlas, showing the distribution of the proteins across all major tissues and organs in the human body; (2) the Cell Atlas, showing the subcellular localization of proteins in single cells; and (3) the new Pathology Atlas, showing the impact of protein levels on the survival of patients with cancer. The Human Protein Atlas program has already contributed to several thousand publications in the field of human biology and disease, and it was recently selected by the organization ELIXIR as a European core resource, due to its fundamental importance for the wider life science community. All the data in the knowledge resource are open access, allowing scientists in both academia and industry to freely explore the human proteome.
Key publications
1. Uhlen et al (2015) Science 347: 1260419
2. Uhlen et al (2016) Mol Systems Biol., 12: 862
3. Uhlen et al (2016) Nature Methods, 13: 823-7
4. Thul et al (2017) Science 356 (6340): eaal3321
5. Uhlen et al (2017) Science 357 (6352): eaan2507


Enzo RANIERI

Head of Biochemical Genetics and Head of the South Australian Screening Centre, Department of Biochemical Genetics, Directorate of Genetics and Molecular Pathology, SA Pathology, Women’s and Children’s Hospital, Adelaide, South Australia, AUSTRALIA

Enzo Ranieri is the Head of Biochemical Genetics within the Directorate of Genetics & Molecular Pathology, SAPathology at the Women’s & Children’s Hospital in Adelaide, South Australia. He holds an associate academic position at the University of Adelaide within the Faculty of Health & Medical Sciences. He obtained postgraduate higher degrees from Flinders University in the School of Medicine, Department of Neurophysiology and from the Faculty of Health and Medical Sciences at the University of Adelaide.
He has acquired over 25 years' experience in newborn screening and biochemical genetics, with certification in Biochemical Genetics as a Fellow of the Human Genetics Society of Australasia (FHGSA), and was appointed a Fellow of the Faculty of Science of the Royal College of Pathologists of Australasia (FFScRCPA). He was appointed to the Board of the International Society of Newborn Screening in 2016 and serves on the committee responsible for the Asia-Pacific region.
He is a member of the Human Genetics Society of Australasia (HGSA) and Australasian Society of Inborn Errors of Metabolism (ASIEM) and has served as a member on numerous committees and subcommittees for both the HGSA & ASIEM including the American Clinical and Laboratory Standards Institute (CLSI). He has served as a national committee member of the Australasian Newborn Screening and Metabolic Diseases and is currently a member of the standing scientific committee on quality assurance of the International Society of Newborn Screening (ISNS).
He has published numerous articles in leading scientific journals, books and reviews and has been an invited keynote speaker at numerous international and national congresses and meetings. He has international expertise in newborn screening for inborn errors of metabolism (IEM) using tandem mass spectrometry (MSMS), and his laboratory is considered a world leader in the field of neonatal screening, being one of the first laboratories in Australia to implement MSMS in routine screening. He was a pioneer of, and instrumental in developing, the two-tier IRT/DNA screening strategy for cystic fibrosis in December 1989. He has a strong interest in teaching and training, having had numerous international scientific and clinical trainees spend time in the laboratory to undertake specialised training in all aspects of MSMS newborn screening for IEM. He also has a strong research interest in metabolomics, and the department is a leader in this emerging field, specifically using MSMS to screen for, characterise and monitor metabolic diseases.
