CNRS - Dauphine PSL

13 March 2018: 2 events


  • Séminaires du Pôle 2 : "Optimisation combinatoire, algorithmique"

    From 26 February, 14:00 to 1 April, 15:30 - Henning Fernau

    Title: Self-monitoring approximation algorithms
    Abstract:
    Reduction rules are one of the key techniques for the design of parameterized algorithms. They can be seen as formalizing a certain kind of heuristic approach to solving hard combinatorial problems.
    We propose to use such a strategy in the area of approximation algorithms.
    One of the features that we may gain is a self-monitoring property.
    This means that the algorithm that is run on a given instance $I$ can predict an approximation factor of the solution produced on $I$ that
    is often (according to experiments) far better than the theoretical estimate that is based on a worst-case analysis.
    Bibliography:
    [1] F. N. Abu-Khzam, C. Bazgan, M. Chopin, and H. Fernau. Data reductions and combinatorial bounds for improved approximation algorithms. Journal of Computer and System Sciences, 82:503–520, 2016.
    [2] L. Brankovic and H. Fernau. A novel parameterised approximation algorithm for minimum vertex cover. Theoretical Computer Science, 511:85–108, 2013.

    Location: A407

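The self-monitoring idea in the abstract above can be illustrated with a minimal sketch (this is not the algorithm from [1, 2]; all names are illustrative): pair a greedy vertex-cover heuristic with a maximal-matching lower bound, so that on each instance $I$ the algorithm certifies its own approximation factor, which is often well below the worst-case bound.

```python
from collections import defaultdict

def greedy_cover(edges):
    """Greedy heuristic: repeatedly pick a vertex of maximum remaining degree."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cover = set()
    while any(adj[v] for v in adj):
        v = max(adj, key=lambda x: len(adj[x]))
        cover.add(v)
        for w in list(adj[v]):
            adj[w].discard(v)  # remove covered edges
        adj[v].clear()
    return cover

def matching_lower_bound(edges):
    """A maximal matching M: any vertex cover must contain one endpoint
    per matched edge, so |M| <= OPT."""
    matched, m = set(), 0
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.update((u, v))
            m += 1
    return m

def self_monitoring_cover(edges):
    """Return a cover plus a certified, instance-specific approximation factor."""
    edges = list(edges)
    cover = greedy_cover(edges)
    lb = matching_lower_bound(edges)
    # Since lb <= OPT, len(cover) / lb upper-bounds the true ratio on this instance.
    ratio = len(cover) / lb if lb else 1.0
    return cover, ratio
```

On a star graph, for example, the greedy cover is the single center vertex and the matching bound is 1, so the algorithm certifies a ratio of 1.0 for that instance, i.e. optimality, regardless of the worst-case analysis.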
  • Séminaires du Pôle 3 : "Sciences des données"

    Tuesday 13 March, 14:00-16:00 - Zaineb Chelly, Marie Curie Research Fellow at Aberystwyth University

    Abstract: Over the last decades, the amount of data has grown at an unprecedented rate, leading to the term "Big Data". Big data are characterized by their Volume, Variety, Velocity, and Veracity/Imprecision. Given these 4V characteristics, it has become difficult to quickly extract the most useful information from the huge amount of data at hand, so data (pre-)processing is a necessary first step. Despite the many existing techniques for this task, most state-of-the-art methods require additional information for thresholding and can deal neither with the veracity aspect of big data nor with their computational requirements. This project's overarching aim is to fill these research gaps with an optimized framework for big data pre-processing in certain and imprecise contexts. This talk presents the current progress and insights of this Marie Skłodowska-Curie project, proposing solutions based on Rough Set Theory for data pre-processing and Randomized Search Heuristics for optimization. The project involves expertise provided by internal and external collaborators from academic and non-academic institutions, namely Prof. Lebbah (University of Paris 13), Prof. Shen (University of Aberystwyth), Prof. Tino (University of Birmingham), Prof. Merelo (University of Granada), and an industrial partner from France.

    Title: Optimized Framework based on Rough Set Theory for Big Data Pre-processing in Certain and Imprecise Contexts

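Rough Set Theory, the basis of the pre-processing framework mentioned above, describes a concept through its lower and upper approximations under an indiscernibility relation: objects that share the same values on the chosen attributes are indistinguishable. A minimal sketch of these two approximations (not the project's framework; names are illustrative):

```python
from collections import defaultdict

def rough_approximations(objects, attributes, target):
    """Lower and upper approximations of a target set of objects under the
    indiscernibility relation induced by `attributes`.
    `objects` maps an object id to a dict of attribute values."""
    # Partition objects into equivalence classes: same values on `attributes`.
    classes = defaultdict(set)
    for obj, vals in objects.items():
        key = tuple(vals[a] for a in attributes)
        classes[key].add(obj)
    lower, upper = set(), set()
    for eq in classes.values():
        if eq <= target:
            lower |= eq   # class entirely inside the concept: certainly in it
        if eq & target:
            upper |= eq   # class overlapping the concept: possibly in it
    return lower, upper
```

The gap between the two sets (the boundary region) captures exactly the imprecision the abstract refers to: objects that the available attributes cannot classify with certainty. In feature selection, attribute subsets that shrink this boundary are the informative ones.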