---
_id: '2057'
abstract:
- lang: eng
  text: 'In the past few years, a lot of attention has been devoted to multimedia
    indexing by fusing multimodal information. Two kinds of fusion schemes are generally
    considered: early fusion and late fusion. We focus on late classifier
    fusion, where one combines the scores of each modality at the decision level.
    To tackle this problem, we investigate MinCq, a recent, elegant, and well-founded
    quadratic program from PAC-Bayesian machine learning theory. MinCq
    looks for the weighted combination, over a set of real-valued functions seen as
    voters, leading to the lowest misclassification rate, while maximizing the voters’
    diversity. We propose an extension of MinCq tailored to multimedia indexing. Our
    method is based on an order-preserving pairwise loss adapted to ranking that allows
    us to improve the Mean Average Precision measure while taking into account the diversity
    of the voters that we want to fuse. We provide evidence that this method is naturally
    adapted to late fusion procedures and confirm the good behavior of our approach
    on the challenging PASCAL VOC’07 benchmark.'
alternative_title:
- LNCS
arxiv: 1
author:
- first_name: Emilie
  full_name: Morvant, Emilie
  id: 4BAC2A72-F248-11E8-B48F-1D18A9856A87
  last_name: Morvant
  orcid: 0000-0002-8301-7240
- first_name: Amaury
  full_name: Habrard, Amaury
  last_name: Habrard
- first_name: Stéphane
  full_name: Ayache, Stéphane
  last_name: Ayache
citation:
  ama: 'Morvant E, Habrard A, Ayache S. Majority vote of diverse classifiers for late
    fusion. In: <i>Lecture Notes in Computer Science (Including Subseries Lecture
    Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)</i>. Vol
    8621. Springer; 2014:153-162. doi:<a href="https://doi.org/10.1007/978-3-662-44415-3_16">10.1007/978-3-662-44415-3_16</a>'
  apa: 'Morvant, E., Habrard, A., &#38; Ayache, S. (2014). Majority vote of diverse
    classifiers for late fusion. In <i>Lecture Notes in Computer Science (including
    subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)</i>
    (Vol. 8621, pp. 153–162). Joensuu, Finland: Springer. <a href="https://doi.org/10.1007/978-3-662-44415-3_16">https://doi.org/10.1007/978-3-662-44415-3_16</a>'
  chicago: Morvant, Emilie, Amaury Habrard, and Stéphane Ayache. “Majority Vote of
    Diverse Classifiers for Late Fusion.” In <i>Lecture Notes in Computer Science
    (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes
    in Bioinformatics)</i>, 8621:153–62. Springer, 2014. <a href="https://doi.org/10.1007/978-3-662-44415-3_16">https://doi.org/10.1007/978-3-662-44415-3_16</a>.
  ieee: E. Morvant, A. Habrard, and S. Ayache, “Majority vote of diverse classifiers
    for late fusion,” in <i>Lecture Notes in Computer Science (including subseries
    Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)</i>,
    Joensuu, Finland, 2014, vol. 8621, pp. 153–162.
  ista: 'Morvant E, Habrard A, Ayache S. 2014. Majority vote of diverse classifiers
    for late fusion. Lecture Notes in Computer Science (including subseries Lecture
    Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). IAPR: International
    Workshop on Structural, Syntactic, and Statistical Pattern Recognition, LNCS,
    vol. 8621, 153–162.'
  mla: Morvant, Emilie, et al. “Majority Vote of Diverse Classifiers for Late Fusion.”
    <i>Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial
    Intelligence and Lecture Notes in Bioinformatics)</i>, vol. 8621, Springer, 2014,
    pp. 153–62, doi:<a href="https://doi.org/10.1007/978-3-662-44415-3_16">10.1007/978-3-662-44415-3_16</a>.
  short: E. Morvant, A. Habrard, S. Ayache, in:, Lecture Notes in Computer Science
    (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes
    in Bioinformatics), Springer, 2014, pp. 153–162.
conference:
  end_date: 2014-08-22
  location: Joensuu, Finland
  name: 'IAPR: International Workshop on Structural, Syntactic, and Statistical Pattern
    Recognition'
  start_date: 2014-08-20
date_created: 2018-12-11T11:55:28Z
date_published: 2014-01-01T00:00:00Z
date_updated: 2021-01-12T06:55:01Z
day: '01'
department:
- _id: ChLa
doi: 10.1007/978-3-662-44415-3_16
ec_funded: 1
external_id:
  arxiv:
  - '1404.7796'
intvolume: '8621'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: http://arxiv.org/abs/1404.7796
month: '01'
oa: 1
oa_version: Preprint
page: 153 - 162
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
  call_identifier: FP7
  grant_number: '308036'
  name: Lifelong Learning of Visual Scene Understanding
publication: Lecture Notes in Computer Science (including subseries Lecture Notes
  in Artificial Intelligence and Lecture Notes in Bioinformatics)
publication_status: published
publisher: Springer
publist_id: '4989'
quality_controlled: '1'
scopus_import: 1
status: public
title: Majority vote of diverse classifiers for late fusion
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 8621
year: '2014'
...
---
_id: '2160'
abstract:
- lang: eng
  text: Transfer learning has received a lot of attention in the machine learning
    community in recent years, and several effective algorithms have been developed.
    However, relatively little is known about their theoretical properties, especially
    in the setting of lifelong learning, where the goal is to transfer information
    to tasks for which no data have been observed so far. In this work we study lifelong
    learning from a theoretical perspective. Our main result is a PAC-Bayesian generalization
    bound that offers a unified view on existing paradigms for transfer learning,
    such as the transfer of parameters or the transfer of low-dimensional representations.
    We also use the bound to derive two principled lifelong learning algorithms, and
    we show that these yield results comparable with existing methods.
article_processing_charge: No
author:
- first_name: Anastasia
  full_name: Pentina, Anastasia
  id: 42E87FC6-F248-11E8-B48F-1D18A9856A87
  last_name: Pentina
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Pentina A, Lampert C. A PAC-Bayesian bound for Lifelong Learning. In: Vol
    32. ML Research Press; 2014:991-999.'
  apa: 'Pentina, A., &#38; Lampert, C. (2014). A PAC-Bayesian bound for Lifelong Learning
    (Vol. 32, pp. 991–999). Presented at the ICML: International Conference on Machine
    Learning, Beijing, China: ML Research Press.'
  chicago: Pentina, Anastasia, and Christoph Lampert. “A PAC-Bayesian Bound for Lifelong
    Learning,” 32:991–99. ML Research Press, 2014.
  ieee: 'A. Pentina and C. Lampert, “A PAC-Bayesian bound for Lifelong Learning,”
    presented at the ICML: International Conference on Machine Learning, Beijing,
    China, 2014, vol. 32, pp. 991–999.'
  ista: 'Pentina A, Lampert C. 2014. A PAC-Bayesian bound for Lifelong Learning. ICML:
    International Conference on Machine Learning vol. 32, 991–999.'
  mla: Pentina, Anastasia, and Christoph Lampert. <i>A PAC-Bayesian Bound for Lifelong
    Learning</i>. Vol. 32, ML Research Press, 2014, pp. 991–99.
  short: A. Pentina, C. Lampert, in:, ML Research Press, 2014, pp. 991–999.
conference:
  end_date: 2014-06-26
  location: Beijing, China
  name: 'ICML: International Conference on Machine Learning'
  start_date: 2014-06-21
date_created: 2018-12-11T11:56:03Z
date_published: 2014-05-10T00:00:00Z
date_updated: 2023-10-17T11:54:24Z
day: '10'
department:
- _id: ChLa
intvolume: '32'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://dl.acm.org/citation.cfm?id=3045003
month: '05'
oa: 1
oa_version: Submitted Version
page: 991 - 999
publication_status: published
publisher: ML Research Press
publist_id: '4844'
quality_controlled: '1'
scopus_import: '1'
status: public
title: A PAC-Bayesian bound for Lifelong Learning
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 32
year: '2014'
...
---
_id: '2171'
abstract:
- lang: eng
  text: We present LS-CRF, a new method for training cyclic Conditional Random Fields
    (CRFs) from large datasets that is inspired by classical closed-form expressions
    for the maximum likelihood parameters of a generative graphical model with tree
    topology. Training a CRF with LS-CRF requires only solving a set of independent
    regression problems, each of which can be solved efficiently in closed form or
    by an iterative solver. This makes LS-CRF orders of magnitude faster than classical
    CRF training based on probabilistic inference, and at the same time more flexible
    and easier to implement than other approximate techniques, such as pseudolikelihood
    or piecewise training. We apply LS-CRF to the task of semantic image segmentation,
    showing that it achieves accuracy on par with other training techniques at higher
    speed, thereby allowing efficient CRF training from very large training sets.
    For example, training a linearly parameterized pairwise CRF on 150,000 images
    requires less than one hour on a modern workstation.
alternative_title:
- LNCS
author:
- first_name: Alexander
  full_name: Kolesnikov, Alexander
  id: 2D157DB6-F248-11E8-B48F-1D18A9856A87
  last_name: Kolesnikov
- first_name: Matthieu
  full_name: Guillaumin, Matthieu
  last_name: Guillaumin
- first_name: Vittorio
  full_name: Ferrari, Vittorio
  last_name: Ferrari
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Kolesnikov A, Guillaumin M, Ferrari V, Lampert C. Closed-form approximate
    CRF training for scalable image segmentation. In: Fleet D, Pajdla T, Schiele B,
    Tuytelaars T, eds. <i>Lecture Notes in Computer Science (Including Subseries Lecture
    Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)</i>. Vol
    8691. Springer; 2014:550-565. doi:<a href="https://doi.org/10.1007/978-3-319-10578-9_36">10.1007/978-3-319-10578-9_36</a>'
  apa: 'Kolesnikov, A., Guillaumin, M., Ferrari, V., &#38; Lampert, C. (2014). Closed-form
    approximate CRF training for scalable image segmentation. In D. Fleet, T. Pajdla,
    B. Schiele, &#38; T. Tuytelaars (Eds.), <i>Lecture Notes in Computer Science (including
    subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)</i>
    (Vol. 8691, pp. 550–565). Zurich, Switzerland: Springer. <a href="https://doi.org/10.1007/978-3-319-10578-9_36">https://doi.org/10.1007/978-3-319-10578-9_36</a>'
  chicago: Kolesnikov, Alexander, Matthieu Guillaumin, Vittorio Ferrari, and Christoph
    Lampert. “Closed-Form Approximate CRF Training for Scalable Image Segmentation.”
    In <i>Lecture Notes in Computer Science (Including Subseries Lecture Notes in
    Artificial Intelligence and Lecture Notes in Bioinformatics)</i>, edited by David
    Fleet, Tomas Pajdla, Bernt Schiele, and Tinne Tuytelaars, 8691:550–65. Springer,
    2014. <a href="https://doi.org/10.1007/978-3-319-10578-9_36">https://doi.org/10.1007/978-3-319-10578-9_36</a>.
  ieee: A. Kolesnikov, M. Guillaumin, V. Ferrari, and C. Lampert, “Closed-form approximate
    CRF training for scalable image segmentation,” in <i>Lecture Notes in Computer
    Science (including subseries Lecture Notes in Artificial Intelligence and Lecture
    Notes in Bioinformatics)</i>, Zurich, Switzerland, 2014, vol. 8691, no. PART 3,
    pp. 550–565.
  ista: 'Kolesnikov A, Guillaumin M, Ferrari V, Lampert C. 2014. Closed-form approximate
    CRF training for scalable image segmentation. Lecture Notes in Computer Science
    (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes
    in Bioinformatics). ECCV: European Conference on Computer Vision, LNCS, vol. 8691,
    550–565.'
  mla: Kolesnikov, Alexander, et al. “Closed-Form Approximate CRF Training for Scalable
    Image Segmentation.” <i>Lecture Notes in Computer Science (Including Subseries
    Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)</i>,
    edited by David Fleet et al., vol. 8691, no. PART 3, Springer, 2014, pp. 550–65,
    doi:<a href="https://doi.org/10.1007/978-3-319-10578-9_36">10.1007/978-3-319-10578-9_36</a>.
  short: A. Kolesnikov, M. Guillaumin, V. Ferrari, C. Lampert, in:, D. Fleet, T. Pajdla,
    B. Schiele, T. Tuytelaars (Eds.), Lecture Notes in Computer Science (Including
    Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics),
    Springer, 2014, pp. 550–565.
conference:
  end_date: 2014-09-12
  location: Zurich, Switzerland
  name: 'ECCV: European Conference on Computer Vision'
  start_date: 2014-09-06
date_created: 2018-12-11T11:56:07Z
date_published: 2014-09-01T00:00:00Z
date_updated: 2021-01-12T06:55:46Z
day: '01'
department:
- _id: ChLa
doi: 10.1007/978-3-319-10578-9_36
ec_funded: 1
editor:
- first_name: David
  full_name: Fleet, David
  last_name: Fleet
- first_name: Tomas
  full_name: Pajdla, Tomas
  last_name: Pajdla
- first_name: Bernt
  full_name: Schiele, Bernt
  last_name: Schiele
- first_name: Tinne
  full_name: Tuytelaars, Tinne
  last_name: Tuytelaars
intvolume: '8691'
issue: PART 3
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: http://arxiv.org/abs/1403.7057
month: '09'
oa: 1
oa_version: Submitted Version
page: 550 - 565
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
  call_identifier: FP7
  grant_number: '308036'
  name: Lifelong Learning of Visual Scene Understanding
publication: Lecture Notes in Computer Science (including subseries Lecture Notes
  in Artificial Intelligence and Lecture Notes in Bioinformatics)
publication_status: published
publisher: Springer
publist_id: '4813'
quality_controlled: '1'
scopus_import: 1
status: public
title: Closed-form approximate CRF training for scalable image segmentation
type: conference
user_id: 4435EBFC-F248-11E8-B48F-1D18A9856A87
volume: 8691
year: '2014'
...
---
_id: '2172'
abstract:
- lang: eng
  text: Fisher Kernels and Deep Learning were two developments with significant impact
    on large-scale object categorization in recent years. Both approaches were shown
    to achieve state-of-the-art results on large-scale object categorization datasets,
    such as ImageNet. Conceptually, however, they are perceived as very different
    and it is not uncommon for heated debates to spring up when advocates of both
    paradigms meet at conferences or workshops. In this work, we emphasize the similarities
    between both architectures rather than their differences and we argue that such
    a unified view allows us to transfer ideas from one domain to the other. As a
    concrete example we introduce a method for learning a support vector machine classifier
    with Fisher kernel at the same time as a task-specific data representation. We
    reinterpret the setting as a multi-layer feed-forward network. Its final layer
    is the classifier, parameterized by a weight vector, and the two previous layers
    compute Fisher vectors, parameterized by the coefficients of a Gaussian mixture
    model. We introduce a gradient descent based learning algorithm that, in contrast
    to other feature learning techniques, is not just derived from intuition or biological
    analogy, but has a theoretical justification in the framework of statistical learning
    theory. Our experiments show that the new training procedure leads to significant
    improvements in classification accuracy while preserving the modularity and geometric
    interpretability of a support vector machine setup.
author:
- first_name: Vladyslav
  full_name: Sydorov, Vladyslav
  last_name: Sydorov
- first_name: Mayu
  full_name: Sakurada, Mayu
  last_name: Sakurada
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Sydorov V, Sakurada M, Lampert C. Deep Fisher Kernels – End to end learning
    of the Fisher Kernel GMM parameters. In: <i>Proceedings of the IEEE Computer Society
    Conference on Computer Vision and Pattern Recognition</i>. IEEE; 2014:1402-1409.
    doi:<a href="https://doi.org/10.1109/CVPR.2014.182">10.1109/CVPR.2014.182</a>'
  apa: 'Sydorov, V., Sakurada, M., &#38; Lampert, C. (2014). Deep Fisher Kernels –
    End to end learning of the Fisher Kernel GMM parameters. In <i>Proceedings of
    the IEEE Computer Society Conference on Computer Vision and Pattern Recognition</i>
    (pp. 1402–1409). Columbus, USA: IEEE. <a href="https://doi.org/10.1109/CVPR.2014.182">https://doi.org/10.1109/CVPR.2014.182</a>'
  chicago: Sydorov, Vladyslav, Mayu Sakurada, and Christoph Lampert. “Deep Fisher
    Kernels – End to End Learning of the Fisher Kernel GMM Parameters.” In <i>Proceedings
    of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition</i>,
    1402–9. IEEE, 2014. <a href="https://doi.org/10.1109/CVPR.2014.182">https://doi.org/10.1109/CVPR.2014.182</a>.
  ieee: V. Sydorov, M. Sakurada, and C. Lampert, “Deep Fisher Kernels – End to end
    learning of the Fisher Kernel GMM parameters,” in <i>Proceedings of the IEEE Computer
    Society Conference on Computer Vision and Pattern Recognition</i>, Columbus, USA,
    2014, pp. 1402–1409.
  ista: 'Sydorov V, Sakurada M, Lampert C. 2014. Deep Fisher Kernels – End to end
    learning of the Fisher Kernel GMM parameters. Proceedings of the IEEE Computer
    Society Conference on Computer Vision and Pattern Recognition. CVPR: Computer
    Vision and Pattern Recognition, 1402–1409.'
  mla: Sydorov, Vladyslav, et al. “Deep Fisher Kernels – End to End Learning of the
    Fisher Kernel GMM Parameters.” <i>Proceedings of the IEEE Computer Society Conference
    on Computer Vision and Pattern Recognition</i>, IEEE, 2014, pp. 1402–09, doi:<a
    href="https://doi.org/10.1109/CVPR.2014.182">10.1109/CVPR.2014.182</a>.
  short: V. Sydorov, M. Sakurada, C. Lampert, in:, Proceedings of the IEEE Computer
    Society Conference on Computer Vision and Pattern Recognition, IEEE, 2014, pp.
    1402–1409.
conference:
  end_date: 2014-06-28
  location: Columbus, USA
  name: 'CVPR: Computer Vision and Pattern Recognition'
  start_date: 2014-06-23
date_created: 2018-12-11T11:56:08Z
date_published: 2014-09-24T00:00:00Z
date_updated: 2021-01-12T06:55:46Z
day: '24'
department:
- _id: ChLa
doi: 10.1109/CVPR.2014.182
ec_funded: 1
language:
- iso: eng
month: '09'
oa_version: None
page: 1402 - 1409
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
  call_identifier: FP7
  grant_number: '308036'
  name: Lifelong Learning of Visual Scene Understanding
publication: Proceedings of the IEEE Computer Society Conference on Computer Vision
  and Pattern Recognition
publication_status: published
publisher: IEEE
publist_id: '4812'
quality_controlled: '1'
scopus_import: 1
status: public
title: Deep Fisher Kernels – End to end learning of the Fisher Kernel GMM parameters
type: conference
user_id: 4435EBFC-F248-11E8-B48F-1D18A9856A87
year: '2014'
...
---
_id: '2173'
abstract:
- lang: eng
  text: "In this work we introduce a new approach to co-classification, i.e. the task
    of jointly classifying multiple, otherwise independent, data samples. The method
    we present, named CoConut, is based on the idea of adding a regularizer in the
    label space to encode certain priors on the resulting labelings. A regularizer
    that encourages labelings that are smooth across the test set, for instance, can
    be seen as a test-time variant of the cluster assumption, which has been proven
    useful at training time in semi-supervised learning. A regularizer that introduces
    a preference for certain class proportions can be regarded as a prior distribution
    on the class labels. CoConut can build on existing classifiers without making
    any assumptions on how they were obtained and without the need to re-train them.
    The use of a regularizer adds a new level of flexibility. It allows the integration
    of potentially new information at test time, even in other modalities than what
    the classifiers were trained on. We evaluate our framework on six datasets, reporting
    a clear performance gain in classification accuracy compared to the standard classification
    setup that predicts labels for each test sample separately."
author:
- first_name: Sameh
  full_name: Khamis, Sameh
  last_name: Khamis
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Khamis S, Lampert C. CoConut: Co-classification with output space regularization.
    In: <i>Proceedings of the British Machine Vision Conference 2014</i>. BMVA Press;
    2014.'
  apa: 'Khamis, S., &#38; Lampert, C. (2014). CoConut: Co-classification with output
    space regularization. In <i>Proceedings of the British Machine Vision Conference
    2014</i>. Nottingham, UK: BMVA Press.'
  chicago: 'Khamis, Sameh, and Christoph Lampert. “CoConut: Co-Classification with
    Output Space Regularization.” In <i>Proceedings of the British Machine Vision
    Conference 2014</i>. BMVA Press, 2014.'
  ieee: 'S. Khamis and C. Lampert, “CoConut: Co-classification with output space regularization,”
    in <i>Proceedings of the British Machine Vision Conference 2014</i>, Nottingham,
    UK, 2014.'
  ista: 'Khamis S, Lampert C. 2014. CoConut: Co-classification with output space regularization.
    Proceedings of the British Machine Vision Conference 2014. BMVC: British Machine
    Vision Conference.'
  mla: 'Khamis, Sameh, and Christoph Lampert. “CoConut: Co-Classification with Output
    Space Regularization.” <i>Proceedings of the British Machine Vision Conference
    2014</i>, BMVA Press, 2014.'
  short: S. Khamis, C. Lampert, in:, Proceedings of the British Machine Vision Conference
    2014, BMVA Press, 2014.
conference:
  end_date: 2014-09-05
  location: Nottingham, UK
  name: 'BMVC: British Machine Vision Conference'
  start_date: 2014-09-01
date_created: 2018-12-11T11:56:08Z
date_published: 2014-09-01T00:00:00Z
date_updated: 2021-01-12T06:55:46Z
day: '01'
ddc:
- '000'
department:
- _id: ChLa
ec_funded: 1
file:
- access_level: open_access
  checksum: c4c6d3efdb8ee648faf3e76849839ce2
  content_type: application/pdf
  creator: system
  date_created: 2018-12-12T10:08:23Z
  date_updated: 2020-07-14T12:45:31Z
  file_id: '4683'
  file_name: IST-2016-490-v1+1_khamis-bmvc2014.pdf
  file_size: 408172
  relation: main_file
file_date_updated: 2020-07-14T12:45:31Z
has_accepted_license: '1'
language:
- iso: eng
month: '09'
oa: 1
oa_version: Published Version
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
  call_identifier: FP7
  grant_number: '308036'
  name: Lifelong Learning of Visual Scene Understanding
publication: Proceedings of the British Machine Vision Conference 2014
publication_status: published
publisher: BMVA Press
publist_id: '4811'
pubrep_id: '490'
quality_controlled: '1'
scopus_import: 1
status: public
title: 'CoConut: Co-classification with output space regularization'
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
year: '2014'
...
---
_id: '2180'
abstract:
- lang: eng
  text: Weighted majority votes allow one to combine the output of several classifiers
    or voters. MinCq is a recent algorithm for optimizing the weight of each voter
    based on the minimization of a theoretical bound over the risk of the vote with
    elegant PAC-Bayesian generalization guarantees. However, while it has demonstrated
    good performance when combining weak classifiers, MinCq cannot make use of the
    useful a priori knowledge that one may have when using a mixture of weak and strong
    voters. In this paper, we propose P-MinCq, an extension of MinCq that can incorporate
    such knowledge in the form of a constraint over the distribution of the weights,
    along with general proofs of convergence that stand in the sample compression
    setting for data-dependent voters. The approach is applied to a vote of k-NN classifiers
    with a specific modeling of the voters' performance. P-MinCq significantly outperforms
    the classic k-NN classifier, a symmetric NN and MinCq using the same voters. We
    show that it is also competitive with LMNN, a popular metric learning algorithm,
    and that combining both approaches further reduces the error.
acknowledgement: 'This work was funded by the French project SoLSTiCe ANR-13-BS02-01
  of the ANR.'
author:
- first_name: Aurélien
  full_name: Bellet, Aurélien
  last_name: Bellet
- first_name: Amaury
  full_name: Habrard, Amaury
  last_name: Habrard
- first_name: Emilie
  full_name: Morvant, Emilie
  id: 4BAC2A72-F248-11E8-B48F-1D18A9856A87
  last_name: Morvant
  orcid: 0000-0002-8301-7240
- first_name: Marc
  full_name: Sebban, Marc
  last_name: Sebban
citation:
  ama: Bellet A, Habrard A, Morvant E, Sebban M. Learning a priori constrained weighted
    majority votes. <i>Machine Learning</i>. 2014;97(1-2):129-154. doi:<a href="https://doi.org/10.1007/s10994-014-5462-z">10.1007/s10994-014-5462-z</a>
  apa: Bellet, A., Habrard, A., Morvant, E., &#38; Sebban, M. (2014). Learning a priori
    constrained weighted majority votes. <i>Machine Learning</i>. Springer. <a href="https://doi.org/10.1007/s10994-014-5462-z">https://doi.org/10.1007/s10994-014-5462-z</a>
  chicago: Bellet, Aurélien, Amaury Habrard, Emilie Morvant, and Marc Sebban. “Learning
    a Priori Constrained Weighted Majority Votes.” <i>Machine Learning</i>. Springer,
    2014. <a href="https://doi.org/10.1007/s10994-014-5462-z">https://doi.org/10.1007/s10994-014-5462-z</a>.
  ieee: A. Bellet, A. Habrard, E. Morvant, and M. Sebban, “Learning a priori constrained
    weighted majority votes,” <i>Machine Learning</i>, vol. 97, no. 1–2. Springer,
    pp. 129–154, 2014.
  ista: Bellet A, Habrard A, Morvant E, Sebban M. 2014. Learning a priori constrained
    weighted majority votes. Machine Learning. 97(1–2), 129–154.
  mla: Bellet, Aurélien, et al. “Learning a Priori Constrained Weighted Majority Votes.”
    <i>Machine Learning</i>, vol. 97, no. 1–2, Springer, 2014, pp. 129–54, doi:<a
    href="https://doi.org/10.1007/s10994-014-5462-z">10.1007/s10994-014-5462-z</a>.
  short: A. Bellet, A. Habrard, E. Morvant, M. Sebban, Machine Learning 97 (2014)
    129–154.
date_created: 2018-12-11T11:56:10Z
date_published: 2014-10-01T00:00:00Z
date_updated: 2021-01-12T06:55:49Z
day: '01'
department:
- _id: ChLa
doi: 10.1007/s10994-014-5462-z
ec_funded: 1
intvolume: '97'
issue: 1-2
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://hal.archives-ouvertes.fr/hal-01009578/document
month: '10'
oa: 1
oa_version: Submitted Version
page: 129 - 154
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
  call_identifier: FP7
  grant_number: '308036'
  name: Lifelong Learning of Visual Scene Understanding
publication: Machine Learning
publication_status: published
publisher: Springer
publist_id: '4802'
quality_controlled: '1'
scopus_import: 1
status: public
title: Learning a priori constrained weighted majority votes
type: journal_article
user_id: 4435EBFC-F248-11E8-B48F-1D18A9856A87
volume: 97
year: '2014'
...
---
_id: '2189'
abstract:
- lang: eng
  text: In machine learning, we speak of domain adaptation when the test (target)
    and training (source) data are generated according to different distributions.
    We must therefore develop classification algorithms able to adapt to a new distribution
    for which no label information is available. We attack this problem from the angle
    of the PAC-Bayesian approach, which focuses on learning models defined as majority
    votes over a set of functions. In this context, we introduce PV-MinCq, an adaptive
    version of the (non-adaptive) MinCq algorithm. PV-MinCq works as follows. We transfer
    the source labels to nearby target points and then apply MinCq to the "self-labeled"
    target sample (a step justified by a theoretical bound). More precisely, we define
    a non-iterative self-labeling that focuses on the regions where the source and
    target marginal distributions are most similar. We then study the influence of
    our self-labeling in order to derive a hyperparameter validation procedure. Finally,
    our approach shows promising empirical results.
article_processing_charge: No
author:
- first_name: Emilie
  full_name: Morvant, Emilie
  id: 4BAC2A72-F248-11E8-B48F-1D18A9856A87
  last_name: Morvant
  orcid: 0000-0002-8301-7240
citation:
  ama: 'Morvant E. Adaptation de domaine de vote de majorité par auto-étiquetage non
    itératif. In: Vol 1. Elsevier; 2014:49-58.'
  apa: 'Morvant, E. (2014). Adaptation de domaine de vote de majorité par auto-étiquetage
    non itératif (Vol. 1, pp. 49–58). Presented at the CAP: Conférence Francophone
    sur l’Apprentissage Automatique (Machine Learning French Conference), Saint-Etienne,
    France: Elsevier.'
  chicago: Morvant, Emilie. “Adaptation de Domaine de Vote de Majorité Par Auto-Étiquetage
    Non Itératif,” 1:49–58. Elsevier, 2014.
  ieee: 'E. Morvant, “Adaptation de domaine de vote de majorité par auto-étiquetage
    non itératif,” presented at the CAP: Conférence Francophone sur l’Apprentissage
    Automatique (Machine Learning French Conference), Saint-Etienne, France, 2014,
    vol. 1, pp. 49–58.'
  ista: 'Morvant E. 2014. Adaptation de domaine de vote de majorité par auto-étiquetage
    non itératif. CAP: Conférence Francophone sur l’Apprentissage Automatique (Machine
    Learning French Conference) vol. 1, 49–58.'
  mla: Morvant, Emilie. <i>Adaptation de Domaine de Vote de Majorité Par Auto-Étiquetage
    Non Itératif</i>. Vol. 1, Elsevier, 2014, pp. 49–58.
  short: E. Morvant, in:, Elsevier, 2014, pp. 49–58.
conference:
  location: Saint-Etienne, France
  name: 'CAP: Conférence Francophone sur l''Apprentissage Automatique (Machine Learning
    French Conference)'
date_created: 2018-12-11T11:56:13Z
date_published: 2014-07-01T00:00:00Z
date_updated: 2021-01-12T06:55:52Z
day: '01'
department:
- _id: ChLa
intvolume: '1'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://hal.archives-ouvertes.fr/hal-01005776/
month: '07'
oa: 1
oa_version: Preprint
page: 49-58
publication_status: published
publisher: Elsevier
publist_id: '4785'
quality_controlled: '1'
status: public
title: Adaptation de domaine de vote de majorité par auto-étiquetage non itératif
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 1
year: '2014'
...
---
_id: '2516'
abstract:
- lang: eng
  text: 'We study the problem of object recognition for categories for which we have
    no training examples, a task also called zero-data or zero-shot learning. This
    situation has hardly been studied in computer vision research, even though it
    occurs frequently: the world contains tens of thousands of different object classes
    and for only a few of them have image collections been formed and suitably annotated.
    To tackle the problem, we introduce attribute-based classification: objects are
    identified based on a high-level description that is phrased in terms of semantic
    attributes, such as the object''s color or shape. Because the identification of
    each such property transcends the specific learning task at hand, the attribute
    classifiers can be pre-learned independently, e.g. from existing image datasets
    unrelated to the current task. Afterwards, new classes can be detected based on
    their attribute representation, without the need for a new training phase. In
    this paper we also introduce a new dataset, Animals with Attributes, of over 30,000
    images of 50 animal classes, annotated with 85 semantic attributes. Extensive
    experiments on this and two more datasets show that attribute-based classification
    indeed is able to categorize images without access to any training images of the
    target classes.'
author:
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
- first_name: Hannes
  full_name: Nickisch, Hannes
  last_name: Nickisch
- first_name: Stefan
  full_name: Harmeling, Stefan
  last_name: Harmeling
citation:
  ama: Lampert C, Nickisch H, Harmeling S. Attribute-based classification for zero-shot
    learning of object categories. <i>IEEE Transactions on Pattern Analysis and Machine
    Intelligence</i>. 2013;36(3):453-465. doi:<a href="https://doi.org/10.1109/TPAMI.2013.140">10.1109/TPAMI.2013.140</a>
  apa: Lampert, C., Nickisch, H., &#38; Harmeling, S. (2013). Attribute-based classification
    for zero-shot learning of object categories. <i>IEEE Transactions on Pattern Analysis
    and Machine Intelligence</i>. IEEE. <a href="https://doi.org/10.1109/TPAMI.2013.140">https://doi.org/10.1109/TPAMI.2013.140</a>
  chicago: Lampert, Christoph, Hannes Nickisch, and Stefan Harmeling. “Attribute-Based
    Classification for Zero-Shot Learning of Object Categories.” <i>IEEE Transactions
    on Pattern Analysis and Machine Intelligence</i>. IEEE, 2013. <a href="https://doi.org/10.1109/TPAMI.2013.140">https://doi.org/10.1109/TPAMI.2013.140</a>.
  ieee: C. Lampert, H. Nickisch, and S. Harmeling, “Attribute-based classification
    for zero-shot learning of object categories,” <i>IEEE Transactions on Pattern
    Analysis and Machine Intelligence</i>, vol. 36, no. 3. IEEE, pp. 453–465, 2013.
  ista: Lampert C, Nickisch H, Harmeling S. 2013. Attribute-based classification for
    zero-shot learning of object categories. IEEE Transactions on Pattern Analysis
    and Machine Intelligence. 36(3), 453–465.
  mla: Lampert, Christoph, et al. “Attribute-Based Classification for Zero-Shot Learning
    of Object Categories.” <i>IEEE Transactions on Pattern Analysis and Machine Intelligence</i>,
    vol. 36, no. 3, IEEE, 2013, pp. 453–65, doi:<a href="https://doi.org/10.1109/TPAMI.2013.140">10.1109/TPAMI.2013.140</a>.
  short: C. Lampert, H. Nickisch, S. Harmeling, IEEE Transactions on Pattern Analysis
    and Machine Intelligence 36 (2013) 453–465.
date_created: 2018-12-11T11:58:08Z
date_published: 2013-07-30T00:00:00Z
date_updated: 2021-01-12T06:57:58Z
day: '30'
department:
- _id: ChLa
doi: 10.1109/TPAMI.2013.140
intvolume: '36'
issue: '3'
language:
- iso: eng
month: '07'
oa_version: None
page: 453 - 465
publication: IEEE Transactions on Pattern Analysis and Machine Intelligence
publication_status: published
publisher: IEEE
publist_id: '4385'
quality_controlled: '1'
scopus_import: 1
status: public
title: Attribute-based classification for zero-shot learning of object categories
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 36
year: '2013'
...
---
_id: '2520'
abstract:
- lang: eng
  text: 'We propose a probabilistic model to infer supervised latent variables in
    the Hamming space from observed data. Our model allows simultaneous inference
    of the number of binary latent variables, and their values. The latent variables
    preserve neighbourhood structure of the data in a sense that objects in the same
    semantic concept have similar latent values, and objects in different concepts
    have dissimilar latent values. We formulate the supervised infinite latent variable
    problem based on an intuitive principle of pulling objects together if they are
    of the same type, and pushing them apart if they are not. We then combine this
    principle with a flexible Indian Buffet Process prior on the latent variables.
    We show that the inferred supervised latent variables can be directly used to
    perform a nearest neighbour search for the purpose of retrieval. We introduce
    a new application of dynamically extending hash codes, and show how to effectively
    couple the structure of the hash codes with continuously growing structure of
    the neighbourhood preserving infinite latent feature space.'
author:
- first_name: Novi
  full_name: Quadrianto, Novi
  last_name: Quadrianto
- first_name: Viktoriia
  full_name: Sharmanska, Viktoriia
  id: 2EA6D09E-F248-11E8-B48F-1D18A9856A87
  last_name: Sharmanska
  orcid: 0000-0003-0192-9308
- first_name: David
  full_name: Knowles, David
  last_name: Knowles
- first_name: Zoubin
  full_name: Ghahramani, Zoubin
  last_name: Ghahramani
citation:
  ama: 'Quadrianto N, Sharmanska V, Knowles D, Ghahramani Z. The supervised IBP: Neighbourhood
    preserving infinite latent feature models. In: <i>Proceedings of the 29th Conference
    Uncertainty in Artificial Intelligence</i>. AUAI Press; 2013:527-536.'
  apa: 'Quadrianto, N., Sharmanska, V., Knowles, D., &#38; Ghahramani, Z. (2013).
    The supervised IBP: Neighbourhood preserving infinite latent feature models. In
    <i>Proceedings of the 29th conference uncertainty in Artificial Intelligence</i>
    (pp. 527–536). Bellevue, WA, United States: AUAI Press.'
  chicago: 'Quadrianto, Novi, Viktoriia Sharmanska, David Knowles, and Zoubin Ghahramani.
    “The Supervised IBP: Neighbourhood Preserving Infinite Latent Feature Models.”
    In <i>Proceedings of the 29th Conference Uncertainty in Artificial Intelligence</i>,
    527–36. AUAI Press, 2013.'
  ieee: 'N. Quadrianto, V. Sharmanska, D. Knowles, and Z. Ghahramani, “The supervised
    IBP: Neighbourhood preserving infinite latent feature models,” in <i>Proceedings
    of the 29th conference uncertainty in Artificial Intelligence</i>, Bellevue, WA,
    United States, 2013, pp. 527–536.'
  ista: 'Quadrianto N, Sharmanska V, Knowles D, Ghahramani Z. 2013. The supervised
    IBP: Neighbourhood preserving infinite latent feature models. Proceedings of the
    29th conference uncertainty in Artificial Intelligence. UAI: Uncertainty in Artificial
    Intelligence, 527–536.'
  mla: 'Quadrianto, Novi, et al. “The Supervised IBP: Neighbourhood Preserving Infinite
    Latent Feature Models.” <i>Proceedings of the 29th Conference Uncertainty in Artificial
    Intelligence</i>, AUAI Press, 2013, pp. 527–36.'
  short: N. Quadrianto, V. Sharmanska, D. Knowles, Z. Ghahramani, in:, Proceedings
    of the 29th Conference Uncertainty in Artificial Intelligence, AUAI Press, 2013,
    pp. 527–536.
conference:
  end_date: 2013-07-15
  location: Bellevue, WA, United States
  name: 'UAI: Uncertainty in Artificial Intelligence'
  start_date: 2013-07-11
date_created: 2018-12-11T11:58:09Z
date_published: 2013-07-11T00:00:00Z
date_updated: 2023-02-23T10:46:36Z
day: '11'
ddc:
- '000'
department:
- _id: ChLa
file:
- access_level: open_access
  checksum: 325f20c4b926bd74d39006b97df572bd
  content_type: application/pdf
  creator: system
  date_created: 2018-12-12T10:15:16Z
  date_updated: 2020-07-14T12:45:42Z
  file_id: '5134'
  file_name: IST-2013-137-v1+1_QuaShaKnoGha13.pdf
  file_size: 1117100
  relation: main_file
file_date_updated: 2020-07-14T12:45:42Z
has_accepted_license: '1'
language:
- iso: eng
month: '07'
oa: 1
oa_version: Submitted Version
page: 527 - 536
publication: Proceedings of the 29th conference uncertainty in Artificial Intelligence
publication_identifier:
  isbn:
  - '9780974903996'
publication_status: published
publisher: AUAI Press
publist_id: '4381'
pubrep_id: '137'
quality_controlled: '1'
scopus_import: 1
status: public
title: 'The supervised IBP: Neighbourhood preserving infinite latent feature models'
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2013'
...
---
_id: '2901'
abstract:
- lang: eng
  text: 'We introduce the M-modes problem for graphical models: predicting the M
    label configurations of highest probability that are at the same time local maxima
    of the probability landscape. M-modes have multiple possible applications: because
    they are intrinsically diverse, they provide a principled alternative to non-maximum
    suppression techniques for structured prediction, they can act as codebook vectors
    for quantizing the configuration space, or they can form component centers for
    mixture model approximation. We present two algorithms for solving the M-modes
    problem. The first algorithm solves the problem in polynomial time when the underlying
    graphical model is a simple chain. The second algorithm solves the problem for
    junction chains. In synthetic and real datasets, we demonstrate how M-modes can
    improve the performance of prediction. We also use the generated modes as a tool
    to understand the topography of the probability distribution of configurations,
    for example with relation to the training set size and amount of noise in the
    data.'
alternative_title:
- 'JMLR: W&CP'
author:
- first_name: Chao
  full_name: Chen, Chao
  id: 3E92416E-F248-11E8-B48F-1D18A9856A87
  last_name: Chen
- first_name: Vladimir
  full_name: Kolmogorov, Vladimir
  id: 3D50B0BA-F248-11E8-B48F-1D18A9856A87
  last_name: Kolmogorov
- first_name: Zhu
  full_name: Yan, Zhu
  last_name: Yan
- first_name: Dimitris
  full_name: Metaxas, Dimitris
  last_name: Metaxas
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Chen C, Kolmogorov V, Yan Z, Metaxas D, Lampert C. Computing the M most probable
    modes of a graphical model. In: Vol 31. JMLR; 2013:161-169.'
  apa: 'Chen, C., Kolmogorov, V., Yan, Z., Metaxas, D., &#38; Lampert, C. (2013).
    Computing the M most probable modes of a graphical model (Vol. 31, pp. 161–169).
    Presented at the AISTATS: Conference on Artificial Intelligence and Statistics,
    Scottsdale, AZ, United States: JMLR.'
  chicago: Chen, Chao, Vladimir Kolmogorov, Zhu Yan, Dimitris Metaxas, and Christoph
    Lampert. “Computing the M Most Probable Modes of a Graphical Model,” 31:161–69.
    JMLR, 2013.
  ieee: 'C. Chen, V. Kolmogorov, Z. Yan, D. Metaxas, and C. Lampert, “Computing the
    M most probable modes of a graphical model,” presented at the AISTATS: Conference
    on Artificial Intelligence and Statistics, Scottsdale, AZ, United States, 2013,
    vol. 31, pp. 161–169.'
  ista: 'Chen C, Kolmogorov V, Yan Z, Metaxas D, Lampert C. 2013. Computing the M
    most probable modes of a graphical model. AISTATS: Conference on Artificial
    Intelligence and Statistics, JMLR: W&#38;CP, vol. 31, 161–169.'
  mla: Chen, Chao, et al. <i>Computing the M Most Probable Modes of a Graphical Model</i>.
    Vol. 31, JMLR, 2013, pp. 161–69.
  short: C. Chen, V. Kolmogorov, Z. Yan, D. Metaxas, C. Lampert, in:, JMLR, 2013,
    pp. 161–169.
conference:
  end_date: 2013-05-01
  location: Scottsdale, AZ, United States
  name: 'AISTATS: Conference on Artificial Intelligence and Statistics'
  start_date: 2013-04-29
date_created: 2018-12-11T12:00:14Z
date_published: 2013-01-01T00:00:00Z
date_updated: 2021-01-12T07:00:35Z
day: '01'
department:
- _id: HeEd
- _id: VlKo
- _id: ChLa
intvolume: '31'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: http://jmlr.org/proceedings/papers/v31/chen13a.html
month: '01'
oa: 1
oa_version: None
page: 161 - 169
publication_status: published
publisher: JMLR
publist_id: '3846'
quality_controlled: '1'
scopus_import: 1
status: public
title: Computing the M most probable modes of a graphical model
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 31
year: '2013'
...
---
_id: '2948'
abstract:
- lang: eng
  text: 'Many visual datasets are traditionally used to analyze the performance of
    different learning techniques. The evaluation is usually done within each dataset,
    therefore it is questionable if such results are a reliable indicator of true
    generalization ability. We propose here an algorithm to exploit the existing data
    resources when learning on a new multiclass problem. Our main idea is to identify
    an image representation that decomposes orthogonally into two subspaces: a part
    specific to each dataset, and a part generic to, and therefore shared between,
    all the considered source sets. This allows us to use the generic representation
    as un-biased reference knowledge for a novel classification task. By casting the
    method in the multi-view setting, we also make it possible to use different features
    for different databases. We call the algorithm MUST, Multitask Unaligned Shared
    knowledge Transfer. Through extensive experiments on five public datasets, we
    show that MUST consistently improves the cross-datasets generalization performance.'
acknowledgement: This work was supported by the PASCAL 2 Network of Excellence (TT)
  and by the Newton International Fellowship (NQ)
alternative_title:
- LNCS
author:
- first_name: Tatiana
  full_name: Tommasi, Tatiana
  last_name: Tommasi
- first_name: Novi
  full_name: Quadrianto, Novi
  last_name: Quadrianto
- first_name: Barbara
  full_name: Caputo, Barbara
  last_name: Caputo
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Tommasi T, Quadrianto N, Caputo B, Lampert C. Beyond dataset bias: Multi-task
    unaligned shared knowledge transfer. 2013;7724:1-15. doi:<a href="https://doi.org/10.1007/978-3-642-37331-2_1">10.1007/978-3-642-37331-2_1</a>'
  apa: 'Tommasi, T., Quadrianto, N., Caputo, B., &#38; Lampert, C. (2013). Beyond
    dataset bias: Multi-task unaligned shared knowledge transfer. Presented at the
    ACCV: Asian Conference on Computer Vision, Daejeon, Korea: Springer. <a href="https://doi.org/10.1007/978-3-642-37331-2_1">https://doi.org/10.1007/978-3-642-37331-2_1</a>'
  chicago: 'Tommasi, Tatiana, Novi Quadrianto, Barbara Caputo, and Christoph Lampert.
    “Beyond Dataset Bias: Multi-Task Unaligned Shared Knowledge Transfer.” Lecture
    Notes in Computer Science. Springer, 2013. <a href="https://doi.org/10.1007/978-3-642-37331-2_1">https://doi.org/10.1007/978-3-642-37331-2_1</a>.'
  ieee: 'T. Tommasi, N. Quadrianto, B. Caputo, and C. Lampert, “Beyond dataset bias:
    Multi-task unaligned shared knowledge transfer,” vol. 7724. Springer, pp. 1–15,
    2013.'
  ista: 'Tommasi T, Quadrianto N, Caputo B, Lampert C. 2013. Beyond dataset bias:
    Multi-task unaligned shared knowledge transfer. 7724, 1–15.'
  mla: 'Tommasi, Tatiana, et al. <i>Beyond Dataset Bias: Multi-Task Unaligned Shared
    Knowledge Transfer</i>. Vol. 7724, Springer, 2013, pp. 1–15, doi:<a href="https://doi.org/10.1007/978-3-642-37331-2_1">10.1007/978-3-642-37331-2_1</a>.'
  short: T. Tommasi, N. Quadrianto, B. Caputo, C. Lampert, 7724 (2013) 1–15.
conference:
  end_date: 2012-11-09
  location: Daejeon, Korea
  name: 'ACCV: Asian Conference on Computer Vision'
  start_date: 2012-11-05
date_created: 2018-12-11T12:00:30Z
date_published: 2013-04-04T00:00:00Z
date_updated: 2020-08-11T10:09:54Z
day: '04'
ddc:
- '000'
department:
- _id: ChLa
doi: 10.1007/978-3-642-37331-2_1
file:
- access_level: open_access
  checksum: a0a7234a89e2192af655b0d0ae3bf445
  content_type: application/pdf
  creator: dernst
  date_created: 2019-01-22T14:03:11Z
  date_updated: 2020-07-14T12:45:55Z
  file_id: '5874'
  file_name: 2012_ACCV_Tommasi.pdf
  file_size: 1513620
  relation: main_file
file_date_updated: 2020-07-14T12:45:55Z
has_accepted_license: '1'
intvolume: '7724'
language:
- iso: eng
month: '04'
oa: 1
oa_version: Submitted Version
page: 1 - 15
publication_status: published
publisher: Springer
publist_id: '3784'
quality_controlled: '1'
scopus_import: 1
series_title: Lecture Notes in Computer Science
status: public
title: 'Beyond dataset bias: Multi-task unaligned shared knowledge transfer'
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 7724
year: '2013'
...
---
_id: '3321'
author:
- first_name: Novi
  full_name: Quadrianto, Novi
  last_name: Quadrianto
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Quadrianto N, Lampert C. Kernel based learning. In: Dubitzky W, Wolkenhauer
    O, Cho K, Yokota H, eds. <i>Encyclopedia of Systems Biology</i>. Vol 3. Springer;
    2013:1069-1069. doi:<a href="https://doi.org/10.1007/978-1-4419-9863-7_604">10.1007/978-1-4419-9863-7_604</a>'
  apa: Quadrianto, N., &#38; Lampert, C. (2013). Kernel based learning. In W. Dubitzky,
    O. Wolkenhauer, K. Cho, &#38; H. Yokota (Eds.), <i>Encyclopedia of Systems Biology</i>
    (Vol. 3, pp. 1069–1069). Springer. <a href="https://doi.org/10.1007/978-1-4419-9863-7_604">https://doi.org/10.1007/978-1-4419-9863-7_604</a>
  chicago: Quadrianto, Novi, and Christoph Lampert. “Kernel Based Learning.” In <i>Encyclopedia
    of Systems Biology</i>, edited by Werner Dubitzky, Olaf Wolkenhauer, Kwang Cho,
    and Hiroki Yokota, 3:1069–1069. Springer, 2013. <a href="https://doi.org/10.1007/978-1-4419-9863-7_604">https://doi.org/10.1007/978-1-4419-9863-7_604</a>.
  ieee: N. Quadrianto and C. Lampert, “Kernel based learning,” in <i>Encyclopedia
    of Systems Biology</i>, vol. 3, W. Dubitzky, O. Wolkenhauer, K. Cho, and H. Yokota,
    Eds. Springer, 2013, pp. 1069–1069.
  ista: 'Quadrianto N, Lampert C. 2013. Kernel based learning. In: Encyclopedia of
    Systems Biology. vol. 3, 1069–1069.'
  mla: Quadrianto, Novi, and Christoph Lampert. “Kernel Based Learning.” <i>Encyclopedia
    of Systems Biology</i>, edited by Werner Dubitzky et al., vol. 3, Springer, 2013,
    pp. 1069–1069, doi:<a href="https://doi.org/10.1007/978-1-4419-9863-7_604">10.1007/978-1-4419-9863-7_604</a>.
  short: N. Quadrianto, C. Lampert, in:, W. Dubitzky, O. Wolkenhauer, K. Cho, H. Yokota
    (Eds.), Encyclopedia of Systems Biology, Springer, 2013, pp. 1069–1069.
date_created: 2018-12-11T12:02:39Z
date_published: 2013-01-01T00:00:00Z
date_updated: 2021-01-12T07:42:38Z
day: '01'
department:
- _id: ChLa
doi: 10.1007/978-1-4419-9863-7_604
editor:
- first_name: Werner
  full_name: Dubitzky, Werner
  last_name: Dubitzky
- first_name: Olaf
  full_name: Wolkenhauer, Olaf
  last_name: Wolkenhauer
- first_name: Kwang
  full_name: Cho, Kwang
  last_name: Cho
- first_name: Hiroki
  full_name: Yokota, Hiroki
  last_name: Yokota
intvolume: '3'
language:
- iso: eng
month: '01'
oa_version: None
page: 1069 - 1069
publication: Encyclopedia of Systems Biology
publication_status: published
publisher: Springer
publist_id: '3314'
quality_controlled: '1'
status: public
title: Kernel based learning
type: encyclopedia_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 3
year: '2013'
...
---
_id: '2293'
abstract:
- lang: eng
  text: Many computer vision problems have an asymmetric distribution of information
    between training and test time. In this work, we study the case where we are given
    additional information about the training data, which however will not be available
    at test time. This situation is called learning using privileged information (LUPI).
    We introduce two maximum-margin techniques that are able to make use of this additional
    source of information, and we show that the framework is applicable to several
    scenarios that have been studied in computer vision before. Experiments with attributes,
    bounding boxes, image tags and rationales as additional information in object
    classification show promising results.
author:
- first_name: Viktoriia
  full_name: Sharmanska, Viktoriia
  id: 2EA6D09E-F248-11E8-B48F-1D18A9856A87
  last_name: Sharmanska
  orcid: 0000-0003-0192-9308
- first_name: Novi
  full_name: Quadrianto, Novi
  last_name: Quadrianto
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Sharmanska V, Quadrianto N, Lampert C. Learning to rank using privileged information.
    In: IEEE; 2013:825-832. doi:<a href="https://doi.org/10.1109/ICCV.2013.107">10.1109/ICCV.2013.107</a>'
  apa: 'Sharmanska, V., Quadrianto, N., &#38; Lampert, C. (2013). Learning to rank
    using privileged information (pp. 825–832). Presented at the ICCV: International
    Conference on Computer Vision, Sydney, Australia: IEEE. <a href="https://doi.org/10.1109/ICCV.2013.107">https://doi.org/10.1109/ICCV.2013.107</a>'
  chicago: Sharmanska, Viktoriia, Novi Quadrianto, and Christoph Lampert. “Learning
    to Rank Using Privileged Information,” 825–32. IEEE, 2013. <a href="https://doi.org/10.1109/ICCV.2013.107">https://doi.org/10.1109/ICCV.2013.107</a>.
  ieee: 'V. Sharmanska, N. Quadrianto, and C. Lampert, “Learning to rank using privileged
    information,” presented at the ICCV: International Conference on Computer Vision,
    Sydney, Australia, 2013, pp. 825–832.'
  ista: 'Sharmanska V, Quadrianto N, Lampert C. 2013. Learning to rank using privileged
    information. ICCV: International Conference on Computer Vision, 825–832.'
  mla: Sharmanska, Viktoriia, et al. <i>Learning to Rank Using Privileged Information</i>.
    IEEE, 2013, pp. 825–32, doi:<a href="https://doi.org/10.1109/ICCV.2013.107">10.1109/ICCV.2013.107</a>.
  short: V. Sharmanska, N. Quadrianto, C. Lampert, in:, IEEE, 2013, pp. 825–832.
conference:
  end_date: 2013-12-08
  location: Sydney, Australia
  name: 'ICCV: International Conference on Computer Vision'
  start_date: 2013-12-01
date_created: 2018-12-11T11:56:49Z
date_published: 2013-12-01T00:00:00Z
date_updated: 2023-02-23T10:36:41Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/ICCV.2013.107
ec_funded: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://www.cv-foundation.org/openaccess/content_iccv_2013/papers/Sharmanska_Learning_to_Rank_2013_ICCV_paper.pdf
month: '12'
oa: 1
oa_version: Submitted Version
page: 825 - 832
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
  call_identifier: FP7
  grant_number: '308036'
  name: Lifelong Learning of Visual Scene Understanding
publication_status: published
publisher: IEEE
publist_id: '4635'
quality_controlled: '1'
scopus_import: 1
status: public
title: Learning to rank using privileged information
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2013'
...
---
_id: '2294'
abstract:
- lang: eng
  text: 'In this work we propose a system for automatic classification of Drosophila
    embryos into developmental stages. While the system is designed to solve an actual
    problem in biological research, we believe that the principle underlying it is
    interesting not only for biologists, but also for researchers in computer vision.
    The main idea is to combine two orthogonal sources of information: one is a classifier
    trained on strongly invariant features, which makes it applicable to images of
    very different conditions, but also leads to rather noisy predictions. The other
    is a label propagation step based on a more powerful similarity measure that however
    is only consistent within specific subsets of the data at a time. In our biological
    setup, the information sources are the shape and the staining patterns of embryo
    images. We show experimentally that while neither of the methods can be used by
    itself to achieve satisfactory results, their combination achieves prediction
    quality comparable to human performance.'
author:
- first_name: Tomas
  full_name: Kazmar, Tomas
  last_name: Kazmar
- first_name: Evgeny
  full_name: Kvon, Evgeny
  last_name: Kvon
- first_name: Alexander
  full_name: Stark, Alexander
  last_name: Stark
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Kazmar T, Kvon E, Stark A, Lampert C. Drosophila Embryo Stage Annotation using
    Label Propagation. In: IEEE; 2013. doi:<a href="https://doi.org/10.1109/ICCV.2013.139">10.1109/ICCV.2013.139</a>'
  apa: 'Kazmar, T., Kvon, E., Stark, A., &#38; Lampert, C. (2013). Drosophila Embryo
    Stage Annotation using Label Propagation. Presented at the ICCV: International
    Conference on Computer Vision, Sydney, Australia: IEEE. <a href="https://doi.org/10.1109/ICCV.2013.139">https://doi.org/10.1109/ICCV.2013.139</a>'
  chicago: Kazmar, Tomas, Evgeny Kvon, Alexander Stark, and Christoph Lampert. “Drosophila
    Embryo Stage Annotation Using Label Propagation.” IEEE, 2013. <a href="https://doi.org/10.1109/ICCV.2013.139">https://doi.org/10.1109/ICCV.2013.139</a>.
  ieee: 'T. Kazmar, E. Kvon, A. Stark, and C. Lampert, “Drosophila Embryo Stage Annotation
    using Label Propagation,” presented at the ICCV: International Conference on Computer
    Vision, Sydney, Australia, 2013.'
  ista: 'Kazmar T, Kvon E, Stark A, Lampert C. 2013. Drosophila Embryo Stage Annotation
    using Label Propagation. ICCV: International Conference on Computer Vision.'
  mla: Kazmar, Tomas, et al. <i>Drosophila Embryo Stage Annotation Using Label Propagation</i>.
    IEEE, 2013, doi:<a href="https://doi.org/10.1109/ICCV.2013.139">10.1109/ICCV.2013.139</a>.
  short: T. Kazmar, E. Kvon, A. Stark, C. Lampert, in:, IEEE, 2013.
conference:
  end_date: 2013-12-08
  location: Sydney, Australia
  name: 'ICCV: International Conference on Computer Vision'
  start_date: 2013-12-01
date_created: 2018-12-11T11:56:49Z
date_published: 2013-12-01T00:00:00Z
date_updated: 2021-01-12T06:56:35Z
day: '01'
department:
- _id: ChLa
doi: 10.1109/ICCV.2013.139
ec_funded: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: http://www.cv-foundation.org/openaccess/ICCV2013.py
month: '12'
oa: 1
oa_version: Submitted Version
project:
- _id: 2532554C-B435-11E9-9278-68D0E5697425
  call_identifier: FP7
  grant_number: '308036'
  name: Lifelong Learning of Visual Scene Understanding
publication_status: published
publisher: IEEE
publist_id: '4634'
quality_controlled: '1'
scopus_import: 1
status: public
title: Drosophila Embryo Stage Annotation using Label Propagation
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2013'
...
---
_id: '2825'
abstract:
- lang: eng
  text: 'We study the problem of maximum marginal prediction (MMP) in probabilistic
    graphical models, a task that occurs, for example, as the Bayes optimal decision
    rule under a Hamming loss. MMP is typically performed as a two-stage procedure:
    one estimates each variable''s marginal probability and then forms a prediction
    from the states of maximal probability. In this work we propose a simple yet effective
    technique for accelerating MMP when inference is sampling-based: instead of the
    above two-stage procedure we directly estimate the posterior probability of each
    decision variable. This allows us to identify the point of time when we are sufficiently
    certain about any individual decision. Whenever this is the case, we dynamically
    prune the variables we are confident about from the underlying factor graph. Consequently,
    at any time only samples of variables whose decision is still uncertain need to
    be created. Experiments in two prototypical scenarios, multi-label classification
    and image inpainting, show that adaptive sampling can drastically accelerate MMP
    without sacrificing prediction accuracy.'
author:
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Lampert C. Dynamic pruning of factor graphs for maximum marginal prediction.
    In: Vol 1. Neural Information Processing Systems; 2012:82-90.'
  apa: 'Lampert, C. (2012). Dynamic pruning of factor graphs for maximum marginal
    prediction (Vol. 1, pp. 82–90). Presented at the NIPS: Neural Information Processing
    Systems, Lake Tahoe, NV, United States: Neural Information Processing Systems.'
  chicago: Lampert, Christoph. “Dynamic Pruning of Factor Graphs for Maximum Marginal
    Prediction,” 1:82–90. Neural Information Processing Systems, 2012.
  ieee: 'C. Lampert, “Dynamic pruning of factor graphs for maximum marginal prediction,”
    presented at the NIPS: Neural Information Processing Systems, Lake Tahoe, NV,
    United States, 2012, vol. 1, pp. 82–90.'
  ista: 'Lampert C. 2012. Dynamic pruning of factor graphs for maximum marginal prediction.
    NIPS: Neural Information Processing Systems vol. 1, 82–90.'
  mla: Lampert, Christoph. <i>Dynamic Pruning of Factor Graphs for Maximum Marginal
    Prediction</i>. Vol. 1, Neural Information Processing Systems, 2012, pp. 82–90.
  short: C. Lampert, in:, Neural Information Processing Systems, 2012, pp. 82–90.
conference:
  end_date: 2012-12-06
  location: Lake Tahoe, NV, United States
  name: 'NIPS: Neural Information Processing Systems'
  start_date: 2012-12-03
date_created: 2018-12-11T11:59:48Z
date_published: 2012-12-01T00:00:00Z
date_updated: 2021-01-12T06:59:59Z
day: '01'
department:
- _id: ChLa
intvolume: '1'
language:
- iso: eng
month: '12'
oa_version: None
page: 82 - 90
publication_status: published
publisher: Neural Information Processing Systems
publist_id: '3975'
quality_controlled: '1'
scopus_import: 1
status: public
title: Dynamic pruning of factor graphs for maximum marginal prediction
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
volume: 1
year: '2012'
...
---
_id: '2915'
acknowledgement: 'The project receives funding from the European Community’s Seventh
  Framework Programme under grant agreement no. ICT-248273 GeRT.'
article_processing_charge: No
author:
- first_name: Oliver
  full_name: Kroemer, Oliver
  last_name: Kroemer
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
- first_name: Jan
  full_name: Peters, Jan
  last_name: Peters
citation:
  ama: 'Kroemer O, Lampert C, Peters J. Multi-modal learning for dynamic tactile sensing.
    In: Deutsches Zentrum für Luft und Raumfahrt; 2012.'
  apa: Kroemer, O., Lampert, C., &#38; Peters, J. (2012). Multi-modal learning for
    dynamic tactile sensing. Deutsches Zentrum für Luft und Raumfahrt.
  chicago: Kroemer, Oliver, Christoph Lampert, and Jan Peters. “Multi-Modal Learning
    for Dynamic Tactile Sensing.” Deutsches Zentrum für Luft und Raumfahrt, 2012.
  ieee: O. Kroemer, C. Lampert, and J. Peters, “Multi-modal learning for dynamic tactile
    sensing,” 2012.
  ista: Kroemer O, Lampert C, Peters J. 2012. Multi-modal learning for dynamic tactile
    sensing.
  mla: Kroemer, Oliver, et al. <i>Multi-Modal Learning for Dynamic Tactile Sensing</i>.
    Deutsches Zentrum für Luft und Raumfahrt, 2012.
  short: O. Kroemer, C. Lampert, J. Peters, in:, Deutsches Zentrum für Luft und Raumfahrt,
    2012.
date_created: 2018-12-11T12:00:19Z
date_published: 2012-10-11T00:00:00Z
date_updated: 2023-10-17T07:58:59Z
day: '11'
department:
- _id: ChLa
language:
- iso: eng
month: '10'
oa_version: None
publication_status: published
publisher: Deutsches Zentrum für Luft und Raumfahrt
publist_id: '3828'
quality_controlled: '1'
status: public
title: Multi-modal learning for dynamic tactile sensing
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2012'
...
---
_id: '3124'
abstract:
- lang: eng
  text: "We consider the problem of inference in a graphical model with binary variables.
    While in theory it is arguably preferable to compute marginal probabilities, in
    practice researchers often use MAP inference due to the availability of efficient
    discrete optimization algorithms. We bridge the gap between the two approaches
    by introducing the Discrete Marginals technique in which approximate marginals
    are obtained by minimizing an objective function with unary and pairwise terms
    over a discretized domain. This allows the use of techniques originally developed
    for MAP-MRF inference and learning. We explore two ways to set up the objective
    function - by discretizing the Bethe free energy and by learning it from training
    data. Experimental results show that for certain types of graphs a learned function
    can outperform the Bethe approximation. We also establish a link between the Bethe
    free energy and submodular functions."
alternative_title:
- Inferning 2012
author:
- first_name: Filip
  full_name: Korc, Filip
  id: 476A2FD6-F248-11E8-B48F-1D18A9856A87
  last_name: Korc
- first_name: Vladimir
  full_name: Kolmogorov, Vladimir
  id: 3D50B0BA-F248-11E8-B48F-1D18A9856A87
  last_name: Kolmogorov
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Korc F, Kolmogorov V, Lampert C. Approximating marginals using discrete energy
    minimization. In: ICML; 2012.'
  apa: 'Korc, F., Kolmogorov, V., &#38; Lampert, C. (2012). Approximating marginals
    using discrete energy minimization. Presented at the ICML: International Conference
    on Machine Learning, Edinburgh, Scotland: ICML.'
  chicago: Korc, Filip, Vladimir Kolmogorov, and Christoph Lampert. “Approximating
    Marginals Using Discrete Energy Minimization.” ICML, 2012.
  ieee: 'F. Korc, V. Kolmogorov, and C. Lampert, “Approximating marginals using discrete
    energy minimization,” presented at the ICML: International Conference on Machine
    Learning, Edinburgh, Scotland, 2012.'
  ista: 'Korc F, Kolmogorov V, Lampert C. 2012. Approximating marginals using discrete
    energy minimization. ICML: International Conference on Machine Learning, Inferning
    2012.'
  mla: Korc, Filip, et al. <i>Approximating Marginals Using Discrete Energy Minimization</i>.
    ICML, 2012.
  short: F. Korc, V. Kolmogorov, C. Lampert, in:, ICML, 2012.
conference:
  end_date: 2012-07-01
  location: Edinburgh, Scotland
  name: 'ICML: International Conference on Machine Learning'
  start_date: 2012-06-26
date_created: 2018-12-11T12:01:31Z
date_published: 2012-06-30T00:00:00Z
date_updated: 2023-02-23T12:24:24Z
day: '30'
ddc:
- '000'
department:
- _id: ChLa
- _id: VlKo
file:
- access_level: open_access
  checksum: 3d0d4246548c736857302aadb2ff5d15
  content_type: application/pdf
  creator: system
  date_created: 2018-12-12T10:11:34Z
  date_updated: 2020-07-14T12:46:00Z
  file_id: '4889'
  file_name: IST-2016-565-v1+1_DM-inferning2012.pdf
  file_size: 305836
  relation: main_file
file_date_updated: 2020-07-14T12:46:00Z
has_accepted_license: '1'
language:
- iso: eng
month: '06'
oa: 1
oa_version: Submitted Version
publication_status: published
publisher: ICML
publist_id: '3575'
pubrep_id: '565'
quality_controlled: '1'
related_material:
  record:
  - id: '5396'
    relation: later_version
    status: public
status: public
title: Approximating marginals using discrete energy minimization
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
year: '2012'
...
---
_id: '3125'
abstract:
- lang: eng
  text: We propose a new learning method to infer a mid-level feature representation
    that combines the advantage of semantic attribute representations with the higher
    expressive power of non-semantic features. The idea lies in augmenting an existing
    attribute-based representation with additional dimensions for which an autoencoder
    model is coupled with a large-margin principle. This construction allows a smooth
    transition between the zero-shot regime with no training example, the unsupervised
    regime with training examples but without class labels, and the supervised regime
    with training examples and with class labels. The resulting optimization problem
    can be solved efficiently, because several of the necessary steps have closed-form
    solutions. Through extensive experiments we show that the augmented representation
    achieves better results in terms of object categorization accuracy than the semantic
    representation alone.
alternative_title:
- LNCS
article_processing_charge: No
author:
- first_name: Viktoriia
  full_name: Sharmanska, Viktoriia
  id: 2EA6D09E-F248-11E8-B48F-1D18A9856A87
  last_name: Sharmanska
  orcid: 0000-0003-0192-9308
- first_name: Novi
  full_name: Quadrianto, Novi
  last_name: Quadrianto
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Sharmanska V, Quadrianto N, Lampert C. Augmented attribute representations.
    In: Vol 7576. Springer; 2012:242-255. doi:<a href="https://doi.org/10.1007/978-3-642-33715-4_18">10.1007/978-3-642-33715-4_18</a>'
  apa: 'Sharmanska, V., Quadrianto, N., &#38; Lampert, C. (2012). Augmented attribute
    representations (Vol. 7576, pp. 242–255). Presented at the ECCV: European Conference
    on Computer Vision, Florence, Italy: Springer. <a href="https://doi.org/10.1007/978-3-642-33715-4_18">https://doi.org/10.1007/978-3-642-33715-4_18</a>'
  chicago: Sharmanska, Viktoriia, Novi Quadrianto, and Christoph Lampert. “Augmented
    Attribute Representations,” 7576:242–55. Springer, 2012. <a href="https://doi.org/10.1007/978-3-642-33715-4_18">https://doi.org/10.1007/978-3-642-33715-4_18</a>.
  ieee: 'V. Sharmanska, N. Quadrianto, and C. Lampert, “Augmented attribute representations,”
    presented at the ECCV: European Conference on Computer Vision, Florence, Italy,
    2012, vol. 7576, no. PART 5, pp. 242–255.'
  ista: 'Sharmanska V, Quadrianto N, Lampert C. 2012. Augmented attribute representations.
    ECCV: European Conference on Computer Vision, LNCS, vol. 7576, 242–255.'
  mla: Sharmanska, Viktoriia, et al. <i>Augmented Attribute Representations</i>. Vol.
    7576, no. PART 5, Springer, 2012, pp. 242–55, doi:<a href="https://doi.org/10.1007/978-3-642-33715-4_18">10.1007/978-3-642-33715-4_18</a>.
  short: V. Sharmanska, N. Quadrianto, C. Lampert, in:, Springer, 2012, pp. 242–255.
conference:
  end_date: 2012-10-13
  location: Florence, Italy
  name: 'ECCV: European Conference on Computer Vision'
  start_date: 2012-10-07
date_created: 2018-12-11T12:01:32Z
date_published: 2012-10-01T00:00:00Z
date_updated: 2023-02-23T11:13:25Z
day: '01'
ddc:
- '000'
department:
- _id: ChLa
doi: 10.1007/978-3-642-33715-4_18
file:
- access_level: open_access
  checksum: bccdbe0663780d25a1e0524002b2d896
  content_type: application/pdf
  creator: dernst
  date_created: 2020-05-15T12:29:04Z
  date_updated: 2020-07-14T12:46:00Z
  file_id: '7861'
  file_name: 2012_ECCV_Sharmanska.pdf
  file_size: 6073897
  relation: main_file
file_date_updated: 2020-07-14T12:46:00Z
has_accepted_license: '1'
intvolume: '7576'
issue: PART 5
language:
- iso: eng
month: '10'
oa: 1
oa_version: Submitted Version
page: 242 - 255
publication_status: published
publisher: Springer
publist_id: '3574'
quality_controlled: '1'
scopus_import: 1
status: public
title: Augmented attribute representations
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 7576
year: '2012'
...
---
_id: '3126'
abstract:
- lang: eng
  text: "In this work we propose a new information-theoretic clustering algorithm
    that infers cluster memberships by direct optimization of a non-parametric mutual
    information estimate between data distribution and cluster assignment. Although
    the optimization objective has a solid theoretical foundation, it is hard to optimize.
    We propose an approximate optimization formulation that leads to an efficient
    algorithm with low runtime complexity. The algorithm has a single free parameter,
    the number of clusters to find. We demonstrate superior performance on several
    synthetic and real datasets."
alternative_title:
- LNCS
author:
- first_name: Andreas
  full_name: Müller, Andreas
  last_name: Müller
- first_name: Sebastian
  full_name: Nowozin, Sebastian
  last_name: Nowozin
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Müller A, Nowozin S, Lampert C. Information theoretic clustering using minimal
    spanning trees. In: Vol 7476. Springer; 2012:205-215. doi:<a href="https://doi.org/10.1007/978-3-642-32717-9_21">10.1007/978-3-642-32717-9_21</a>'
  apa: 'Müller, A., Nowozin, S., &#38; Lampert, C. (2012). Information theoretic clustering
    using minimal spanning trees (Vol. 7476, pp. 205–215). Presented at the DAGM:
    German Association For Pattern Recognition, Graz, Austria: Springer. <a href="https://doi.org/10.1007/978-3-642-32717-9_21">https://doi.org/10.1007/978-3-642-32717-9_21</a>'
  chicago: Müller, Andreas, Sebastian Nowozin, and Christoph Lampert. “Information
    Theoretic Clustering Using Minimal Spanning Trees,” 7476:205–15. Springer, 2012.
    <a href="https://doi.org/10.1007/978-3-642-32717-9_21">https://doi.org/10.1007/978-3-642-32717-9_21</a>.
  ieee: 'A. Müller, S. Nowozin, and C. Lampert, “Information theoretic clustering
    using minimal spanning trees,” presented at the DAGM: German Association For Pattern
    Recognition, Graz, Austria, 2012, vol. 7476, pp. 205–215.'
  ista: 'Müller A, Nowozin S, Lampert C. 2012. Information theoretic clustering using
    minimal spanning trees. DAGM: German Association For Pattern Recognition, LNCS,
    vol. 7476, 205–215.'
  mla: Müller, Andreas, et al. <i>Information Theoretic Clustering Using Minimal Spanning
    Trees</i>. Vol. 7476, Springer, 2012, pp. 205–15, doi:<a href="https://doi.org/10.1007/978-3-642-32717-9_21">10.1007/978-3-642-32717-9_21</a>.
  short: A. Müller, S. Nowozin, C. Lampert, in:, Springer, 2012, pp. 205–215.
conference:
  end_date: 2012-08-31
  location: Graz, Austria
  name: 'DAGM: German Association For Pattern Recognition'
  start_date: 2012-08-28
date_created: 2018-12-11T12:01:32Z
date_published: 2012-08-14T00:00:00Z
date_updated: 2021-01-12T07:41:14Z
day: '14'
department:
- _id: ChLa
doi: 10.1007/978-3-642-32717-9_21
intvolume: '7476'
language:
- iso: eng
month: '08'
oa_version: None
page: 205 - 215
publication_status: published
publisher: Springer
publist_id: '3573'
quality_controlled: '1'
scopus_import: 1
status: public
title: Information theoretic clustering using minimal spanning trees
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
volume: 7476
year: '2012'
...
---
_id: '3127'
abstract:
- lang: eng
  text: "When searching for characteristic subpatterns in potentially noisy graph
    data, it appears self-evident that having multiple observations would be better
    than having just one. However, it turns out that the inconsistencies introduced
    when different graph instances have different edge sets pose a serious challenge.
    In this work we address this challenge for the problem of finding maximum weighted
    cliques. We introduce the concept of the most persistent soft-clique. This
    is a subset of vertices that 1) is almost fully or at least densely connected,
    2) occurs in all or almost all graph instances, and 3) has the maximum weight.
    We present a measure of clique-ness that essentially counts the number of edges
    missing to make a subset of vertices into a clique. With this measure, we show
    that the problem of finding the most persistent soft-clique can be cast either
    as: a) a max-min two-person game optimization problem, or b) a min-min soft-margin
    optimization problem. Both formulations lead to the same solution when a partial
    Lagrangian method is used to solve the optimization problems. By experiments on
    synthetic data and on real social network data, we show that the proposed method
    reliably finds soft cliques in graph data, even if the data is distorted by
    random noise or unreliable observations."
article_processing_charge: No
author:
- first_name: Novi
  full_name: Quadrianto, Novi
  last_name: Quadrianto
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
- first_name: Chao
  full_name: Chen, Chao
  id: 3E92416E-F248-11E8-B48F-1D18A9856A87
  last_name: Chen
citation:
  ama: 'Quadrianto N, Lampert C, Chen C. The most persistent soft-clique in a set
    of sampled graphs. In: <i>Proceedings of the 29th International Conference on
    Machine Learning</i>. ML Research Press; 2012:211-218.'
  apa: 'Quadrianto, N., Lampert, C., &#38; Chen, C. (2012). The most persistent soft-clique
    in a set of sampled graphs. In <i>Proceedings of the 29th International Conference
    on Machine Learning</i> (pp. 211–218). Edinburgh, United Kingdom: ML Research
    Press.'
  chicago: Quadrianto, Novi, Christoph Lampert, and Chao Chen. “The Most Persistent
    Soft-Clique in a Set of Sampled Graphs.” In <i>Proceedings of the 29th International
    Conference on Machine Learning</i>, 211–18. ML Research Press, 2012.
  ieee: N. Quadrianto, C. Lampert, and C. Chen, “The most persistent soft-clique in
    a set of sampled graphs,” in <i>Proceedings of the 29th International Conference
    on Machine Learning</i>, Edinburgh, United Kingdom, 2012, pp. 211–218.
  ista: 'Quadrianto N, Lampert C, Chen C. 2012. The most persistent soft-clique in
    a set of sampled graphs. Proceedings of the 29th International Conference on Machine
    Learning. ICML: International Conference on Machine Learning, 211–218.'
  mla: Quadrianto, Novi, et al. “The Most Persistent Soft-Clique in a Set of Sampled
    Graphs.” <i>Proceedings of the 29th International Conference on Machine Learning</i>,
    ML Research Press, 2012, pp. 211–18.
  short: N. Quadrianto, C. Lampert, C. Chen, in:, Proceedings of the 29th International
    Conference on Machine Learning, ML Research Press, 2012, pp. 211–218.
conference:
  end_date: 2012-07-01
  location: Edinburgh, United Kingdom
  name: 'ICML: International Conference on Machine Learning'
  start_date: 2012-06-26
date_created: 2018-12-11T12:01:33Z
date_published: 2012-06-01T00:00:00Z
date_updated: 2023-10-17T11:55:06Z
day: '01'
department:
- _id: ChLa
- _id: HeEd
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: http://arxiv.org/abs/1206.4652
month: '06'
oa: 1
oa_version: Preprint
page: 211-218
publication: Proceedings of the 29th International Conference on Machine Learning
publication_status: published
publisher: ML Research Press
publist_id: '3572'
quality_controlled: '1'
scopus_import: '1'
status: public
title: The most persistent soft-clique in a set of sampled graphs
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2012'
...
