PUMA publications for /user/leonkokkoliadis/visus:bruhnashttps://puma.ub.uni-stuttgart.de/user/leonkokkoliadis/visus:bruhnasPUMA RSS feed for /user/leonkokkoliadis/visus:bruhnas2024-03-29T07:45:24+01:00Visual Quality Assessment for Motion Compensated Frame Interpolationhttps://puma.ub.uni-stuttgart.de/bibtex/25bb23b7440c02c044df312a30ed36a75/leonkokkoliadisleonkokkoliadis2020-07-06T12:54:02+02:002019 A05 B04 sfbtrr161 visus:bruhnas visus:maurerdl <span data-person-type="author" class="authorEditorList "><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Hui Men" itemprop="url" href="/person/14d902f3be4a78cb17926363a1cc2e2dd/author/0"><span itemprop="name">H. Men</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Hanhe Lin" itemprop="url" href="/person/14d902f3be4a78cb17926363a1cc2e2dd/author/1"><span itemprop="name">H. Lin</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Vlad Hosu" itemprop="url" href="/person/14d902f3be4a78cb17926363a1cc2e2dd/author/2"><span itemprop="name">V. Hosu</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Daniel Maurer" itemprop="url" href="/person/14d902f3be4a78cb17926363a1cc2e2dd/author/3"><span itemprop="name">D. Maurer</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Andrés Bruhn" itemprop="url" href="/person/14d902f3be4a78cb17926363a1cc2e2dd/author/4"><span itemprop="name">A. Bruhn</span></a></span>, </span> and <span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Dietmar Saupe" itemprop="url" href="/person/14d902f3be4a78cb17926363a1cc2e2dd/author/5"><span itemprop="name">D. Saupe</span></a></span></span>. 
</span><span class="additional-entrytype-information"><span itemtype="http://schema.org/Book" itemscope="itemscope" itemprop="isPartOf"><em><span itemprop="name">Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX)</span>, </em></span><em>page <span itemprop="pagination">1-6</span>. </em><em><span itemprop="publisher">IEEE</span>, </em>(<em><span>2019<meta content="2019" itemprop="datePublished"/></span></em>)</span>Mon Jul 06 12:54:02 CEST 2020Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX)1-6Visual Quality Assessment for Motion Compensated Frame Interpolation20192019 A05 B04 sfbtrr161 visus:bruhnas visus:maurerdl Current benchmarks for optical flow algorithms evaluate the estimation quality by comparing their predicted flow field with the ground truth, and additionally may compare interpolated frames, based on these predictions, with the correct frames from the actual image sequences. For the latter comparisons, objective measures such as mean square errors are applied. However, for applications like image interpolation, the expected user's quality of experience cannot be fully deduced from such simple quality measures. Therefore, we conducted a subjective quality assessment study by crowdsourcing for the interpolated images provided in one of the optical flow benchmarks, the Middlebury benchmark. We used paired comparisons with forced choice and reconstructed absolute quality scale values according to Thurstone's model using the classical least squares method. The results give rise to a re-ranking of 141 participating algorithms w.r.t. visual quality of interpolated frames mostly based on optical flow estimation. 
Our re-ranking result shows the necessity of visual quality assessment as another evaluation metric for optical flow and frame interpolation benchmarks.Visual Quality Assessment for Interpolated Slow-Motion Videos Based on a Novel Databasehttps://puma.ub.uni-stuttgart.de/bibtex/29e2ba7ed4ac121e960caf0da5449f762/leonkokkoliadisleonkokkoliadis2020-06-29T11:09:24+02:002020 A05 B04 sfbtrr161 visus:bruhnas <span data-person-type="author" class="authorEditorList "><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="H. Men" itemprop="url" href="/person/1c0e0bb7000ad6ae8cdb7bed74fd448a5/author/0"><span itemprop="name">H. Men</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="V. Hosu" itemprop="url" href="/person/1c0e0bb7000ad6ae8cdb7bed74fd448a5/author/1"><span itemprop="name">V. Hosu</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="H. Lin" itemprop="url" href="/person/1c0e0bb7000ad6ae8cdb7bed74fd448a5/author/2"><span itemprop="name">H. Lin</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="A. Bruhn" itemprop="url" href="/person/1c0e0bb7000ad6ae8cdb7bed74fd448a5/author/3"><span itemprop="name">A. Bruhn</span></a></span>, </span> and <span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="D. Saupe" itemprop="url" href="/person/1c0e0bb7000ad6ae8cdb7bed74fd448a5/author/4"><span itemprop="name">D. Saupe</span></a></span></span>. </span><span class="additional-entrytype-information"><span itemtype="http://schema.org/Book" itemscope="itemscope" itemprop="isPartOf"><em><span itemprop="name">Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX)</span>, </em></span><em>page <span itemprop="pagination">1-6</span>. 
</em>(<em><span>2020<meta content="2020" itemprop="datePublished"/></span></em>)</span>Mon Jun 29 11:09:24 CEST 2020Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX)1-6Visual Quality Assessment for Interpolated Slow-Motion Videos Based on a Novel Database20202020 A05 B04 sfbtrr161 visus:bruhnas Professional video editing tools can generate slow-motion video by interpolating frames from video recorded at a standard frame rate. Thereby, the perceptual quality of such interpolated slow-motion videos strongly depends on the underlying interpolation techniques. We built a novel benchmark database that is specifically tailored for interpolated slow-motion videos (KoSMo-1k). It consists of 1,350 interpolated video sequences, from 30 different content sources, along with their subjective quality ratings from up to ten subjective comparisons per video pair. Moreover, we evaluated the performance of twelve existing full-reference (FR) image/video quality assessment (I/VQA) methods on the benchmark. In this way, we are able to show that specifically tailored quality assessment methods for interpolated slow-motion videos are needed, since the evaluated methods – despite their good performance on real-time video databases – do not give satisfying results when it comes to frame interpolation.Visual Analytics and Annotation of Pervasive Eye Tracking Videohttps://puma.ub.uni-stuttgart.de/bibtex/29edf01db76416efe2b79053c0977f842/leonkokkoliadisleonkokkoliadis2020-06-26T13:24:50+02:002020 A07 B01 B05 sfbtrr161 visus:bruhnas visus:bullinas visus:kurzhakn visus:rodrigns visus:weiskopf <span data-person-type="author" class="authorEditorList "><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Kuno Kurzhals" itemprop="url" href="/person/1d47d14f505f998258e45400d30ee66f3/author/0"><span itemprop="name">K. 
Kurzhals</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Nils Rodrigues" itemprop="url" href="/person/1d47d14f505f998258e45400d30ee66f3/author/1"><span itemprop="name">N. Rodrigues</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Maurice Koch" itemprop="url" href="/person/1d47d14f505f998258e45400d30ee66f3/author/2"><span itemprop="name">M. Koch</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Michael Stoll" itemprop="url" href="/person/1d47d14f505f998258e45400d30ee66f3/author/3"><span itemprop="name">M. Stoll</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Andres Bruhn" itemprop="url" href="/person/1d47d14f505f998258e45400d30ee66f3/author/4"><span itemprop="name">A. Bruhn</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Andreas Bulling" itemprop="url" href="/person/1d47d14f505f998258e45400d30ee66f3/author/5"><span itemprop="name">A. Bulling</span></a></span>, </span> and <span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Daniel Weiskopf" itemprop="url" href="/person/1d47d14f505f998258e45400d30ee66f3/author/6"><span itemprop="name">D. Weiskopf</span></a></span></span>. </span><span class="additional-entrytype-information"><span itemtype="http://schema.org/Book" itemscope="itemscope" itemprop="isPartOf"><em><span itemprop="name">Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA)</span>, </em></span><em>page <span itemprop="pagination">16:1-16:9</span>. 
</em><em><span itemprop="publisher">ACM</span>, </em>(<em><span>2020<meta content="2020" itemprop="datePublished"/></span></em>)</span>Fri Jun 26 13:24:50 CEST 2020Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA)16:1-16:9Visual Analytics and Annotation of Pervasive Eye Tracking Video20202020 A07 B01 B05 sfbtrr161 visus:bruhnas visus:bullinas visus:kurzhakn visus:rodrigns visus:weiskopf We propose a new technique for visual analytics and annotation of long-term pervasive eye tracking data for which a combined analysis of gaze and egocentric video is necessary. Our approach enables two important tasks for such data for hour-long videos from individual participants: (1) efficient annotation and (2) direct interpretation of the results. Exemplary time spans can be selected by the user and are then used as a query that initiates a fuzzy search of similar time spans based on gaze and video features. In an iterative refinement loop, the query interface then provides suggestions for the importance of individual features to improve the search results. A multi-layered timeline visualization shows an overview of annotated time spans. We demonstrate the efficiency of our approach for analyzing activities in about seven hours of video in a case study and discuss feedback on our approach from novices and experts performing the annotation task.Variational Large Displacement Optical Flow Without Feature Matches.https://puma.ub.uni-stuttgart.de/bibtex/29a2ca2e973cadc1df6401241a570ab8e/leonkokkoliadisleonkokkoliadis2020-03-05T11:50:02+01:002017 B04 sfbtrr161 visus:bruhnas visus:maurerdl <span data-person-type="author" class="authorEditorList "><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Michael Stoll" itemprop="url" href="/person/1dca26b8b2c4ff23c9744a8d72909180f/author/0"><span itemprop="name">M. 
Stoll</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Daniel Maurer" itemprop="url" href="/person/1dca26b8b2c4ff23c9744a8d72909180f/author/1"><span itemprop="name">D. Maurer</span></a></span>, </span> and <span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Andrés Bruhn" itemprop="url" href="/person/1dca26b8b2c4ff23c9744a8d72909180f/author/2"><span itemprop="name">A. Bruhn</span></a></span></span>. </span><span class="additional-entrytype-information"><span itemtype="http://schema.org/Book" itemscope="itemscope" itemprop="isPartOf"><em><span itemprop="name">Energy Minimization Methods in Computer Vision and Pattern Recognition. EMMCVPR 2017. Lecture Notes in Computer Science</span>, </em></span><em> 10746, </em><em>page <span itemprop="pagination">79-92</span>. </em><em><span itemprop="publisher">Springer International Publishing</span>, </em>(<em><span>2017<meta content="2017" itemprop="datePublished"/></span></em>)</span>Thu Mar 05 11:50:02 CET 2020 Energy Minimization Methods in Computer Vision and Pattern Recognition. EMMCVPR 2017. Lecture Notes in Computer Science79-92Variational Large Displacement Optical Flow Without Feature Matches.1074620172017 B04 sfbtrr161 visus:bruhnas visus:maurerdl The optical flow within a scene can be an arbitrarily complex composition of motion patterns that typically differ regarding their scale. Hence, using a single algorithm with a single set of parameters is often not sufficient to capture the variety of these motion patterns. In particular, the estimation of large displacements of small objects poses a problem. In order to cope with this problem, many recent methods estimate the optical flow by a fusion of flow candidates obtained either from different algorithms or from the same algorithm using different parameters. 
This, however, typically results in a pipeline of methods for estimating and fusing the candidate flows, each requiring an individual model with a dedicated solution strategy. In this paper, we investigate what results can be achieved with a pure variational approach based on a standard coarse-to-fine optimization. To this end, we propose a novel variational method for the simultaneous estimation and fusion of flow candidates. By jointly using multiple smoothness weights within a single energy functional, we are able to capture different motion patterns and hence to estimate large displacements even without additional feature matches. In the same functional, an intrinsic model-based fusion allows us to integrate all these candidates into a single flow field, combining sufficiently smooth overall motion with locally large displacements. Experiments on large displacement sequences and the Sintel benchmark demonstrate the feasibility of our approach and show improved results compared to a single-smoothness baseline method.Structure-from-motion-aware PatchMatch for Adaptive Optical Flow Estimationhttps://puma.ub.uni-stuttgart.de/bibtex/22fac1a2e6e4573cc4ce4e56012516008/leonkokkoliadisleonkokkoliadis2020-03-05T11:47:01+01:002018 B04 sfbtrr161 visus:bruhnas visus:maurerdl <span data-person-type="author" class="authorEditorList "><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Daniel Maurer" itemprop="url" href="/person/1939cd39d1657ff8b9e8aae4d030ede61/author/0"><span itemprop="name">D. Maurer</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Nico Marniok" itemprop="url" href="/person/1939cd39d1657ff8b9e8aae4d030ede61/author/1"><span itemprop="name">N. 
Marniok</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Bastian Goldluecke" itemprop="url" href="/person/1939cd39d1657ff8b9e8aae4d030ede61/author/2"><span itemprop="name">B. Goldluecke</span></a></span>, </span> and <span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Andrés Bruhn" itemprop="url" href="/person/1939cd39d1657ff8b9e8aae4d030ede61/author/3"><span itemprop="name">A. Bruhn</span></a></span></span>. </span><span class="additional-entrytype-information"><span itemtype="http://schema.org/Book" itemscope="itemscope" itemprop="isPartOf"><em><span itemprop="name">Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science</span>, </em></span><em> 11212, </em><em>page <span itemprop="pagination">575-592</span>. </em><em><span itemprop="publisher">Springer International Publishing</span>, </em>(<em><span>2018<meta content="2018" itemprop="datePublished"/></span></em>)</span>Thu Mar 05 11:47:01 CET 2020Computer Vision – ECCV 2018. ECCV 2018. Lecture Notes in Computer Science575-592Structure-from-motion-aware PatchMatch for Adaptive Optical Flow Estimation1121220182018 B04 sfbtrr161 visus:bruhnas visus:maurerdl Many recent energy-based methods for optical flow estimation rely on a good initialization that is typically provided by some kind of feature matching. So far, however, these initial matching approaches are rather general: They do not incorporate any additional information that could help to improve the accuracy or the robustness of the estimation. In particular, they do not exploit potential cues on the camera poses and the thereby induced rigid motion of the scene. In the present paper, we tackle this problem. 
To this end, we propose a novel structure-from-motion-aware PatchMatch approach that, in contrast to existing matching techniques, combines two hierarchical feature matching methods: a recent two-frame PatchMatch approach for optical flow estimation (general motion) and a specifically tailored three-frame PatchMatch approach for rigid scene reconstruction (SfM). While the motion PatchMatch serves as baseline with good accuracy, the SfM counterpart takes over at occlusions and other regions with insufficient information. Experiments with our novel SfM-aware PatchMatch approach demonstrate its usefulness. They not only show excellent results for all major benchmarks (KITTI 2012/2015, MPI Sintel), but also improvements up to 50% compared to a PatchMatch approach without structure information.Combining Shape from Shading and Stereo: A Joint Variational Method for Estimating Depth, Illumination and Albedohttps://puma.ub.uni-stuttgart.de/bibtex/2fcf19bdcb324265a77d726ec973881b0/leonkokkoliadisleonkokkoliadis2020-03-05T11:43:31+01:002018 B04 sfbtrr161 visus:bruhnas visus:maurerdl <span data-person-type="author" class="authorEditorList "><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Daniel Maurer" itemprop="url" href="/person/1f3c722deb85c743ae124824ced748d18/author/0"><span itemprop="name">D. Maurer</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Yong Chul Ju" itemprop="url" href="/person/1f3c722deb85c743ae124824ced748d18/author/1"><span itemprop="name">Y. Ju</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Michael Breuß" itemprop="url" href="/person/1f3c722deb85c743ae124824ced748d18/author/2"><span itemprop="name">M. 
Breuß</span></a></span>, </span> and <span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Andrés Bruhn" itemprop="url" href="/person/1f3c722deb85c743ae124824ced748d18/author/3"><span itemprop="name">A. Bruhn</span></a></span></span>. </span><span class="additional-entrytype-information"><span itemtype="http://schema.org/PublicationIssue" itemscope="itemscope" itemprop="isPartOf"><em><span itemprop="journal">International Journal of Computer Vision</span>, </em> <em><span itemtype="http://schema.org/PublicationVolume" itemscope="itemscope" itemprop="isPartOf"><span itemprop="volumeNumber">126 </span></span>(<span itemprop="issueNumber">12</span>):
<span itemprop="pagination">1342-1366</span></em> </span>(<em><span>2018<meta content="2018" itemprop="datePublished"/></span></em>)</span>Thu Mar 05 11:43:31 CET 2020International Journal of Computer Vision121342-1366Combining Shape from Shading and Stereo: A Joint Variational Method for Estimating Depth, Illumination and Albedo12620182018 B04 sfbtrr161 visus:bruhnas visus:maurerdl Shape from shading (SfS) and stereo are two fundamentally different strategies for image-based 3-D reconstruction. While approaches for SfS infer the depth solely from pixel intensities, methods for stereo are based on a matching process that establishes correspondences across images. This difference in approaching the reconstruction problem yields complementary advantages that are worth combining. So far, however, most “joint” approaches are based on an initial stereo mesh that is subsequently refined using shading information. In this paper we follow a completely different approach. We propose a joint variational method that combines both cues within a single minimisation framework. To this end, we fuse a Lambertian SfS approach with a robust stereo model and supplement the resulting energy functional with a detail-preserving anisotropic second-order smoothness term. Moreover, we extend the resulting model in such a way that it jointly estimates depth, albedo and illumination. This in turn makes the approach applicable to objects with non-uniform albedo as well as to scenes with unknown illumination. 
Experiments for synthetic and real-world images demonstrate the benefits of our combined approach: They not only show that our method is capable of generating very detailed reconstructions, but also that joint approaches are feasible in practice.A Comparison of Isotropic and Anisotropic Second Order Regularisers for Optical Flowhttps://puma.ub.uni-stuttgart.de/bibtex/2fdaa3935973fb3fe4a7d3d070394c91d/leonkokkoliadisleonkokkoliadis2020-03-05T11:40:15+01:002017 B04 sfbtrr161 visus:bruhnas visus:maurerdl <span data-person-type="author" class="authorEditorList "><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Daniel Maurer" itemprop="url" href="/person/156b06cb0b3b2ce7cc3931a7aaf300583/author/0"><span itemprop="name">D. Maurer</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Michael Stoll" itemprop="url" href="/person/156b06cb0b3b2ce7cc3931a7aaf300583/author/1"><span itemprop="name">M. Stoll</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Sebastian Volz" itemprop="url" href="/person/156b06cb0b3b2ce7cc3931a7aaf300583/author/2"><span itemprop="name">S. Volz</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Patrick Gairing" itemprop="url" href="/person/156b06cb0b3b2ce7cc3931a7aaf300583/author/3"><span itemprop="name">P. Gairing</span></a></span>, </span> and <span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Andrés Bruhn" itemprop="url" href="/person/156b06cb0b3b2ce7cc3931a7aaf300583/author/4"><span itemprop="name">A. Bruhn</span></a></span></span>. </span><span class="additional-entrytype-information"><em> 10302, </em><em>page <span itemprop="pagination">537-549</span>. 
</em><em><span itemprop="publisher">Springer</span>, </em><span itemtype="http://schema.org/Book" itemscope="itemscope" itemprop="isPartOf"></span>(<em><span>2017<meta content="2017" itemprop="datePublished"/></span></em>)</span>Thu Mar 05 11:40:15 CET 2020Scale Space and Variational Methods in Computer Vision. SSVM 2017. Lecture Notes in Computer Science537-549A Comparison of Isotropic and Anisotropic Second Order Regularisers for Optical Flow1030220172017 B04 sfbtrr161 visus:bruhnas visus:maurerdl Order-adaptive Regularisation for Variational Optical Flow: Global, Local and in Between.https://puma.ub.uni-stuttgart.de/bibtex/2a95de80f024c76cd0cb93acc3d184d67/leonkokkoliadisleonkokkoliadis2020-03-05T11:34:35+01:002017 B04 sfbtrr161 visus:bruhnas visus:maurerdl <span data-person-type="author" class="authorEditorList "><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Daniel Maurer" itemprop="url" href="/person/198fc6b7da15d016500f836bb0ce29bd7/author/0"><span itemprop="name">D. Maurer</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Michael Stoll" itemprop="url" href="/person/198fc6b7da15d016500f836bb0ce29bd7/author/1"><span itemprop="name">M. Stoll</span></a></span>, </span> and <span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Andrés Bruhn" itemprop="url" href="/person/198fc6b7da15d016500f836bb0ce29bd7/author/2"><span itemprop="name">A. Bruhn</span></a></span></span>. </span><span class="additional-entrytype-information"><span itemtype="http://schema.org/Book" itemscope="itemscope" itemprop="isPartOf"><em><span itemprop="name">Scale Space and Variational Methods in Computer Vision. SSVM 2017. Lecture Notes in Computer Science</span>, </em></span><em> 10302, </em><em>page <span itemprop="pagination">550-562</span>. 
</em><em><span itemprop="publisher">Springer International Publishing</span>, </em>(<em><span>2017<meta content="2017" itemprop="datePublished"/></span></em>)</span>Thu Mar 05 11:34:35 CET 2020Scale Space and Variational Methods in Computer Vision. SSVM 2017. Lecture Notes in Computer Science550-562Order-adaptive Regularisation for Variational Optical Flow: Global, Local and in Between.1030220172017 B04 sfbtrr161 visus:bruhnas visus:maurerdl Recent approaches for variational motion estimation typically either rely on first or second order regularisation strategies. While first order strategies are more appropriate for scenes with fronto-parallel motion, second order constraints are superior when it comes to the estimation of affine flow fields. Since using the wrong regularisation order may lead to a significant deterioration of the results, it is surprising that there has not been much effort in the literature so far to determine this order automatically. In our work, we address the aforementioned problem in two ways. (i) First, we discuss two anisotropic smoothness terms of first and second order, respectively, that share important structural properties and that are thus particularly suited for being combined within an order-adaptive variational framework. (ii) Secondly, based on these two smoothness terms, we develop four different variational methods and with them four different strategies for adaptively selecting the regularisation order: a global and a local strategy based on half-quadratic regularisation, a non-local approach that relies on neighbourhood information, and a region based method using level sets. Experiments on recent benchmarks show the benefits of each of the strategies. 
Moreover, they demonstrate that adaptively combining different regularisation orders not only allows us to outperform single-order strategies but also to obtain advantages beyond those of a frame-wise selection.Order-adaptive and Illumination-aware Variational Optical Flow Refinementhttps://puma.ub.uni-stuttgart.de/bibtex/22225cc7b043a8a9c42b6735e7e98aba4/leonkokkoliadisleonkokkoliadis2020-03-05T11:30:32+01:002017 B04 sfbtrr161 visus:bruhnas visus:maurerdl <span data-person-type="author" class="authorEditorList "><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Daniel Maurer" itemprop="url" href="/person/15f0cd3bceb9a38d30e5867e3eed6d5be/author/0"><span itemprop="name">D. Maurer</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Andrés Bruhn" itemprop="url" href="/person/15f0cd3bceb9a38d30e5867e3eed6d5be/author/1"><span itemprop="name">A. Bruhn</span></a></span>, </span> and <span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Michael Stoll" itemprop="url" href="/person/15f0cd3bceb9a38d30e5867e3eed6d5be/author/2"><span itemprop="name">M. Stoll</span></a></span></span>. </span><span class="additional-entrytype-information"><span itemtype="http://schema.org/Book" itemscope="itemscope" itemprop="isPartOf"><em><span itemprop="name">Proceedings of the British Machine Vision Conference (BMVC)</span>, </em></span><em>page <span itemprop="pagination">150:1-150:13</span>. 
</em><em><span itemprop="publisher">BMVA Press</span>, </em>(<em><span>2017<meta content="2017" itemprop="datePublished"/></span></em>)</span>Thu Mar 05 11:30:32 CET 2020Proceedings of the British Machine Vision Conference (BMVC)150:1-150:13Order-adaptive and Illumination-aware Variational Optical Flow Refinement20172017 B04 sfbtrr161 visus:bruhnas visus:maurerdl Variational approaches form an inherent part of most state-of-the-art pipeline approaches for optical flow computation. As the final step of the pipeline, the aim is to refine an initial flow field typically obtained by inpainting non-dense matches in order to provide highly accurate results. In this paper, we take advantage of recent improvements in variational optical flow estimation to construct an advanced variational model for this final refinement step. By combining an illumination-aware data term with an order-adaptive smoothness term, we obtain a highly flexible model that is able to cope well with a broad variety of different scenarios. Moreover, we propose the use of an additional reduced coarse-to-fine scheme instead of an exclusive initialisation scheme, which not only allows us to refine the initialisation but also to correct larger erroneous displacements. Experiments on recent optical flow benchmarks show the advantages of the advanced variational refinement and the reduced coarse-to-fine scheme.Order-Adaptive and Illumination-Aware Variational Optical Flow RefinementDirectional Priors for Multi-Frame Optical Flowhttps://puma.ub.uni-stuttgart.de/bibtex/23351fb1e60e55e46a6ea58a263b12e07/leonkokkoliadisleonkokkoliadis2020-03-05T11:26:31+01:002018 B04 sfbtrr161 visus:bruhnas visus:maurerdl <span data-person-type="author" class="authorEditorList "><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Daniel Maurer" itemprop="url" href="/person/137673aeb097223b9e1c62202f0c68b00/author/0"><span itemprop="name">D. 
Maurer</span></a></span>, </span><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Michael Stoll" itemprop="url" href="/person/137673aeb097223b9e1c62202f0c68b00/author/1"><span itemprop="name">M. Stoll</span></a></span>, </span> and <span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Andrés Bruhn" itemprop="url" href="/person/137673aeb097223b9e1c62202f0c68b00/author/2"><span itemprop="name">A. Bruhn</span></a></span></span>. </span><span class="additional-entrytype-information"><span itemtype="http://schema.org/Book" itemscope="itemscope" itemprop="isPartOf"><em><span itemprop="name">Proceedings of the British Machine Vision Conference (BMVC)</span>, </em></span><em>page <span itemprop="pagination">106:1-106:13</span>. </em><em><span itemprop="publisher">BMVA Press</span>, </em>(<em><span>2018<meta content="2018" itemprop="datePublished"/></span></em>)</span>Thu Mar 05 11:26:31 CET 2020Proceedings of the British Machine Vision Conference (BMVC)conf/bmvc/2018106:1-106:13Directional Priors for Multi-Frame Optical Flow20182018 B04 sfbtrr161 visus:bruhnas visus:maurerdl Pipeline approaches that interpolate and refine an initial set of point correspondences have recently shown a good performance in the field of optical flow estimation. However, so far, these methods are typically restricted to two frames, which makes exploiting temporal information difficult. In this paper, we show how such pipeline approaches can be extended to the temporal domain and how directional constraints can be incorporated to further improve the estimation. To this end, we not only suggest to exploit temporal information in the prefiltering step, we also propose a trajectorial refinement method that lifts successful concepts of recent variational two-frame methods to the multi-frame domain. Experiments demonstrate the usefulness of our pipeline approach. 
They not only show good results in general, they also demonstrate the clear benefits of using multiple frames and of imposing directional constraints on the prefiltering step and the refinement.ProFlow: Learning to Predict Optical Flowhttps://puma.ub.uni-stuttgart.de/bibtex/2ba7cdd0a4c2085b28cca9b0c45673de0/leonkokkoliadisleonkokkoliadis2020-03-05T11:22:22+01:002018 B04 sfbtrr161 visus:bruhnas visus:maurerdl <span data-person-type="author" class="authorEditorList "><span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Daniel Maurer" itemprop="url" href="/person/14b4a7d47c546330914710e314db044f0/author/0"><span itemprop="name">D. Maurer</span></a></span>, </span> and <span><span itemtype="http://schema.org/Person" itemscope="itemscope" itemprop="author"><a title="Andrés Bruhn" itemprop="url" href="/person/14b4a7d47c546330914710e314db044f0/author/1"><span itemprop="name">A. Bruhn</span></a></span></span>. </span><span class="additional-entrytype-information"><span itemtype="http://schema.org/Book" itemscope="itemscope" itemprop="isPartOf"><em><span itemprop="name">Proceedings of the British Machine Vision Conference (BMVC)</span>, </em></span><em> 86:1-86:13, </em><em><span itemprop="publisher">BMVA Press</span>, </em>(<em><span>2018<meta content="2018" itemprop="datePublished"/></span></em>)</span>Thu Mar 05 11:22:22 CET 2020Proceedings of the British Machine Vision Conference (BMVC)ProFlow: Learning to Predict Optical Flow86:1-86:1320182018 B04 sfbtrr161 visus:bruhnas visus:maurerdl Temporal coherence is a valuable source of information in the context of optical flow estimation. However, finding a suitable motion model to leverage this information is a non-trivial task. In this paper we propose an unsupervised online learning approach based on a convolutional neural network (CNN) that estimates such a motion model individually for each frame. 
By relating forward and backward motion, these learned models not only allow us to infer valuable motion information based on the backward flow, they also help to improve the performance at occlusions, where a reliable prediction is particularly useful. Moreover, our learned models are spatially variant and hence allow us to estimate non-rigid motion by construction. This, in turn, allows us to overcome the major limitation of recent rigidity-based approaches that seek to improve the estimation by incorporating additional stereo/SfM constraints. Experiments demonstrate the usefulness of our new approach. They not only show a consistent improvement of up to 27% for all major benchmarks (KITTI 2012, KITTI 2015, MPI Sintel) compared to a baseline without prediction, they also show top results for the MPI Sintel benchmark -- the one of the three benchmarks that contains the largest amount of non-rigid motion. Awarded a CVPR 2018 Robust Vision Challenge Runner-Up Award
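The QoMEX 2019 entry above reconstructs absolute quality scale values from forced-choice paired comparisons using Thurstone's model with the classical least squares method. The sketch below illustrates that reconstruction step under Thurstone Case V assumptions; it is hypothetical code, not taken from the cited paper, and the function name and the probability clamp are illustrative choices.

```python
# Illustrative sketch (not from the cited papers): classical least-squares
# reconstruction of Thurstone Case V scale values from a forced-choice
# paired-comparison win matrix.
from statistics import NormalDist

def thurstone_case_v(wins):
    """wins[i][j] = number of times item i was preferred over item j."""
    n = len(wins)
    inv = NormalDist().inv_cdf
    z = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            total = wins[i][j] + wins[j][i]
            # Clamp empirical probabilities away from 0/1 so inv_cdf stays finite.
            p = min(max(wins[i][j] / total, 1e-3), 1 - 1e-3)
            z[i][j] = inv(p)
    # Under Case V, the least-squares scale value of item i is the row mean of z.
    return [sum(row) / n for row in z]

# Three items; item 0 is consistently preferred in the comparisons.
wins = [[0, 9, 8],
        [1, 0, 6],
        [2, 4, 0]]
scores = thurstone_case_v(wins)
print(scores)  # item 0 gets the highest scale value
```

The row-mean step is the closed-form least-squares solution under Case V; the resulting scale values are determined only up to a common shift, so they sum to zero.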