When working in 3D graphics, one must load raw data, process it in various ways, visualize the results to aid understanding, and then save the output in a variety of formats. We release the Mesh Library to facilitate all of these operations. The library is built on top of OpenGL and CGAL, with an easy-to-use Python interface. Beyond basic usage such as data I/O and interactive visualization, it also supports more complex functionality such as texture rendering, visibility computation, and geometry arithmetic. We hope the release of this tool smooths the entry into the 3D world for interested newcomers.
A brain-computer interface (BCI) that assists and interprets the thoughts of patients suffering from diseases such as amyotrophic lateral sclerosis. This monitoring tool is especially suited for research and for reaching patients living in remote locations.
The Grassmann Averages PCA is a method for extracting the principal components from a set of vectors, with the following attractive properties: 1) it has linear complexity with respect to both the dimension of the vectors and the size of the data, which makes the method highly scalable; 2) it is more robust to outliers than PCA, in the sense that it minimizes an L1 norm instead of the L2 norm of standard PCA.
It comes in two variants: 1) the standard computation, which coincides with PCA for normally distributed data, referred to as the GA; 2) a trimmed variant, which is more robust to outliers, referred to as the TGA.
We provide implementations for the Grassmann Average, the Trimmed Grassmann Average, and the Grassmann Median. The simplest is the Matlab implementation used in the CVPR 2014 paper, but we also provide a faster C++ implementation, which can be used either directly from C++ or through a Matlab wrapper interface.
The repository contains the following:
a C++ multi-threaded implementation of the GA and TGA
a C++ multi-threaded implementation of the EM-PCA (for comparisons)
binaries that compute the GA, TGA, and EM-PCA on a set of images (frames of a video)
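To make the GA computation concrete, here is a minimal NumPy sketch of the fixed-point iteration described above (sign-align each zero-mean observation with the current estimate, then renormalize the average). This is an illustrative reimplementation, not the repository's C++ or Matlab API; function and parameter names are our own.

```python
import numpy as np

def grassmann_average(X, n_iter=20, seed=0):
    """Leading Grassmann Average component of zero-mean data X (n x d).

    Each row of X spans a one-dimensional subspace; the average subspace
    is found by iterating:
        q <- normalize( mean_i( sign(<x_i, q>) * x_i ) )
    which has linear cost in both n and d per iteration.
    """
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(X.shape[1])
    q /= np.linalg.norm(q)
    for _ in range(n_iter):
        signs = np.sign(X @ q)
        signs[signs == 0] = 1.0          # break ties consistently
        q_new = (signs[:, None] * X).mean(axis=0)
        norm = np.linalg.norm(q_new)
        if norm == 0:                    # degenerate data; keep current q
            break
        q = q_new / norm
    return q
```

Because each sweep is just a matrix-vector product and a mean, the iteration scales to datasets where eigendecomposition-based PCA is impractical.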
Annual Meeting of the Cognitive Science Society, July 2020 (conference)
To stay focused on their chosen tasks, people have to inhibit distractions. The underlying attention control skills can improve through reinforcement learning, which can be accelerated by giving feedback. We applied the theory of metacognitive reinforcement learning to develop a training app that gives people optimal feedback on their attention control while they are working or studying. In an eight-day field experiment with 99 participants, we investigated the effect of this training on people's productivity, sustained attention, and self-control. Compared to a control condition without feedback, participants receiving optimal feedback became increasingly better at focusing (f = .08, p < .01) and achieved higher productivity scores (f = .19, p < .01) during the training. In addition, they evaluated their productivity more accurately (r = .12, p < .01). However, due to asymmetric attrition, these findings need to be taken with a grain of salt.
66th Spring Conference of the German Ergonomics Society, 2020 (conference)
Our digital age thrives on information and thereby puts our limited processing capacity to the test every day. In knowledge work in particular, constant distractions cause substantial losses in performance. Our intelligent application ACTrain addresses exactly this problem and turns computer work into a training hall for the mind. Feedback based on machine learning methods vividly demonstrates the value of not letting oneself be distracted from a self-chosen task. This metacognitive insight is meant to motivate perseverance and to strengthen the underlying skill of attention control. In ongoing field experiments, we investigate whether training with this optimal feedback can improve attention and self-control skills compared to a control group without feedback.
Monthly Notices of the Royal Astronomical Society, 477, June 2018 (article)
The common envelope binary interaction remains one of the least understood phases in the evolution of compact binaries, including those that result in Type Ia supernovae and in mergers that emit detectable gravitational waves. In this work, we continue the detailed and systematic analysis of 3D hydrodynamic simulations of the common envelope interaction aimed at understanding the reliability of the results. Our first set of simulations replicates the five simulations of Passy et al. (a 0.88 M☉, 90 R☉ red giant branch (RGB) primary with companions in the range 0.1-0.9 M☉) using a new adaptive mesh refinement gravity solver implemented in our modified version of the hydrodynamic code ENZO. Despite the smaller final separations obtained, these higher-resolution simulations do not alter the nature of the conclusions drawn. We also carry out five identical simulations but with a 2.0 M☉ RGB primary with the same core mass as in the Passy et al. simulations, isolating the effect of the envelope binding energy. With a more bound envelope, all the companions in-spiral faster and deeper, though relatively less gas is unbound. Even at the highest resolution, the final separation attained in simulations with a heavier primary is similar to the size of the smoothed potential, even if we account for the loss of some angular momentum by the simulation. As a result, we suggest that an ∼2.0 M☉ RGB primary may end in a merger with companions as massive as 0.6 M☉, something that would not be deduced using analytical arguments based on energy conservation.
Iaconi, R., Reichardt, T., Staff, J., De Marco, O., Passy, J., Price, D., Wurster, J., Herwig, F.
Monthly Notices of the Royal Astronomical Society, 464, pages: 4028-4044, 2017 (article)
We present hydrodynamic simulations of the common envelope binary interaction between a giant star and a compact companion carried out with the adaptive mesh refinement code enzo and the smoothed particle hydrodynamics code phantom. These simulations mimic the parameters of one of the simulations by Passy et al. but assess the impact of a larger, more realistic initial orbital separation on the simulation outcome. We conclude that for both codes the post-common envelope separation is somewhat larger and the amount of unbound mass slightly greater when the initial separation is wide enough that the giant does not yet overflow, or just overflows, its Roche lobe. phantom has been adapted to the common envelope problem here for the first time, and a full comparison with enzo is presented, including an investigation of convergence as well as energy and angular momentum conservation. We also set our simulations in the context of past simulations. This comparison reveals that it is the expansion of the giant before rapid in-spiral, and not the spinning up of the star, that causes a larger final separation. We also suggest that the large range in unbound mass across different simulations is difficult to explain and may be related to simulations that are not fully converged.
The common envelope (CE) binary interaction occurs when a star transfers mass onto a companion that cannot fully accrete it. The interaction can lead to a merger of the two objects or to a close binary. The CE interaction is the gateway to all evolved compact binaries, all stellar mergers, and likely many of the stellar transients witnessed to date. CE simulations are needed to understand this interaction and to interpret stars and binaries thought to be the byproduct of this stage. At this time, simulations are unable to reproduce the few observational data available, and several ideas have been put forward to address their shortcomings. The need for more definitive simulation validation is pressing and is already being fulfilled by observations from time-domain surveys. In this article, we present an initial method and its implementation for post-processing grid-based CE simulations to produce the light curve, so as to compare simulations with upcoming observations. Here we implement a zeroth-order method to calculate the light emitted from CE hydrodynamic simulations carried out with the 3D hydrodynamic code Enzo used in unigrid mode. The code implements an approach for the computation of luminosity in both optically thick and optically thin regimes and is tested using the first 135 days of the CE simulation of Passy et al., where a 0.8 M☉ red giant branch star interacts with a 0.6 M☉ companion. This code is used to highlight two large obstacles that need to be overcome before realistic light curves can be calculated. We explain the nature of these problems and the attempted solutions and approximations in full detail to enable the next step to be identified and implemented. We also discuss our simulation in relation to recent data of transients identified as CE interactions.
IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI), December 2015 (article)
In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie, a task beyond the reach of any current method. Source code is available online.
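The robust-averaging idea above can be sketched in a few lines of NumPy: keep the GA fixed-point iteration, but replace the per-coordinate mean with a trimmed mean that discards the most extreme values (e.g. pixel outliers) before averaging. This is an illustrative sketch under our own naming, not the released source code.

```python
import numpy as np

def trimmed_mean(A, trim=0.2):
    """Per-coordinate trimmed mean: sort each column and drop the
    top and bottom `trim` fraction of entries before averaging."""
    k = int(trim * A.shape[0])
    S = np.sort(A, axis=0)
    if k > 0:
        S = S[k:-k]
    return S.mean(axis=0)

def trimmed_grassmann_average(X, trim=0.2, n_iter=30, seed=0):
    """Leading robust component of zero-mean data X (n x d).

    Sign-align each observation with the current estimate, then take
    a trimmed mean instead of the plain mean, so that a few grossly
    corrupted coordinates cannot drag the estimate away.
    """
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(X.shape[1])
    q /= np.linalg.norm(q)
    for _ in range(n_iter):
        signs = np.sign(X @ q)
        signs[signs == 0] = 1.0
        q_new = trimmed_mean(signs[:, None] * X, trim)
        norm = np.linalg.norm(q_new)
        if norm == 0:
            break
        q = q_new / norm
    return q
```

Since sorting each coordinate costs O(n log n) and everything else is linear, the per-iteration cost stays near-linear in both the number of observations and their dimension, which is what makes the trimmed variant practical at video scale.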
Our goal is to understand the principles of Perception, Action, and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.