Paper presented at CVPR 2023
We extend the capabilities of image-based 3D GANs to video editing by introducing a novel GAN inversion technique specifically tailored to 3D GANs. Besides traditional semantic face edits (e.g., age and expression), we are the first to demonstrate edits that show novel views of the head, enabled by the inherent properties of 3D GANs and by our optical flow-guided compositing technique, which combines the edited head with the background video.
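A minimal sketch of the flow-guided compositing step, using OpenCV's Farneback optical flow as a stand-in for the estimator used in the paper; the frame and mask names are hypothetical:

import cv2
import numpy as np

def propagate_mask(prev_frame, next_frame, prev_mask):
    """Warp the head mask from frame t to frame t+1 along dense optical flow."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    # Flow from next to prev, so the lookup below is an exact backward warp.
    flow = cv2.calcOpticalFlowFarneback(next_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_mask.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_mask.astype(np.float32), map_x, map_y,
                     cv2.INTER_LINEAR)

def composite_frame(edited_head, background, mask):
    """Alpha-blend the GAN-edited head over the original background frame."""
    # Feather the mask edge so the seam between head and background is soft.
    alpha = cv2.GaussianBlur(mask.astype(np.float32), (21, 21), 0)[..., None]
    return (alpha * edited_head + (1.0 - alpha) * background).astype(np.uint8)

Propagating the head mask along the flow of the original footage keeps the generated head aligned with the moving background from frame to frame.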
Paper presented at CVPR 2022
We demonstrate the first viable framework for generating high-quality synthesized full-body human images at state-of-the-art resolution. The full-body human domain is very challenging due to the large variance in pose, clothing, and identity. To capture the rich details of the domain, we define a canvas network that generates a human body and one or more specialized inset generators that enhance specific image regions.
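A minimal sketch of the canvas-plus-inset composition, with the generators and the region detector stubbed out as placeholders; the actual method uses GANs for both roles and additionally optimizes their latents jointly so the pasted region blends seamlessly:

import numpy as np

def canvas_generator(latent):          # stand-in for the full-body GAN
    rng = np.random.default_rng(abs(hash(latent.tobytes())) % 2**32)
    return rng.random((1024, 512, 3))  # full-body "canvas" image

def inset_generator(latent):           # stand-in for a specialized face GAN
    rng = np.random.default_rng(abs(hash(latent.tobytes())) % 2**32 + 1)
    return rng.random((256, 256, 3))   # high-detail region, e.g. the face

def detect_region(canvas):             # stand-in for a face/region detector
    return 64, 128, 256, 256           # (top, left, height, width)

def compose(z_canvas, z_inset):
    """Generate a body on the canvas, then overwrite one region with the
    output of the specialized inset generator, resized to that region."""
    canvas = canvas_generator(z_canvas)
    top, left, h, w = detect_region(canvas)
    inset = inset_generator(z_inset)
    # Nearest-neighbor resize of the inset to the target region.
    ys = np.arange(h) * inset.shape[0] // h
    xs = np.arange(w) * inset.shape[1] // w
    canvas[top:top+h, left:left+w] = inset[ys][:, xs]
    return canvas

image = compose(np.random.randn(512), np.random.randn(512))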
Technical paper presented at SIGGRAPH 2019
We generate large-scale textures by building on recent advances in the field of Generative Adversarial Networks. Our technique combines the outputs of GANs trained at a smaller resolution to produce arbitrarily large-scale, plausible texture maps with virtually no boundary artifacts. We developed an interface for artistic control that allows users to create textures based on guidance images and to modify and paint on the GAN textures interactively.
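A simplified image-space sketch of the tiling idea: overlapping small outputs are cross-faded into one large texture. (The actual technique merges latent codes inside the generator rather than blending pixels, which is what suppresses boundary artifacts; the stub generator here just emits noise.)

import numpy as np

PATCH, OVERLAP = 128, 32
STEP = PATCH - OVERLAP

def generate_patch(rng):               # stand-in for a texture GAN
    return rng.random((PATCH, PATCH, 3))

def linear_window(size, overlap):
    """1D cross-fade ramp: 0 -> 1 over the overlap, flat in the middle."""
    w = np.ones(size)
    ramp = np.linspace(0.0, 1.0, overlap)
    w[:overlap], w[-overlap:] = ramp, ramp[::-1]
    return w

def tile_texture(rows, cols, seed=0):
    rng = np.random.default_rng(seed)
    h, w = STEP * rows + OVERLAP, STEP * cols + OVERLAP
    out = np.zeros((h, w, 3))
    weight = np.zeros((h, w, 1))
    win = np.outer(linear_window(PATCH, OVERLAP),
                   linear_window(PATCH, OVERLAP))[..., None]
    for i in range(rows):
        for j in range(cols):
            y, x = i * STEP, j * STEP
            out[y:y+PATCH, x:x+PATCH] += win * generate_patch(rng)
            weight[y:y+PATCH, x:x+PATCH] += win
    return out / np.maximum(weight, 1e-8)   # normalize overlapping weights

texture = tile_texture(8, 8)           # an 800x800 texture from 128x128 patches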
Traineeship at Surgical Planning Lab, Brigham and Women's Hospital (Harvard Medical School)
During my traineeship at Harvard Medical School, I worked on GPU-accelerated, browser-based visualization software. I designed and developed a prototype for a browser-based volume rendering solution and user interface components for web-based imaging, and I developed algorithms for browser-based GPU processing using grid-based PDE solvers for 2D and 3D image segmentation. Some of this code was integrated into the open-source visualization and medical imaging software 3D Slicer.
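A minimal NumPy sketch of a grid-based PDE solver for seeded 2D segmentation: a harmonic potential is relaxed by Jacobi iterations while foreground and background seeds stay fixed, and thresholding the converged field yields the mask. The browser version ran analogous per-pixel updates on the GPU, and practical solvers also weight the updates by image content; the seed regions below are hypothetical.

import numpy as np

def harmonic_segmentation(fg_seeds, bg_seeds, shape, iters=500):
    u = np.full(shape, 0.5)
    u[fg_seeds], u[bg_seeds] = 1.0, 0.0
    for _ in range(iters):
        # Jacobi update: each pixel moves toward the mean of its 4 neighbors
        # (np.roll gives periodic boundaries, which keeps the sketch short).
        u_new = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                        np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u_new[fg_seeds], u_new[bg_seeds] = 1.0, 0.0   # re-impose the seeds
        u = u_new
    return u > 0.5

fg = (slice(28, 36), slice(28, 36))    # hypothetical foreground seed block
bg = (slice(0, 4), slice(None))        # hypothetical background seed strip
mask = harmonic_segmentation(fg, bg, shape=(64, 64))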
Master Thesis project at TU Vienna in collaboration with KAUST
In visualization applications with many objects, rendering the scene can be prohibitively expensive, compromising the interactive user experience. Our deferred visualization pipeline divides the visualization computation between a server and a thin client: the scene is preprocessed on the server and transferred to the client as an intermediate representation consisting of metadata and pre-rendered visualizations of the scene's objects, enabling client-side interactivity even on large datasets.
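A minimal sketch, with hypothetical names, of the kind of per-object intermediate representation such a pipeline can ship to the client: given pre-rendered sprites plus metadata, the client can re-composite the frame and resolve picking without touching the raw dataset.

from dataclasses import dataclass
import numpy as np

@dataclass
class ObjectLayer:
    object_id: int
    depth: float                 # view-space depth used for ordering
    bbox: tuple                  # (top, left) placement in the framebuffer
    sprite: np.ndarray           # pre-rendered RGBA patch from the server

def composite(layers, height, width):
    """Client-side pass: paint layers back-to-front; no geometry needed."""
    frame = np.zeros((height, width, 3))
    for layer in sorted(layers, key=lambda l: -l.depth):
        t, l = layer.bbox
        h, w = layer.sprite.shape[:2]
        rgb, a = layer.sprite[..., :3], layer.sprite[..., 3:4]
        frame[t:t+h, l:l+w] = a * rgb + (1 - a) * frame[t:t+h, l:l+w]
    return frame

def pick(layers, y, x):
    """Client-side picking via metadata: front-most opaque layer under cursor."""
    hits = [l for l in sorted(layers, key=lambda l: l.depth)
            if l.bbox[0] <= y < l.bbox[0] + l.sprite.shape[0]
            and l.bbox[1] <= x < l.bbox[1] + l.sprite.shape[1]
            and l.sprite[y - l.bbox[0], x - l.bbox[1], 3] > 0.5]
    return hits[0].object_id if hits else None

Because interaction such as reordering, hiding, or selecting objects only touches the lightweight layers, the expensive rendering work stays on the server.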