Photogrammetry Self-Portrait

3D scanning, graphic design



Design Brief:

Create a self-portrait using photogrammetry technology.

Documentation:

Photogrammetry is a scientific process for taking accurate, three-dimensional measurements from two-dimensional photographs or data. It can be used to create 3D models from 2D photos, to make topographic maps, to measure distance (much as sonar does with sound), and much more. Photogrammetry is commonly used in archaeology, topography, architecture, geology, film post-production, gaming, and other fields.

LIDAR photogrammetry works by shooting a laser from the LIDAR source (an iPhone, for example) at an object. The sensor measures the time it takes for the laser to reflect back, and builds a 3D model from that data.
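Since the laser travels at the speed of light, the distance to a surface is just the round-trip time multiplied by the speed of light, divided by two. Here is a minimal Python sketch of that arithmetic; it illustrates the principle only, not how Polycam or the iPhone's sensor is actually implemented:

    SPEED_OF_LIGHT = 299_792_458  # meters per second

    def distance_from_round_trip(round_trip_seconds):
        # The laser travels out and back, so the one-way
        # distance is half the round trip.
        return SPEED_OF_LIGHT * round_trip_seconds / 2

    # A reflection that returns after 10 nanoseconds
    # comes from a surface about 1.5 meters away.
    print(distance_from_round_trip(10e-9))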

Some software uses a technique called Structure from Motion (SFM), which works with uncalibrated cameras in varied positions. SFM works by estimating each camera's internal calibration as well as its position relative to the subject. By processing many photos taken from slightly different angles, the software detects matching pixels and pixel patterns across the photos and creates points representing them in 3D space. Each match defines a narrow tube of possible positions running from the camera sensor toward the object; as more photos are taken, the tubes intersect and the commonly shared volume shrinks. The smaller that shared volume, the more precise the 3D model.

(http://culturalheritageimaging.org/Technologies/Photogrammetry/)
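Polycam hides all of this, but the core two-photo step can be sketched with OpenCV: detect and match pixel patterns, recover how the camera moved between shots, then triangulate each match into a 3D point. This is an illustration under assumptions, not Polycam's implementation; the file names and the camera matrix K are made-up placeholders.

    import cv2
    import numpy as np

    # Two photos of the same subject from slightly different angles
    # ("photo1.jpg" / "photo2.jpg" are placeholder file names).
    img1 = cv2.imread("photo1.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("photo2.jpg", cv2.IMREAD_GRAYSCALE)

    # Detect distinctive pixel patterns (features) in each photo and match them.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Assumed camera intrinsics (focal length and image center). A real
    # pipeline estimates these from the photos themselves, which is why
    # SFM works with uncalibrated cameras.
    K = np.array([[1000.0,    0.0, 640.0],
                  [   0.0, 1000.0, 360.0],
                  [   0.0,    0.0,   1.0]])

    # Recover how the second camera moved relative to the first.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate: intersect the viewing rays (the "tubes") from both
    # cameras to place each matched pattern as a point in 3D space.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    cloud = (pts4d[:3] / pts4d[3]).T  # N x 3 point cloud

A full pipeline repeats this across every pair of overlapping photos and refines all the points and cameras together, which is why more photos shrink the shared volumes and sharpen the model.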

My Questions:
  • What happens if we scan a space with photogrammetry, versus scanning an object?
  • What happens if we photograph a stationary object? What happens if we photograph a moving object?
  • What happens if we purposely feed the camera images with similar but slightly different pixel patterns?

My Methods:
For the LIDAR experiments, I borrowed a LIDAR phone and used the app Polycam, which generated 3D models.

The SFM experiments were done in the Polycam app as well, but instead of scanning, I took a series of still photos, which the app used to generate 3D models.

Experiment 1: Room Scan Using SFM
Some surface patterns were interpreted by the software as spatial geometry. I'm not sure why this happened, but I'm guessing it's because I didn't take more photos of that specific area, and the software got confused by the similar pixel patterns. The room was fairly detailed, with some distortion in the smaller objects, like the night stands. I took 37 photos to get this result.

Experiment 2: Room Scan Using LIDAR
The software interpreted my mirror as another room, in less detail. There's more detail in the shadows and small objects of the LIDAR-scanned room.

Experiment 3: Complex Pattern Object Scan with LIDAR
The software wasn't able to detect the sharp edges. The plant leaf came out as a slice of the room with a floating leaf, and the software didn't render much detail in the surrounding parts of the room or in my hand. It's interesting to see the world through the eyes of the software.

Experiment 4: Objects in Motion using SFM
I took 29 photos of my dogs playing in the hallway from all angles. Because of the dogs’ movement, the software only captured parts of them, and the parts were disjointed (and pretty monstrous).

Experiment 5: Objects in Motion using LIDAR
LIDAR captured my dogs playing in motion and ‘saw’ them in different positions in the room simultaneously. This makes me wonder why SFM captured them in one condensed area, while LIDAR captured them in multiple. It could be due to SFM averaging pixels across photos, LIDAR sending only one laser pulse to each location before moving on, or both combined.

Experiment 6: Self-Portrait using SFM
Because the software requires the use of the back camera, I had to do a mirror self-portrait. It ended up very distorted, probably because of my unsteady movements.

Experiment 7: Self-Portrait Using LIDAR
My LIDAR self-portrait came out abstract in many of the same ways as the SFM self-portrait. One thing the LIDAR capture has that the SFM one lacks is a sense of shape: the SFM portrait was like a painting on a curved canvas, whereas the LIDAR portrait generated an anthropomorphic form.

Experiment 8: Still Object Using LIDAR
This still object was particularly hard to capture. It may be that the object is too complex, or that there’s too much background noise. Though I tried to get all angles, the software captured the surrounding area as well as the object.

Experiment 9: Image of an Image Using LIDAR
It seems like Polycam is able to capture surface patterns and images very well, even if the shapes aren’t very accurate. For this experiment, I took a picture of a photo of myself, and it was able to capture most of the image, albeit in a flat form.

Experiment 10: Realism Using LIDAR
I tried hard to capture a few objects in realistic, accurate detail, but they never turned out very clear.

Final Image:
In order to achieve the final image, I experimented more with mirrors and the way that LIDAR photogrammetry created imagined worlds. I put multiple mirrors across from one another, and let the AI create its own world within my room.
Mark