HoloLens Terrain Generation Demo Part 11 – Rendering Planes

After another long delay, I’m back, and this time with the ability to render surface planes. The last month or so has been a busy one for me. I’ve been on the hunt for employment, which led me to work on a different project. So far, it hasn’t led to me getting a job, but it was an interesting project and I added some new tools to the tool chest. It’s not game or graphics related, so I won’t be talking about it here, but it did eat a couple of weeks that otherwise would have gone into this project.

Getting the Planes

Once I got back to working on this demo, I found myself pretty stuck. Carrying on from the last post, I had just created the following method to fill out the MeshData structure for each SurfaceMesh object.
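For reference, MeshData is the input structure the PlaneFinding library consumes. Sketched from the library’s header (so treat the exact declaration as approximate), it looks like this:

// From the PlaneFinding library's header, roughly:
struct MeshData {
    DirectX::XMFLOAT4X4 transform;  // mesh-to-world transform (can carry the scale)
    INT32 vertCount;
    INT32 indexCount;
    DirectX::XMFLOAT3* verts;
    DirectX::XMFLOAT3* normals;
    INT32* indices;
};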

// Return a MeshData object from the Raw data buffers.
MeshData SurfaceMesh::ConstructMeshData() {
    // we configured RealtimeSurfaceMeshRenderer to ensure that the data
    // we are receiving is in the correct format.
    // Vertex Positions: R16G16B16A16IntNormalized
    // Vertex Normals: R8G8B8A8IntNormalized
    // Indices: R16UInt (we'll convert from here to R32Int; HoloLens Spatial Mapping doesn't appear to support that format directly.)
 
    MeshData newMesh;
    newMesh.vertCount = m_surfaceMesh->VertexPositions->ElementCount;
    newMesh.verts = new XMFLOAT3[newMesh.vertCount];
    newMesh.normals = new XMFLOAT3[newMesh.vertCount];
    newMesh.indexCount = m_surfaceMesh->TriangleIndices->ElementCount;
    newMesh.indices = new INT32[newMesh.indexCount];
 
    XMSHORTN4* rawVertexData = (XMSHORTN4*)GetDataFromIBuffer(m_surfaceMesh->VertexPositions->Data);
    XMBYTEN4* rawNormalData = (XMBYTEN4*)GetDataFromIBuffer(m_surfaceMesh->VertexNormals->Data);
    UINT16* rawIndexData = (UINT16*)GetDataFromIBuffer(m_surfaceMesh->TriangleIndices->Data);
    float3 vertexScale = m_surfaceMesh->VertexPositionScale;
     
    for (int index = 0; index < newMesh.vertCount; ++index) {
        // read the current position as an XMSHORTN4.
        XMSHORTN4 currentPos = XMSHORTN4(rawVertexData[index]);
        XMFLOAT4 vals;
 
        // XMVECTOR knows how to convert XMSHORTN4 to actual floating point coordinates.
        XMVECTOR vec = XMLoadShortN4(&currentPos);
 
        // Store that into an XMFLOAT4 so we can read the values.
        XMStoreFloat4(&vals, vec);
 
        // Scale by the vertex scale.
        XMFLOAT4 scaledPos = XMFLOAT4(vals.x * vertexScale.x, vals.y * vertexScale.y, vals.z * vertexScale.z, vals.w);
 
        // I originally thought the vector needed to be scaled back down for rendering
        // (ie divided by w), but as written this copy changes nothing; it's gone in
        // the revised version further down.
        float4 downScaledPos = float4(scaledPos.x, scaledPos.y, scaledPos.z, scaledPos.w);
 
        newMesh.verts[index].x = downScaledPos.x;
        newMesh.verts[index].y = downScaledPos.y;
        newMesh.verts[index].z = downScaledPos.z;
 
        // now do the same for the normal.
        XMBYTEN4 currentNormal = XMBYTEN4(rawNormalData[index]);
        XMFLOAT4 norms;
        XMVECTOR norm = XMLoadByteN4(&currentNormal);
        XMStoreFloat4(&norms, norm);
        // No need to downscale. Does nothing.
        newMesh.normals[index].x = norms.x;
        newMesh.normals[index].y = norms.y;
        newMesh.normals[index].z = norms.z;
    }
 
    for (int index = 0; index < newMesh.indexCount; ++index) {
        newMesh.indices[index] = rawIndexData[index];
    }
 
    newMesh.transform = XMFloat4x4Identity;
 
    return newMesh;
}
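One helper worth calling out: GetDataFromIBuffer() comes from Microsoft’s Holographic spatial mapping sample and simply exposes the raw byte pointer behind an IBuffer via the IBufferByteAccess COM interface. Paraphrased from the sample, it looks roughly like this:

// Paraphrased from the Windows Holographic spatial mapping sample:
// query the IBuffer for direct access to its underlying bytes.
template <typename t = byte>
t* GetDataFromIBuffer(Windows::Storage::Streams::IBuffer^ container) {
    if (container == nullptr) return nullptr;

    Microsoft::WRL::ComPtr<IUnknown> pUnknown = reinterpret_cast<IUnknown*>(container);
    Microsoft::WRL::ComPtr<Windows::Storage::Streams::IBufferByteAccess> spByteAccess;
    if (FAILED(pUnknown.As(&spByteAccess))) return nullptr;

    byte* pRawData = nullptr;
    if (FAILED(spByteAccess->Buffer(&pRawData))) return nullptr;

    return reinterpret_cast<t*>(pRawData);
}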

My next step was to decide where to put the code that renders the surface planes. It makes some sense to keep it in the SurfaceMesh object, since the planes are generated from the surface mesh; but I’m going to want to use the MergePlanes() method to take planes from neighbouring/overlapping SurfaceMeshes and combine them. That means moving the rendering up to the RealtimeSurfaceMeshRenderer class. This class already handles setting up the pipeline for the SurfaceMeshes, but it doesn’t actually render them itself; it calls each SurfaceMesh’s Draw() method.
So do I make a new object for each plane after merging and keep a collection of those objects within the RealtimeSurfaceMeshRenderer? I didn’t really like the idea of adding another collection of objects to this class’s to-do list. Plus, I’m going to want to do application-specific operations on the final list of planes, things the RealtimeSurfaceMeshRenderer doesn’t need to know or care about.
With that in mind, I decided to make a new object, the SurfacePlaneRenderer, which takes the final list of planes from the RealtimeSurfaceMeshRenderer.
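As a rough sketch, the new class ended up with the following shape. The member names match the code excerpts below; the constructor and Render() signatures are my own assumption and just follow the pattern of the other renderers in the project.

// Sketch of SurfacePlaneRenderer; only the members used in the excerpts
// below are shown, the rest is the usual renderer boilerplate.
class SurfacePlaneRenderer {
public:
	SurfacePlaneRenderer(const std::shared_ptr<DX::DeviceResources>& deviceResources);

	// Replace the stored plane list whenever the surfaces change.
	void UpdatePlanes(std::vector<PlaneFinding::BoundedPlane> newList,
		Windows::Perception::Spatial::SpatialCoordinateSystem^ cs);

	// Called once per frame to rebuild the model-to-world transform.
	void Update(Windows::Perception::Spatial::SpatialCoordinateSystem^ baseCoordinateSystem);
	void Render();

private:
	void CreateVertexResources();

	std::vector<PlaneFinding::BoundedPlane>                m_planeList;
	std::mutex                                             m_planeListLock;
	Windows::Perception::Spatial::SpatialCoordinateSystem^ m_coordinateSystem;
	std::shared_ptr<DX::DeviceResources>                   m_deviceResources;
	Microsoft::WRL::ComPtr<ID3D11Buffer>                   m_vertexBuffer;
	Microsoft::WRL::ComPtr<ID3D11Buffer>                   m_constantBuffer;
	ModelConstantBuffer                                    m_constantBufferData; // see sketch below
};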
First, I needed a method to get the planes from each SurfaceMesh:

vector<BoundedPlane> SurfaceMesh::GetPlanes(SpatialCoordinateSystem^ baseCoordinateSystem) {
	if (m_isActive) {
		ClearLocalMesh();
		ConstructLocalMesh(baseCoordinateSystem);

		// one mesh, with a snap-to-gravity threshold of 5 degrees.
		return FindPlanes(1, &m_localMesh, 5.0f);
	}

	// else, return an empty vector.
	return vector<BoundedPlane>();
}

That method is called for each SurfaceMesh from the RealtimeSurfaceMeshRenderer’s own GetPlanes() method:

vector<PlaneFinding::BoundedPlane> RealtimeSurfaceMeshRenderer::GetPlanes(SpatialCoordinateSystem ^baseCoordinateSystem) {
	vector<PlaneFinding::BoundedPlane> allPlanes;

	for (auto& iter : m_meshCollection) {
		auto planes = iter.second.GetPlanes(baseCoordinateSystem);
		// add all planes found from this surface mesh to the list.
		allPlanes.insert(allPlanes.end(), planes.begin(), planes.end());
	}

	// attempt to merge the planes created by the collection into a smaller set of larger planes.
	// a minArea of 0 keeps every merged plane; the snap-to-gravity threshold is again 5 degrees.
	auto mergedPlanes = PlaneFinding::MergePlanes(static_cast<INT32>(allPlanes.size()), allPlanes.data(), 0.0f, 5.0f);

	return mergedPlanes;
}

This is the ‘final’ version of the method. Obviously, I didn’t try to merge the planes until after I knew I had everything else working.
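For context, the PlaneFinding types and functions these calls go through look roughly like this (sketched from the library’s header with the SAL annotations stripped; the exact declarations may differ slightly):

// From PlaneFinding.h, roughly:
struct BoundedPlane {
	Plane plane;                          // plane equation (normal + d)
	DirectX::BoundingOrientedBox bounds;  // tight OBB around the plane's vertices
	float area;                           // surface area in square meters
};

std::vector<BoundedPlane> FindPlanes(INT32 numMeshes, MeshData* meshes,
	float snapToGravityThreshold);

std::vector<BoundedPlane> MergePlanes(INT32 numPlanes, BoundedPlane* planes,
	float minArea, float snapToGravityThreshold);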

In our Main program, I settled on calling this method whenever our surfaces change. In other words, in the OnSurfacesChanged() method. This method already handled adding and updating surfaces. I have now added the following to the end of the method.

// The HolographicFrame has information that the app needs in order
// to update and render the current frame. The app begins each new
// frame by calling CreateNextFrame.
HolographicFrame^ holographicFrame = m_holographicSpace->CreateNextFrame();

// Get a prediction of where holographic cameras will be when this frame
// is presented.
HolographicFramePrediction^ prediction = holographicFrame->CurrentPrediction;
SpatialCoordinateSystem^ currentCoordinateSystem = m_referenceFrame->GetStationaryCoordinateSystemAtTimestamp(prediction->Timestamp);
// use it to find all surface planes and pass them to the planeRenderer.
// this is quite slow, so it is done asynchronously.
auto getPlanesTask = create_task([this, currentCoordinateSystem] {
	auto planes = m_meshRenderer->GetPlanes(currentCoordinateSystem);
	m_planeRenderer->UpdatePlanes(planes, currentCoordinateSystem);
});

As my comments say, getting the planes is quite slow. If you try to do it every frame (as I did at first), you’ll get about 1 fps, maybe less. Even performing it only when the surfaces update, without the asynchronous call, causes a pause of about a second. With the asynchronous call, everything runs fine and the planes pop in not long after the update happens.
You’ve probably also noticed that the GetPlanes() methods take in a SpatialCoordinateSystem. In fact, I had to jump through some hoops to create that coordinate system within the OnSurfacesChanged() method. If you look at my original implementation above (ConstructMeshData()), I had set the transform to the identity matrix; basically, it was doing nothing. The algorithm works fine that way, but without a real transform matrix we have no idea where in space the planes are. I needed the current frame’s coordinate system to create a transform matrix from the surface’s default coordinate system to our world.
As it happens, while I was mucking about and reading through the FindPlanes() function, I realized that it expects to get the scale of the surface vertices from the transform matrix. I had been following a random forum post that scaled the vertices directly, but I think folding the scaling into the transform and keeping the vertices in their normalized state makes more sense.
Our latest version of the ConstructLocalMesh() method now looks like this:

void SurfaceMesh::ConstructLocalMesh(SpatialCoordinateSystem^ baseCoordinateSystem) {
	// we configured RealtimeSurfaceMeshRenderer to ensure that the data
	// we are receiving is in the correct format.
	// Vertex Positions: R16G16B16A16IntNormalized
	// Vertex Normals: R8G8B8A8IntNormalized
	// Indices: R16UInt (we'll convert from here to R32Int; HoloLens Spatial Mapping doesn't appear to support that format directly.)

	m_localMesh.vertCount = m_surfaceMesh->VertexPositions->ElementCount;
	m_localMesh.verts = new XMFLOAT3[m_localMesh.vertCount];
	m_localMesh.normals = new XMFLOAT3[m_localMesh.vertCount];
	m_localMesh.indexCount = m_surfaceMesh->TriangleIndices->ElementCount;
	m_localMesh.indices = new INT32[m_localMesh.indexCount];

	XMSHORTN4* rawVertexData = (XMSHORTN4*)GetDataFromIBuffer(m_surfaceMesh->VertexPositions->Data);
	XMBYTEN4* rawNormalData = (XMBYTEN4*)GetDataFromIBuffer(m_surfaceMesh->VertexNormals->Data);
	UINT16* rawIndexData = (UINT16*)GetDataFromIBuffer(m_surfaceMesh->TriangleIndices->Data);
	float3 vertexScale = m_surfaceMesh->VertexPositionScale;
	
	for (int index = 0; index < m_localMesh.vertCount; ++index) {
		// read the current position as an XMSHORTN4.
		XMSHORTN4 currentPos = rawVertexData[index];
		XMFLOAT4 vals;

		// XMVECTOR knows how to convert XMSHORTN4 to actual floating point coordinates.
		XMVECTOR vec = XMLoadShortN4(&currentPos);

		// Store that into an XMFLOAT4 so we can read the values.
		XMStoreFloat4(&vals, vec);

		m_localMesh.verts[index] = XMFLOAT3(vals.x, vals.y, vals.z);

		// now do the same for the normal.
		XMBYTEN4 currentNormal = rawNormalData[index];
		XMFLOAT4 norms;
		XMVECTOR norm = XMLoadByteN4(&currentNormal);
		XMStoreFloat4(&norms, norm);

		m_localMesh.normals[index] = XMFLOAT3(norms.x, norms.y, norms.z);
	}

	for (int index = 0; index < m_localMesh.indexCount; ++index) {
		m_localMesh.indices[index] = rawIndexData[index];
	}

	// Get the transform to the current reference frame (ie model to world)
	auto tryTransform = m_surfaceMesh->CoordinateSystem->TryGetTransformTo(baseCoordinateSystem);
	
	XMMATRIX transform;
	if (tryTransform) {
		// If the transform can be acquired, this spatial mesh is valid right now and
		// we have the information we need to draw it this frame.
		transform = XMLoadFloat4x4(&tryTransform->Value);
	} else {
		// If the transform is not acquired, the spatial mesh is not valid right now
		// because its location cannot be correlated to the current space.
		// for now, I'm just setting the transform to the identity matrix. We really should never 
		// get here anyway because the same check happens on update and
		// the surface will be set inactive. We check in GetPlanes()
		// before calling this method whether the surface is active.
		transform = XMLoadFloat4x4(&XMFloat4x4Identity);
	}

	// Add a scaling factor to our transform to go from mesh to world.
	XMMATRIX scaleTransform = XMMatrixScalingFromVector(XMLoadFloat3(&vertexScale));

	// save the transform to the local MeshData object.
	XMStoreFloat4x4(&m_localMesh.transform, scaleTransform * transform);
}

The for loop is simpler now, and the scaling and transform are handled separately.
From here, I think we can move on to the actual rendering of the planes.

Rendering Planes

I won’t go over the simple stuff like setting up the shaders and the pipeline; that has been pretty well covered by now. The important pieces of the SurfacePlaneRenderer are the Update(), UpdatePlanes(), and CreateVertexResources() methods.

The SurfacePlaneRenderer object is instantiated at startup and initialized with an empty list of BoundedPlanes. As mentioned above, the list of planes is updated whenever the surfaces update. At that time, UpdatePlanes() is called. The current frame’s coordinate system is passed in so that the SurfacePlaneRenderer knows the initial positioning of the planes. Any existing planes are destroyed and the new list is saved.

void SurfacePlaneRenderer::UpdatePlanes(vector<BoundedPlane> newList, Windows::Perception::Spatial::SpatialCoordinateSystem^ cs) {
	// clear the old list and copy the new list.
	m_planeList.clear();

	m_planeList = newList;
	
	// Update the coordinate system
	m_coordinateSystem = cs;

	CreateVertexResources();
}

The CreateVertexResources() method can now generate a vertex buffer for the new planes. This is done by defining a normalized quad within the unit space of the oriented bounding box containing each plane. We then use the bounding box’s Center, Extents, and Orientation members to create translation, scaling, and rotation matrices respectively, which transform our quad into the coordinate system we originally used to find the planes. This oriented quad is what gets saved in the vertex buffer.

void SurfacePlaneRenderer::CreateVertexResources() {
	if (m_planeList.size() < 1) {
		// No planes to draw.
		return;
	}

	// resources are created off-thread, so that they don't affect rendering latency.
	auto task = concurrency::create_task([this]() {
		// lock the vertex buffer down until we are done rebuilding it.
		std::lock_guard<std::mutex> guard(m_planeListLock);

		// reset the existing vertex buffer.
		m_vertexBuffer.Reset();

		// Build a vertex buffer containing 6 vertices (2 triangles) for each plane, representing the quad of the bounded plane.
		int numPlanes = m_planeList.size();
		int numVerts = numPlanes * 6;
		std::vector<XMFLOAT3> vertexList;

		// define the unit space vertices of the plane we are rendering.
		static const XMVECTOR verts[6] = {
			{ -1.0f, -1.0f, 0.0f, 0.0f },
			{ -1.0f,  1.0f, 0.0f, 0.0f },
			{  1.0f, -1.0f, 0.0f, 0.0f },
			{ -1.0f,  1.0f, 0.0f, 0.0f },
			{  1.0f,  1.0f, 0.0f, 0.0f },
			{  1.0f, -1.0f, 0.0f, 0.0f }
		};

		for (const auto& p : m_planeList) {
			// for each plane in our list, build a quad and add the vertices to our verts list.
			// Our plane is defined as being centered in the bounding box,
			// with the z axis always being the thinnest axis.
			auto center = p.bounds.Center;
			auto extents = p.bounds.Extents;

			// transformation matrices to go from unit space to object space.
			XMMATRIX world = XMMatrixRotationQuaternion(XMLoadFloat4(&p.bounds.Orientation));
			XMMATRIX scale = XMMatrixScaling(extents.x, extents.y, extents.z);
			XMMATRIX translate = XMMatrixTranslation(center.x, center.y, center.z);

			// scale, then rotate, then translate; this matrix is the same for
			// every vertex of the quad, so build it once outside the loop.
			XMMATRIX transform = XMMatrixMultiply(XMMatrixMultiply(scale, world), translate);

			for (auto i = 0; i < 6; ++i) {
				XMVECTOR v = XMVector3Transform(verts[i], transform);
				XMFLOAT3 vec;
				XMStoreFloat3(&vec, v);
				vertexList.push_back(vec);
			}
		}

		// create the vertex buffer.
		CD3D11_BUFFER_DESC descBuffer(static_cast<UINT>(sizeof(XMFLOAT3) * vertexList.size()), D3D11_BIND_VERTEX_BUFFER);
		D3D11_SUBRESOURCE_DATA dataBuffer = {};
		dataBuffer.pSysMem = vertexList.data();
		DX::ThrowIfFailed(m_deviceResources->GetD3DDevice()->CreateBuffer(&descBuffer, &dataBuffer, &m_vertexBuffer));
	});
}

Each frame, we will need to call Update() to generate a transformation matrix from the initial coordinate system to the new frame’s coordinate system. This transform is stored in the modelToWorld matrix of the SurfacePlaneRenderer’s constant buffer for use in the Vertex Shader.

void SurfacePlaneRenderer::Update(SpatialCoordinateSystem^ baseCoordinateSystem) {
	if (m_planeList.size() < 1) {
		return;
	}

	// Transform to the correct coordinate system from our anchor's coordinate system.
	auto tryTransform = m_coordinateSystem->TryGetTransformTo(baseCoordinateSystem);
	XMMATRIX transform;
	if (tryTransform) {
		// If the transform can be acquired, this spatial mesh is valid right now and
		// we have the information we need to draw it this frame.
		transform = XMLoadFloat4x4(&tryTransform->Value);
	}
	else {
		// just use the identity matrix if we can't load the transform for some reason.
		transform = XMMatrixIdentity();
	}

	XMStoreFloat4x4(&m_constantBufferData.modelToWorld, XMMatrixTranspose(transform));

	// Use the D3D device context to update Direct3D device-based resources.
	const auto context = m_deviceResources->GetD3DDeviceContext();

	// Update the model transform buffer for the hologram.
	context->UpdateSubresource(m_constantBuffer.Get(), 0, nullptr, &m_constantBufferData, 0, 0);
}
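The constant buffer itself is nothing special. A minimal sketch, assuming a single model matrix like the standard Windows Holographic template uses:

// Assumed constant buffer layout: just the model-to-world matrix the
// vertex shader applies to the pre-oriented quads each frame.
struct ModelConstantBuffer {
	DirectX::XMFLOAT4X4 modelToWorld;
};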

The rest of the render code is pretty much the same as any other class. Let’s get to a picture.

Floor and Wall planes.


I’m sorry if the image isn’t clear, but what you’re looking at is a bed in wireframe and the surface planes of the walls and floor rendered in blue. You’ll notice that the bed has no plane even though it should be flat enough. Also, not visible in this photo, the bench at the end of the bed has no plane either.
FindPlanes.cpp contains some constants used in the plane-finding algorithm. One of them is cMinimumPlaneSize. I’m not sure about the math defining the number, but it should be the area of the plane in square meters. The initial value was 0.125f. I played around with this and found that 0.001f made most things I cared about show up as planes. Microsoft’s own comments indicate they’d like to expose this variable and others to the end user. Once I get into creating a GUI for this demo, I’ll likely expose at least this one. For now, however, I’ve just changed the constant.
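Concretely, the change is a one-liner in FindPlanes.cpp (the constant’s name comes from the library; the comment is mine):

// Minimum area, in square meters, a candidate plane must cover before it is
// reported. The library's default was 0.125f.
const float cMinimumPlaneSize = 0.001f;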
In the following image, you can see there are now planes for the pillows, and there appear to be three planes for the surface of the bed. You can also see vertical planes down the side of the bed.

Planes found with a minimum plane size of 0.001


At this point, I activated the MergePlanes() method to get the next image. As you can see, the two planes for the pillows are now one. The three planes for the bed are now also one.

After merging planes.


You can probably tell already that there is a problem. In fact, you could probably tell from the last image. The plane for the bed is askew and a little too big. I’m pretty sure this is a problem with the algorithm. I might try tweaking the constants to see if any of them help, but I can tell you that the same issue happens with this bed if you build this app in Unity, so it’s not my code. It doesn’t seem to happen with many other objects, so it might just be the orientation of the bed or how the mesh is specified around it. It’s close enough for me.

Limiting the Planes to What We Care About

Now that we can find, merge, and render all the planes, it turns out I don’t actually care about most of them. Vertical planes, angled planes, and upside-down planes like the ceiling aren’t useful surfaces to render a terrain on. All I really want are Tables and Floors. I haven’t yet come up with a method for differentiating between Tables and Floors, but I have added a simple method to identify and filter for Floors.

For now, I define a Floor (ie, any plane we can build our terrain on) as any plane whose normal points straight up. The problem is figuring out which planes actually meet this requirement. The normals returned by the plane-finding algorithm are still in the original surface space, so you would need to transform each plane’s normal into the current coordinate system and then test whether it points straight up before adding it to your final list of planes. Instead, I decided the plane-finding algorithm could do the work for us.
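For comparison, the manual check I decided against would look something like this. This is a sketch: IsFloor() is a hypothetical helper, toWorld would be the surface-to-world transform built in ConstructLocalMesh(), and the 0.99f tolerance is an arbitrary choice:

// Hypothetical manual filter: transform the plane's normal into world space
// and keep the plane only if the normal points (nearly) straight up.
bool IsFloor(const PlaneFinding::BoundedPlane& p, DirectX::FXMMATRIX toWorld) {
	using namespace DirectX;

	// normals transform as directions, so use XMVector3TransformNormal.
	XMVECTOR n = XMVector3Normalize(
		XMVector3TransformNormal(XMLoadFloat3(&p.plane.normal), toWorld));
	const XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);

	// dot(n, up) > 0.99 allows roughly 8 degrees of tilt.
	return XMVectorGetX(XMVector3Dot(n, up)) > 0.99f;
}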
If you check the SnapToGravity() method in Util.cpp, you’ll see that it is already testing each plane against the correct up vector to determine which planes should be snapped to horizontal or vertical, based on the snapToGravityThreshold variable (I explained this last post). All I need to do is save the information this method is already calculating.
To that end, I created a small enumerated type called SurfaceType and added a variable to the Plane structure.

enum SurfaceType { WALL, CEILING, FLOOR, UNKNOWN };

struct Plane {
	DirectX::XMFLOAT3 normal;
	FLOAT d;
	SurfaceType surface = UNKNOWN;

	Plane() {}
	Plane(const DirectX::XMFLOAT3& normal, FLOAT d) : normal(normal), d(d) {}
	Plane(const DirectX::XMVECTOR& vec) { StoreVector(vec); }

	// Writes normal and d in one shot; this relies on them being the first
	// 16 bytes of the struct, so surface must stay declared after them.
	void StoreVector(const DirectX::XMVECTOR& vec) {
		XMStoreFloat4(reinterpret_cast<DirectX::XMFLOAT4*>(this), vec);
	}

	const DirectX::XMVECTOR AsVector() const {
		return XMLoadFloat4(reinterpret_cast<const DirectX::XMFLOAT4*>(this));
	}
};

Now, in the SnapToGravity() method, we can set the surface type of the plane.

bool SnapToGravity(_Inout_ Plane* plane, _Inout_opt_ XMFLOAT3* tangent, _In_ const XMFLOAT3& center, float snapToGravityThreshold, _In_ const XMVECTOR& vUp)
    {
        XMVECTOR vNormal = XMLoadFloat3(&plane->normal);
        XMVECTOR vCenter = XMLoadFloat3(&center);

        float dotGravity = XMVectorGetX(XMVector3Dot(vNormal, vUp));
        float dotProductThreshold = cosf(XMConvertToRadians(snapToGravityThreshold));
        bool isGravityAligned = false;

        // check for nearly horizontal planes
        if (dotGravity > dotProductThreshold)
        {
            vNormal = vUp;
            plane->surface = FLOOR;
        }
        else if (dotGravity < -dotProductThreshold)
        {
            vNormal = -vUp;
            plane->surface = CEILING;
        }
        else
        {
            // check for nearly vertical planes
            XMVECTOR vNormalProjectedPerpendicularToGravity = vNormal - (vUp * dotGravity);
            float dotPerpendicularToGravity = XMVectorGetX(XMVector3Length(vNormalProjectedPerpendicularToGravity));
            if (fabs(dotPerpendicularToGravity) > dotProductThreshold)
            {
                vNormal = XMVector3Normalize(vNormalProjectedPerpendicularToGravity);
                isGravityAligned = true;
                plane->surface = WALL;
            }
            else
            {
                // plane should not be snapped, so exit without modifying plane/tangent
                plane->surface = UNKNOWN;
                return false;
            }
        }

        // update the plane equation
        plane->StoreVector(XMPlaneFromPointNormal(vCenter, vNormal));

        // update the tangent vector
        if (tangent != nullptr)
        {
            XMVECTOR vTangent = (isGravityAligned)
                ? XMVector3Cross(vNormal, vUp)
                : XMVector3Cross(XMVector3Cross(vNormal, XMLoadFloat3(tangent)), vNormal);

            XMStoreFloat3(tangent, XMVector3Normalize(vTangent));
        }

        return isGravityAligned;
    }

Our UpdatePlanes() method can now simply look for Floors and discard anything else.

void SurfacePlaneRenderer::UpdatePlanes(vector<BoundedPlane> newList, Windows::Perception::Spatial::SpatialCoordinateSystem^ cs) {
	// clear the old list and copy the new list.
	m_planeList.clear();

	for (const auto& p : newList) {
		// for each plane in our list, check if it
		// is a PlaneFinding::FLOOR. This means it has a
		// normal pointing directly up. These are the only
		// planes we want.
		if (p.plane.surface == PlaneFinding::FLOOR) {
			m_planeList.push_back(p);
		}
	}

	// Update the coordinate system
	m_coordinateSystem = cs;

	CreateVertexResources();
}

Finally, I also needed to edit the MergePlanes() function, found in MergePlanes.cpp, since it generates a new plane and I needed to ensure that plane also gets a surface type set.

//...
if (totalArea > minArea) {
	averageCenter /= totalArea;
	averageNormal = XMVector3Normalize(averageNormal);
	XMVECTOR averagePlane = XMPlaneFromPointNormal(averageCenter, averageNormal);
	bool isGravityAligned = false;
	SurfaceType st = UNKNOWN;

	if (snapToGravityThreshold != 0.0f) {
		Plane plane = Plane(averagePlane);
		XMFLOAT3 center;

		XMStoreFloat3(&center, averageCenter);

		isGravityAligned = SnapToGravity(&plane, nullptr, center, snapToGravityThreshold, cUpDirection);

		averagePlane = plane.AsVector();
		st = plane.surface;
	}

	Plane plane = Plane(averagePlane);
	plane.surface = st;
	BoundingOrientedBox bounds = GetTightBounds(boundVerts, averagePlane, isGravityAligned);

	planes.push_back({ plane, bounds, totalArea }); // add all our aggregated information for this clique to the surface observer plane, then return it
}
//...

And with those changes, we are now at the point where we can find and render only horizontal upward-facing planes. The next image is kind of terrible, but it shows planes for only the floor, the bed, and the bench (just visible at the very bottom).

Just floors

From here, my next goal will be to make these planes selectable. I’d like the plane you’re currently looking at to change colour, so you always know which one is targeted. Then I want to be able to perform an air-tap gesture (a click) to select that plane and lock the terrain to it. Hopefully that doesn’t take me as long as the last couple of steps did.

For the latest version of the code, see GitHub.
Traagen