Body Tracking with ARKit works very well, as does the Unity integration in AR Foundation. However, both the rig Apple provides and the version Unity includes in their sample project have some complexities that made working with them challenging.
While I had initially thought to replace the sample controlled robot model, that proved difficult. My solution was to keep the controlled robot, pair its movements to a second avatar, and, if desired, overlay the positions and hide the controlled robot. The ARKit rig is unlike the rigs in common usage (seven spine bones, four neck bones, different orientations, etc.). The Unity version has no avatar associated with it, so you can't access some of the built-in HumanBones and retargeting functionality normally available. Attempts to rig my own version failed for various reasons.
Here I'm connecting the armature from the recently updated 3rd Person Unity Starter Assets to the AR Foundation samples' ControlledRobot asset.
The primary requirement is to map the avatar bones to the controlled robot bones; the code then synchronizes the respective joint rotations. I'm sure there are more elegant solutions, but I thought I would post my version on GitHub in case it helps anyone else attempting this. Feel free to make suggestions or point towards better solutions that may be out there.
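The heart of it can be sketched like this (a minimal illustration rather than the actual project code; the component and field names here are made up):

```csharp
using UnityEngine;

// Sketch: pair each ControlledRobot joint with the matching avatar joint
// and copy rotations every frame.
public class JointSync : MonoBehaviour
{
    [System.Serializable]
    public struct BonePair
    {
        public Transform source; // ControlledRobot joint driven by ARKit
        public Transform target; // avatar joint to mirror
    }

    public BonePair[] bonePairs; // populated in the inspector

    void LateUpdate()
    {
        foreach (var pair in bonePairs)
        {
            if (pair.source == null || pair.target == null) continue;
            // Copy the local rotation; a per-bone offset would be needed
            // where the two rigs' rest orientations differ.
            pair.target.localRotation = pair.source.localRotation;
        }
    }
}
```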
Another avatar:
There are still issues with initial positions and offsets, especially when live on an iOS device, and models lose tracking and experience jitter every now and then, so it's still a work in progress.
Turns out the Unity water prefab from Standard Assets doesn't work with Single Pass Instanced on HoloLens (unless I close one eye). It works with Multi Pass, but still doesn't appear in a photo.
Armada is a tabletop game that uses miniatures to simulate combat between Star Wars capital ships and squadrons. And it turns out to translate really well to HoloLens. This post is mostly a series of notes on what I tried and what I learned along the way.
I started off using primitives for the ships: cubes and spheres. While this was workable, and I could have built the entire game this way, it was uninspiring. So I created 3D models using a Matter and Form scanner and the actual Armada miniatures. While I was eventually able to get reasonable quality scans, the process of converting those scans into clean, low-poly models that could be used in Unity required more time than I wanted to commit. So I switched to various open source models and edited them in Blender (which was its own learning adventure).
While Armada is played on a table, taking the game into 3D seemed obvious, though it required changes to the rules that impacted game balance: the number of shields a ship has, the number of batteries that can fire, the possible targeting surfaces, and moving damage from one shield to another. I'm not sure what I did would hold up in a production game, but it was fun to put myself in the game designer's shoes for a bit.
The Armada miniature scale turned out to be too small for comfortable gaze targeting. I experimented with various sizes and ended up at 2x the physical model scale. This resulted in a play area of a 2 meter cube, or a 2 x 2 x 3 meter box, which is what is shown in the videos.
I implemented many different user interface versions as I went, and many of them remain in the project (some are decent, others not so much), but by the end I kept coming back to voice. I tried to always have a gaze or gesture option available as well, but voice almost always ended up being simpler and more flexible.
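On HoloLens a voice command only takes a few lines with Unity's KeywordRecognizer; a minimal sketch (the phrases here are made-up examples, not my actual command set):

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

// Sketch of HoloLens voice commands via Unity's KeywordRecognizer.
public class VoiceCommands : MonoBehaviour
{
    KeywordRecognizer recognizer;

    void Start()
    {
        // Example phrases only; real handlers would dispatch per keyword.
        recognizer = new KeywordRecognizer(new[] { "fire batteries", "end turn" });
        recognizer.OnPhraseRecognized += args => Debug.Log("Heard: " + args.text);
        recognizer.Start();
    }

    void OnDestroy()
    {
        recognizer.Dispose();
    }
}
```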
Squadron movement ended up being quite difficult and was re-implemented a half-dozen times. Precisely moving objects in 3D, and placing them in relation to other objects, in many cases required physically moving around them to check perspective. I'm still not happy with the result.
Overall, I probably implemented 75% of the rule set, covering ship selection, tracking points, the movement tool, actual movement, combat, and victory scoring.
Some game features that are core to the game, such as command dials and targeting adjustments, I automated with an algorithm making the decisions. Others, such as upgrade cards and obstacles, I didn't even attempt. I also started work on sharing, so you could play with multiple headsets, but didn't manage to get it working using an emulator and a single HoloLens.
And, importantly, the game is actually pretty fun to play as is. Next up (maybe), D&D mass combat with holographic miniatures.
TL;DR Download Unity and create a project. Add a terrain game object. Size it to match the map proportions. Add the map as a texture. Use the fixed height terrain tools to paint the appropriate parts of the map to the right elevations. All done.
Well... and this is going to be a short post... it turns out to be really easy and not necessarily deserving of too much detail, but I'll pad it a bit to make it look a bit more impressive.
First off, I used Unity, which you can get for free. There are technical preview builds for AR/VR platforms; I used this one for HoloLens. Since I'm building for HoloLens, that build is required, though you could also use the HoloLens emulator. If you wanted, you could build for another platform instead: this process should work equally well for Oculus, Cardboard, etc., or even just for display within Unity or output for PC. It just wouldn't be a holographic virtual 3D map.
First I trimmed the edges of the Barovia map. This isn't technically necessary, but it helps reduce file and image sizes a bit. It left me with a map of 7050 x 4400 pixels, representing ~90K x ~56K feet, or 27,538 by 17,187 meters.
The map of Barovia has contour lines, which I used for elevation. The contour lines are 1000 feet apart, for 10K feet from lowest to highest elevation.
The next step was a bit of math to determine the display size. Unity uses arbitrary units, and HoloLens works on a scale of 1 Unity unit = 1 meter. I wanted the map to be big, so to fit (barely) in my living room I chose a scale factor of 2,000, giving a map size of 3.525 x 2.2 (7,050 / 2,000 by 4,400 / 2,000). That is both the Unity map size and the size, in meters, of the resulting hologram.
Within Unity I created a 3D terrain. Select the terrain object and go to settings. Set the length and width to 2.2 and 3.525 (the z and x axes). For the height, using the same scale as the length and width would end up around 0.05, but when I tested this it was less dramatic than I would have liked. After experimenting with various options I settled on a 4x vertical scale on the y axis relative to the x and z, which ended up at 0.2 height. With my vertical scale set, each increase of one contour line (1000 feet on the map) is an elevation increase of 0.02 in Unity.
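If you would rather set the dimensions from a script than in the inspector, something like this should work (a quick sketch using the numbers worked out above):

```csharp
using UnityEngine;

// Sketch: size the terrain from code instead of the inspector.
// x (width) = 3.525, z (length) = 2.2, y (max height) = 0.2
// for the 4x vertical scale described above.
[RequireComponent(typeof(Terrain))]
public class SizeBaroviaTerrain : MonoBehaviour
{
    void Start()
    {
        var terrainData = GetComponent<Terrain>().terrainData;
        terrainData.size = new Vector3(3.525f, 0.2f, 2.2f);
    }
}
```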
The Unity terrain tools are not well loved, and many people use third party tools, but the built-in tools are free and get the job done, if a bit slowly. With the terrain selected, choose the Paint Texture tool and then Edit Textures. Add the map (the map of Barovia, that is) and set it to the same size as the terrain itself (2.2 by 3.525).
Next, select the fixed height terrain tool. Use the hard-edge brush at 100 opacity, with the height set to the elevation you are painting (in contour lines) multiplied by 0.02; I tend to vary the size of the brush as I go. Then I painted the map, raising each contour section to its elevation. When roughly done there was a fair amount of touch up, but since the contour lines themselves stretch between the lower and higher elevations, they handle distortion well. Below are examples from a call out map of Yester Hill.
I also looked at smoothing the terrain between elevations, but that quickly started to look like a couple orders of magnitude more work, and a lot more artistic judgement, than I had time for. Once the painting was done, I set various HoloLens specific settings, then built and deployed to HoloLens. As HoloLens has built-in video capture, I could record the result and save it. And that was version one.
For additional versions I made it darker with a moon and moonlight, added spatial sound effects (sound that emanates from a specific location and varies by direction and distance), fog clouds from the Unity standard assets, lightning, and more.
I also added gaze events to open additional maps, such as this one of Tsolenka Pass. This map was generated at mini / battle map scale: 5 map feet = 1 physical inch. The map has 1000 feet of elevation, which results in a holographic object 16.67 feet tall (1000 / 5 = 200 inches).
As one of my first projects for HoloLens I decided to build a pit trap.
I went through various versions of this. Here is an earlier attempt.
First, I picked up some trap assets from the Unity asset store, then started messing around with a bunch of options. In Unity I spent a lot of time getting the center of the object positioned at the top of the pit, so I could place the pit on the floor, and then sizing the various assets.
Once I added a trap door, I spent way too long building a workable gravity hinge and a motor to close the lid based on your proximity to the pit, while starting and stopping the spike animation at the same time.
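Roughly, the proximity logic looked something like this (a simplified sketch; the distances, motor values, and names are illustrative rather than my exact settings):

```csharp
using UnityEngine;

// Sketch: drive a HingeJoint motor to close the lid when the camera
// (the player's head on HoloLens) gets near the pit, and start/stop
// the spike animation at the same time.
[RequireComponent(typeof(HingeJoint))]
public class PitLid : MonoBehaviour
{
    public float triggerDistance = 1.5f; // meters; illustrative value
    public Animator spikeAnimator;       // plays/stops the spike animation

    HingeJoint hinge;

    void Start()
    {
        hinge = GetComponent<HingeJoint>();
        hinge.useMotor = false; // gravity holds the lid open initially
    }

    void Update()
    {
        bool playerNear = Vector3.Distance(
            Camera.main.transform.position, transform.position) < triggerDistance;

        var motor = hinge.motor;
        motor.targetVelocity = playerNear ? -90f : 0f; // swing the lid shut
        motor.force = 50f;
        hinge.motor = motor;
        hinge.useMotor = playerNear;

        if (spikeAnimator != null)
            spikeAnimator.enabled = playerNear; // spikes run while the lid closes
    }
}
```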
I spent even more time with this not working before I sorted out that I needed a collider on the pit GameObject before I could use the TapToPlace script to move it. Then I had to keep that collider from blocking the pit door as it opened, make sure the collider wasn't hidden behind the spatial map once I placed the pit, and on and on.
By far the biggest challenge was working with spatial mapping and placement; I'll have a lot more to say about this later. Initially, I tried to remove the spatial map vertices where they intersected the pit and have the spatial map do the occlusion, to give the illusion that the pit trap was actually in the floor. Though I think I now understand why it didn't work, I wasn't able to sort it out at the time. So instead I added quads with an occlusion surface material, along the lines of the HoloLens Academy 230 project.
Finally, I used the web cam to take a picture of the floor near the pit trap and mapped it onto the surface of the door to make the door blend in.
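The capture side uses Unity's PhotoCapture API. A rough sketch of the idea (the namespace has moved between Unity versions, and doorRenderer is a stand-in for whatever renders the trap door):

```csharp
using System.Linq;
using UnityEngine;
using UnityEngine.Windows.WebCam; // UnityEngine.VR.WSA.WebCam in older Unity versions

// Sketch: photograph the floor with the HoloLens camera and apply the
// result to the trap door's material so the door blends in.
public class DoorCamouflage : MonoBehaviour
{
    public Renderer doorRenderer; // the trap door's renderer

    public void CapturePhoto()
    {
        PhotoCapture.CreateAsync(false, capture =>
        {
            var resolution = PhotoCapture.SupportedResolutions
                .OrderByDescending(r => r.width * r.height).First();
            var parameters = new CameraParameters
            {
                cameraResolutionWidth = resolution.width,
                cameraResolutionHeight = resolution.height,
                pixelFormat = CapturePixelFormat.BGRA32
            };

            capture.StartPhotoModeAsync(parameters, startResult =>
            {
                capture.TakePhotoAsync((photoResult, frame) =>
                {
                    // Copy the photo into a texture and map it onto the door.
                    var texture = new Texture2D(resolution.width, resolution.height);
                    frame.UploadImageDataToTexture(texture);
                    doorRenderer.material.mainTexture = texture;

                    capture.StopPhotoModeAsync(stopResult => capture.Dispose());
                });
            });
        });
    }
}
```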
When Google Glass first launched, I was excited. Despite the unfortunate (and, I think, unfair) stigma that became attached to Glass, it felt like a first step toward long awaited augmented and mixed reality devices. So I built a Glass app for KitchMe, and, being one of the first to launch, got my company some nice, pre-glasshole press. I spoke to Robert Scoble about it. Helped cook something for the WSJ. And then it was dead. And I was sad.
Then, suddenly, there was an explosion of VR / AR / mixed reality devices to play with. I picked up an Oculus Rift dev kit and played around with that. Tried out Google Cardboard and Project Tango. Read whatever I could find on Magic Leap.
And now I'm monkeying around with HoloLens and I love it. It is what I had thought Glass would be, or might become. I have no opinion, as of yet, as to which devices, platforms, or paradigms will win... or if any of this first generation will. But I'm happy to get started with whatever is available.
While not strictly necessary, I also downloaded and learned a bit about Blender, picked up a Matter and Form 3D scanner, and learned how to make some barely competent scans.
For me, this was almost entirely new, which is to say it has been a lot of fun learning a whole range of new apps and skills. Alongside all of this I was listening to Tim Ferriss interviewing Kevin Kelly, and this comment struck me:
Right now there are no experts in VR, we have no idea how it will work – what content, equipment, consumer breakthrough, etc. So anyone has the chance NOW to become that future expert
He discusses how to go about this: get the tools and start monkeying around (paraphrasing). Which is what I am doing, so, uh, confirmation bias I guess.
Anyway, I thought I would go ahead and lightly blog my comical attempts at building various mixed reality projects! Hopefully leading to a little self improvement, if nothing else. Fun for me, if not for thee.
In Part I, I discussed the process of getting this awesome and massively detailed map of Northern Faerun by Mike Schley onto Google Earth.
Going a little further than in previous posts, I wanted to see if I could display different maps at different zoom levels. For this I needed to create a KML file. This is actually easy to do manually: just create an xml file with a KML extension. You can get a lot of information on what is possible with KML files from Google. Also, KML files use decimal degrees, so I had to use the translated coordinates from previous posts.
I created two KML files. You could probably do it in one, but I had some issues with properly nesting the xml, so I separated them.
The first file just positions the same four maps I used in my other posts (though I'm using versions I created without text overlays). Getting the "href" path in the xml right on a Windows machine took a while to sort out, but otherwise this was fairly straightforward.
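Stripped down, each overlay looks roughly like this (the file name and coordinates are placeholders, not my actual values):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <GroundOverlay>
      <name>Sword Coast</name>
      <Icon>
        <!-- On Windows, a local path needs the file:/// prefix and forward slashes -->
        <href>file:///C:/maps/sword_coast.jpg</href>
      </Icon>
      <LatLonBox>
        <!-- Decimal-degree bounds from the translated coordinates -->
        <north>45.0</north>
        <south>40.0</south>
        <east>-70.0</east>
        <west>-75.0</west>
      </LatLonBox>
    </GroundOverlay>
  </Document>
</kml>
```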
Then I created a second KML to bring in a detailed view of the Dessarin Valley, with the zoom level set so it would only show up when you were in close. For this I used another of Mike Schley's maps. It took some work to get the edges more or less lined up and the scale to match, and I could have deleted the borders to clean it up a bit more, but in this case I was just testing out creating KML files, so I wasn't overthinking it. Clearly, it would also be easier with maps designed for this purpose (Mike!?)
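The zoom-dependent display comes from giving the overlay a Region with level-of-detail bounds, roughly like this (coordinates again placeholders):

```xml
<GroundOverlay>
  <name>Dessarin Valley</name>
  <Region>
    <LatLonAltBox>
      <north>43.0</north><south>41.5</south>
      <east>-72.0</east><west>-73.5</west>
    </LatLonAltBox>
    <Lod>
      <!-- Only draw the overlay once its region covers ~512+ screen pixels -->
      <minLodPixels>512</minLodPixels>
      <maxLodPixels>-1</maxLodPixels>
    </Lod>
  </Region>
  <Icon>
    <href>file:///C:/maps/dessarin_valley.jpg</href>
  </Icon>
  <LatLonBox>
    <north>43.0</north><south>41.5</south>
    <east>-72.0</east><west>-73.5</west>
  </LatLonBox>
</GroundOverlay>
```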
Here is the view zoomed out:
And zoomed in just a little:
I also added a placemark for Neverwinter, just to experiment with adding individual elements to the page. You can of course swap out the icons and position the text more elegantly, but this was enough for a test.
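A placemark is just a few lines; note that KML coordinates go longitude, latitude, altitude (the values here are placeholders):

```xml
<Placemark>
  <name>Neverwinter</name>
  <Point>
    <coordinates>-72.5,42.3,0</coordinates>
  </Point>
</Placemark>
```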