more kinect-scans…

To understand how to scale up Kinect Fusion scans, I tried several things today.

First I wanted to digitize my bike:


First thing to say here: that's not a bike! The wheels are half missing! That's because I set the scan volume a little smaller than my bike. You can make the volume bigger, but you will lose resolution: my computer can handle 640 x 640 x 512 = 209,715,200 voxels. Spread that over a cube of 1 m³ and you end up at about 2 mm per voxel.
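The volume-versus-resolution trade-off is just a bit of arithmetic; here is a quick sketch using the numbers from above (the 1 m cube side is the example from the text, not a fixed SDK value):

```python
# Trade-off between scan-volume size and voxel resolution.
# Numbers from the post: 640 x 640 x 512 voxels over a 1 m cube.

def voxel_size_mm(side_mm, voxels_per_axis):
    """Edge length of one voxel along an axis, in millimetres."""
    return side_mm / voxels_per_axis

total_voxels = 640 * 640 * 512
print(total_voxels)                # 209715200
print(voxel_size_mm(1000.0, 512))  # ~1.95 mm on the coarsest axis
print(voxel_size_mm(2000.0, 512))  # double the volume side -> ~3.9 mm
```

Doubling the side length of the scan volume doubles the voxel size, so a volume big enough for a whole bike quickly gets coarse.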




That's it for now, next time it's Schorschtime again…

…’bout the resolution, baby…


Due to the structured light, the hand is recognized better than the keyboard (which looks inverted here).

Smoother surfaces are reproduced more accurately.

It's hard to judge the resolution, as it seems quite nice at the hand, good at the (reflective) table, and 'funny' (nerdy) at the keyboard, all at the same time! I still have to measure it, but it's obvious that edges won't come out as well as planes; the lesson of structured light?

The scan took about 20-30 s and resulted in a ~50 MB STL. Scanned by me, not by my desktop or hand.

Finally: Kinect Fusion





You may have heard that 'Kinect Fusion', as seen at SIGGRAPH 2011, is now available through Microsoft's Kinect SDK. It's now possible to scan larger objects with just a Kinect. I use a Kinect for Xbox 360 (with power adapter, for the old Xbox 360) on a dual-core AMD Athlon 64 X2 5200+ @ 2.71 GHz and a GTX 480. With that setup it is not easy to scan a whole human, because if you try to move the Kinect around the person, it loses tracking and rebuilds whatever you had already scanned.

The original resolution of the scan is around 2-3 mm, as seen in the next picture:

After a bit of playing around with the demo, I guess you have to adjust the clipping and resolution, as well as match them to the 'dynamics' of the object you want to scan. If I set all faders to maximum, I end up with quite a small scan volume. I don't know if that is due to my 1.5 GB NVIDIA card, the Athlon, or because it's an Xbox 360 Kinect.

I haven't yet found a method to scan smaller objects by rotating them in front of the camera, except this one: set the min and max clipping to 'isolate' your object; then the software can only use the object itself to reorient each frame:
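The clipping trick boils down to discarding every depth pixel outside a near/far window, so only the object remains for tracking. Here is a minimal sketch of that idea; the function name and the millimetre values are made-up examples, not the SDK's actual parameters:

```python
def clip_depth(depth_mm, near_mm, far_mm):
    """Zero out every depth pixel outside [near_mm, far_mm].

    Kinect depth frames use 0 for 'no reading', so clipped pixels
    end up looking like invalid ones to the reconstruction.
    """
    return [[d if near_mm <= d <= far_mm else 0 for d in row]
            for row in depth_mm]

# Toy 2x3 'depth frame' in millimetres (hypothetical values):
frame = [[400, 800, 1500],
         [900, 2500, 650]]
print(clip_depth(frame, near_mm=600, far_mm=1000))
# -> [[0, 800, 0], [900, 0, 650]]
```

With the background and the turntable operator clipped away, each new frame can only be aligned against the object itself, which is exactly what the static-Kinect-plus-office-chair setup relies on.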

This picture of me was taken by myself, using a static Kinect and an office chair.