As previously shown, Gunnar Gerhardt has already made a lot of characters that are slowly making it into the digital world. Here is a test print of ‘Matchstickman’. I also printed it in white, but as you may know, the natural white ABS is almost photo-proof due to its translucency… so enjoy some more details in black:
By the way:
Matchstickman is around 110mm tall and comes from the 55mm original that was scanned with the David Laserscanner.
I like to remesh things in Blender using ‘Blocks’. You may recognize the original. The print above was printed at 0.2mm layers | 14mm/s, sliced with KISSlicer with Raft enabled. The raft was actually very easy to remove, even with SchorschXY’s Bowden extruder.
This isn’t Gunnar, but it was made by Gunnar. Gunnar is a friend who makes action figures and just made that little guy. He is made of wood and around 50mm high.
This isn’t David, but it was scanned using the David Laserscanner software with an Optoma PK301 and a Microsoft LifeCam Studio. It took about 1.5hrs to get the whole figure, but I think you can do it in 30min. Here he sits in Blender, reduced from 750k faces (50MB out of David) to 10% (75k faces).
This isn’t Schorsch but SchorschXY printed it.
That little guy hasn’t got a name yet, I think. How about “Lampshade”:
The last picture shows the two prints of Lampshade a little better than the earlier one:
The white spots are where the support stuck to the model; you can easily get rid of that using acetone and your finger… A few missing perimeters are caused by low-quality ABS (water bubbles) and a single-walled perimeter (left one). The right one is a reduced version with only ~7500 triangles.
Real pictures will follow.
This is a good day in the history of Blender AND 3D printing.
If you were envious of the guys that use special software to check models before printing, you can relax. There is something on the way.
Blender is now able to check your file and solve a few problems, or at least show you where problems may occur.
I think this feature should be extended to generate G-code too, using Cura or Skeinforge or Slic3r or even yomama. Furthermore, I want it to get a tab right beside the render tab! Yes, because there are people who do not want to render an object but print it. Make an object, print an object. Sure, it is fun to look at pictures of what it ‘could’ look like. But just as rendering, a pain in the ass for most people a decade ago, has become indispensable, so will 3D printing.
For now you can export the final mesh to the folder of your choice. That is fine as long as you do not own a printer; if you do, you would rather open it in the G-code generator of your choice with just one click.
In order to understand how to improve Kinect Fusion scans, I tried several things today.
First I wanted to digitize my bike:
First thing to say here: that’s not a bike! Half the wheels are missing! That’s right, they’re missing because I set the scan volume a little smaller than my bike. You can make it bigger, but you will lose resolution: my computer can handle 640 × 640 × 512 = 209,715,200 voxels. Put that into a cube of 1 m³ and you end up at almost 2mm per voxel.
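Here is that voxel arithmetic spelled out as a small Python sketch, using only the numbers from above (the 640 × 640 × 512 grid and the 1m, i.e. 1000mm, cube):

```python
# Voxel size when spreading a 640 x 640 x 512 Kinect Fusion grid
# over a cube of 1 m (1000 mm) per side.
voxels_x, voxels_y, voxels_z = 640, 640, 512
side_mm = 1000.0

total_voxels = voxels_x * voxels_y * voxels_z
print(total_voxels)                      # 209715200, as stated above

# The voxels are not cubic: the coarsest axis limits the detail.
sizes_mm = [side_mm / n for n in (voxels_x, voxels_y, voxels_z)]
print([round(s, 2) for s in sizes_mm])   # [1.56, 1.56, 1.95]
print(round(max(sizes_mm), 2))           # worst case: ~1.95 mm per voxel
```

So the bottleneck is the 512-voxel axis, which gives just under 2mm per voxel over a 1m cube.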
That’s it for now; next time it’s Schorsch time again…
Due to the structured light, the hand is recognized better than the keyboard (which looks inverted here).
Smoother surfaces are reproduced more accurately.
It’s hard to judge the resolution, as it seems quite nice at the hand, good at the (reflective) table and ‘funny’ (nerdy) at the keyboard, all at the same time! I have to measure it, but it’s obvious that edges won’t come out like planes: the lesson of structured light?
The scan took about 20–30 sec, resulting in a ~50MB STL. Scanned by me, not by my desktop or hand.
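As a rough sanity check on that file size: assuming the scan was saved as a binary STL (which has an 80-byte header, a 4-byte triangle count, and 50 bytes per triangle), ~50MB works out to about a million triangles. A back-of-the-envelope sketch, not anything from the scanner software itself:

```python
# Rough triangle count for a binary STL of a given file size.
# Binary STL layout: 80-byte header + 4-byte count + 50 bytes/triangle.
HEADER_BYTES = 80 + 4
BYTES_PER_TRIANGLE = 50

def stl_triangles(file_size_bytes: int) -> int:
    return (file_size_bytes - HEADER_BYTES) // BYTES_PER_TRIANGLE

print(stl_triangles(50 * 1024 * 1024))  # 1048574, i.e. ~1.05 million triangles
```

So a 50MB raw scan is noticeably denser than the 750k-face David scans mentioned earlier, and decimation before printing pays off here too.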
You may have heard that ‘Kinect Fusion’, as seen at SIGGRAPH 2011, is now available through Microsoft’s Kinect SDK. It’s now possible to scan larger objects with just a Kinect. I use a Kinect for Xbox 360 (with the power adapter for the old Xbox 360) on a dual-core AMD Athlon64 X2 5200+ @2.71GHz and a GTX 480. With that setup it is not simple to scan a whole human, because if you try to move the Kinect around the person, it loses tracking and rebuilds whatever you had already scanned.
The original resolution of the scan is around 2–3mm, as seen in the next picture:
After a bit of playing around with the demo, I guess you have to adjust clipping and resolution as well as the ‘dynamic’ of the object you want to scan. If I set all faders to maximum, I end up with quite a small scan volume. I don’t know if that is due to my 1.5GB NVIDIA card or the Athlon, or because it’s an Xbox 360 Kinect.
I didn’t find a method to scan smaller objects by rotating them in front of the camera yet, but this one works: set min and max clipping to ‘isolate’ your object, then the software can only use the object itself to reorient each frame:
This picture of me was taken by myself using a static Kinect and an office chair.
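The clipping trick above can be pictured as a simple per-pixel filter on the depth frame: everything outside the near/far range is thrown away, so only the isolated object is left for the tracker to reorient against. A minimal pure-Python sketch with made-up depth values in mm (not the actual Kinect SDK API):

```python
# Keep only depth samples between the near and far clipping planes;
# everything else becomes 0 ("no data"), leaving only the isolated
# object for frame-to-frame alignment.
def clip_depth(frame, near_mm, far_mm):
    return [[d if near_mm <= d <= far_mm else 0 for d in row]
            for row in frame]

# Toy 3x3 depth frame: the object sits around 600 mm,
# the background wall around 1500 mm.
frame = [[1500, 600, 1500],
         [ 610, 620,  615],
         [1500, 605, 1500]]

print(clip_depth(frame, near_mm=500, far_mm=800))
# background pixels become 0; only the object's depths survive
```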
For those who can’t imagine how the H-belt works: it’s not really CoreXY, but kind of close. One single belt drives both the X and Y axes on the Schorsch printer. I had that idea about 10 years ago while thinking about a way to fake signatures with a small pen plotter. I know I am not the first one to come up with that idea, but I don’t know why the guys over at CoreXY are crossing their belts…
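On paper, H-bot and CoreXY share the same kinematics: two motors share one belt path, and both motors contribute to X and Y at the same time. A sketch of the usual mapping, with my own naming rather than anything from the Schorsch firmware:

```python
# H-bot / CoreXY style kinematics: both motors move for every
# Cartesian move, and the carriage motion is the sum/difference.
def cartesian_to_motors(dx, dy):
    # Common convention: motor A sees dx+dy, motor B sees dx-dy.
    return dx + dy, dx - dy

def motors_to_cartesian(da, db):
    return (da + db) / 2, (da - db) / 2

# Pure X move: both motors turn the same way.
print(cartesian_to_motors(10, 0))   # (10, 10)
# Pure Y move: the motors turn in opposite directions.
print(cartesian_to_motors(0, 10))   # (10, -10)
# Round-trip check:
print(motors_to_cartesian(*cartesian_to_motors(3, 4)))  # (3.0, 4.0)
```

The crossed belt on CoreXY only changes the mechanical routing to keep the belt runs parallel; the motion equations stay the same.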