Experimental evaluation of the accuracy of the "second generation" Microsoft Kinect system for use in stroke rehabilitation applications
Mohammad Hossein Saadatzi

Home-based Stroke Rehabilitation Protocols with Kinect

There is interest in Microsoft Kinect as an interface for home-based stroke rehabilitation protocols involving game-like movement exercise tasks. Potential benefits of such protocols are:
• making therapy financially accessible to a large population of patients
• enabling objective evaluation and remote tracking of patient progress
• increasing patient motivation to complete the repetitive movement tasks integral to motor function recovery.
An experimental evaluation of the spatial accuracy, latency, and capture rate of the motion capture data obtained from the Kinect is a critical validation step for these applications.

Experimental Evaluation of Kinect

In this project, I report the results of an experimental evaluation of Kinect v2 as a motion capture interface. The results of two different algorithms for computing the 3D coordinates of targets (using the RGB image and a known model vs. using the Kinect depth sensor) were compared. Trajectories of the knee and wrist joints were recorded with both the Kinect and an OptiTrack motion capture system.

Previous related papers:
[1] D. Webster and O. Celik, "Systematic review of Kinect applications in elderly care and stroke rehabilitation," Journal of NeuroEngineering and Rehabilitation, vol. 11, no. 108, pp. 1-21, July 2014.
[2] D. Webster and O. Celik, "Experimental evaluation of Microsoft Kinect's accuracy and capture rate for stroke rehabilitation applications," in Proc. IEEE Haptics Symposium (HS 2014), 2014, pp. 455-460.
[3] L. Pedro and G. Caurin, "Kinect evaluation for human body movement analysis," in Proc. IEEE RAS/EMBS, June 2012, pp. 1856-1861.

Kinect v1 vs. Kinect v2

• A well-known improvement of the new Kinect for Windows sensor is the higher resolution of its color and depth streams.
• The new version of the Kinect also uses a different mechanism for measuring depth:
  • Kinect v2 uses time-of-flight as its core depth-retrieval mechanism: each pixel in its 512×424 depth image contains a directly measured depth value (z-coordinate) with much higher precision than the depth image of Kinect v1.
  • The depth image of the old Kinect is based on the structured-light technique, which results in an interpolated depth image built from far fewer samples than its resolution suggests.

Feature              | Kinect for Windows v1 | Kinect for Windows v2
Color camera         | 640×480 @ 30 fps      | 1920×1080 @ 30 fps
Depth camera         | 320×240               | 512×424
Max depth distance   | ~4.5 m                | ~4.5 m
Min depth distance   | 40 cm                 | 50 cm
Horizontal FOV       | 57 degrees            | 70 degrees
Vertical FOV         | 43 degrees            | 60 degrees
Tilt motor           | yes                   | no
Skeleton joints      | 20 joints             | 25 joints
Skeletons tracked    | 2                     | 6
USB standard         | 2.0                   | 3.0
Supported OS         | Win 7, Win 8          | Win 8

OptiTrack motion capture system

• Passive marker-based system consisting of eight V100:R2 cameras.
• Data processing software: OptiTrack Tracking Tools 2.5.0.
• The marker clusters were created using three sets of markers 7/16" in diameter.
• Data were recorded in mm at a sampling rate of 100 Hz.
• The recorded data are resampled to 30 Hz to match the Kinect's average capture rate (a sketch of this step follows this list).
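As a rough illustration of the resampling step above, the C++ sketch below linearly interpolates a 100 Hz OptiTrack trajectory down to the Kinect's ~30 Hz capture rate. The function name, data layout, and the sample trajectory in main() are hypothetical, assume uniformly spaced samples, and are not the project's actual code.

// resample_sketch.cpp -- illustrative only; not the project's actual code.
// Linearly resamples a 100 Hz OptiTrack marker trajectory to ~30 Hz so that
// samples can be compared against the Kinect stream index by index.
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdio>
#include <vector>

using Point3 = std::array<double, 3>;  // x, y, z in mm (OptiTrack convention)

// Resample a uniformly sampled trajectory from srcHz to dstHz by linear
// interpolation between the two nearest source samples.
std::vector<Point3> resampleLinear(const std::vector<Point3>& src,
                                   double srcHz, double dstHz) {
    if (src.size() < 2) return src;
    std::vector<Point3> dst;
    const double duration = (src.size() - 1) / srcHz;             // seconds covered
    const std::size_t nOut = static_cast<std::size_t>(duration * dstHz) + 1;
    dst.reserve(nOut);
    for (std::size_t i = 0; i < nOut; ++i) {
        const double t = i / dstHz;                                // output time (s)
        const double s = t * srcHz;                                // fractional source index
        const std::size_t k =
            std::min(static_cast<std::size_t>(s), src.size() - 2); // left neighbour
        const double a = s - k;                                    // interpolation weight
        Point3 p;
        for (int d = 0; d < 3; ++d)
            p[d] = (1.0 - a) * src[k][d] + a * src[k + 1][d];
        dst.push_back(p);
    }
    return dst;
}

int main() {
    // Hypothetical 1-second, 100 Hz trajectory moving along x.
    std::vector<Point3> optitrack;
    for (int i = 0; i <= 100; ++i)
        optitrack.push_back({i * 1.0, 0.0, 0.0});
    const auto resampled = resampleLinear(optitrack, 100.0, 30.0);
    std::printf("resampled %zu -> %zu samples\n",
                optitrack.size(), resampled.size());
    return 0;
}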
Method

Coding was done in C++ using:
• the OpenCV library
• the Kinect for Windows SDK 2.0:
  • IColorFrameReader
  • IDepthFrameReader
  • IInfraredFrameReader
  • IBodyFrameReader
  • ICoordinateMapper (MapCameraPointToColorSpace, MapColorFrameToCameraSpace)
Plotting was done in MATLAB.

Results: first comparison

Two different algorithms were implemented:
• using the RGB image and a known model (OpenCV solvePnP function; a sketch of this step appears at the end of this document)
• using the Kinect depth sensor (ICoordinateMapper)
Video: http://youtu.be/Khq5x54y1eY
[Figure: 3D target trajectory computed using the depth sensor vs. using the RGB image; axes x, y, z in m.]

Results: second comparison

The joint positions were measured using:
• the OptiTrack motion capture system
• the Kinect system (IBodyFrameReader)
Video: http://youtu.be/jBIPaTpNuTQ
[Figure: joint trajectory measured by the Kinect sensor and the OptiTrack system; axes x, y, z in m.]

Tracking the wrist and elbow joints. Video: http://youtu.be/m-ocUnBsTLE
[Figure: wrist and elbow trajectories measured by the Kinect sensor and the OptiTrack system; axes x, y, z in m.]

Conclusion and Future work

Conclusion:
• I evaluated the accuracy of the Kinect v2 motion capture system with two different methods.
Challenges and next steps:
• Find a proper way to align the depth image with the RGB image.
• Find the transformation between the Kinect and OptiTrack coordinate systems.
• Repeat the comparisons for the infrared image.
• File the IRB forms and collect data to complete the comparisons.

Thanks for your attention. Questions?
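Supplementary sketch (referenced from the first-comparison slide): a minimal C++/OpenCV example of how the "RGB image + known model" approach can recover a target's 3D position with cv::solvePnP. The marker model size, the detected pixel coordinates, and the Kinect v2 color-camera intrinsics below are assumed placeholder values; this is not the project's detection or calibration code.

// pnp_sketch.cpp -- illustrative only; model points, image points, and
// camera intrinsics are hypothetical placeholders.
// Given the 2D pixel locations of a known planar target in the color image,
// cv::solvePnP recovers the target's 3D pose in the camera frame.
#include <cstdio>
#include <vector>
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>

int main() {
    // Known model: corners of a 10 cm square target, in metres, expressed in
    // the target's own coordinate frame (hypothetical target size).
    std::vector<cv::Point3f> modelPoints = {
        {-0.05f, -0.05f, 0.0f}, { 0.05f, -0.05f, 0.0f},
        { 0.05f,  0.05f, 0.0f}, {-0.05f,  0.05f, 0.0f}};

    // Corresponding pixel coordinates in the 1920x1080 color image
    // (placeholder values; in practice these come from marker detection).
    std::vector<cv::Point2f> imagePoints = {
        {905.f, 520.f}, {1015.f, 522.f}, {1013.f, 632.f}, {903.f, 630.f}};

    // Approximate pinhole intrinsics for the Kinect v2 color camera
    // (assumed values; a real calibration should be used instead).
    cv::Mat K = (cv::Mat_<double>(3, 3) << 1060.0,    0.0, 960.0,
                                              0.0, 1060.0, 540.0,
                                              0.0,    0.0,   1.0);
    cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);  // ignore lens distortion

    cv::Mat rvec, tvec;  // rotation (Rodrigues vector) and translation of the target
    if (cv::solvePnP(modelPoints, imagePoints, K, distCoeffs, rvec, tvec)) {
        // tvec is the target origin in camera coordinates (metres), i.e. the
        // 3D position that is compared against the depth-sensor-based result.
        std::printf("target at x=%.3f y=%.3f z=%.3f m\n",
                    tvec.at<double>(0), tvec.at<double>(1), tvec.at<double>(2));
    }
    return 0;
}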