Using a NextWindow multi-touch screen
Keywords: NextWindow, touchscreen, multitouch, interaction
Author(s): Peter Eschler
Date: 2009-11-29
Summary: This tutorial explains how to use a NextWindow (multi-)touchscreen.
Introduction
NextWindow is a company developing optical touch screen technology. InstantReality supports NextWindow touchscreens on the Windows platform; there is currently no support on OS X or Linux.
Using an IOSensor to access raw multi-touch data
In the first example we create an IOSensor of type "NextWindow". You can find more information on the IOSensor node in the device tutorial. In order to use a NextWindow touchscreen simply add the following code to your scene:
Code: IOSensor for NextWindow touchscreens
<IOSensor DEF='nextWindow' type='NextWindow'>
  <field accessType='initializeOnly' name='device' type='SFString' value='0'/>
  <field accessType='outputOnly' name='Positions' type='MFVec3f'/>
</IOSensor>
The device field should be '0' unless you have more than one NextWindow screen connected to your system.
The Positions output field emits a new MFVec3f value whenever there are touches on the screen. The MFVec3f contains one entry for every touch. Each SFVec3f holds the 2D position on the touchscreen (in pixel coordinates) and an id. The id starts at 0, is incremented up to 31, and then wraps around to 0 again. The id can be used to detect new, moved, and removed touches. To interpret the raw data of the Positions field, the output is normally routed to a Script node.
In the Script you can implement your own logic for interpreting the touches, which gives you the greatest flexibility. However, this approach can be tedious. If you simply want to manipulate objects in your scene via multi-touch, try the NextWindowMultiTouch proto (described in the next section), which already implements the touch-handling logic.
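The following sketch shows what such a Script node could look like. It is a minimal illustration, not part of the tutorial's attached files: the field name touches and the bookkeeping logic are assumptions, and the script merely prints new and released touch ids to the console.

Code: Sketch of a Script node interpreting the raw Positions data

<Script DEF='touchLogic'>
  <field accessType='inputOnly' name='touches' type='MFVec3f'/>
  <![CDATA[ecmascript:
    var knownIds = {};  // touch ids seen in the previous event

    function touches(positions, ts) {
      var currentIds = {};
      for (var i = 0; i < positions.length; i++) {
        var x  = positions[i].x;  // touch position in pixels
        var y  = positions[i].y;
        var id = positions[i].z;  // touch id (0..31, then wrapping)
        if (!(id in knownIds))
          print('new touch ' + id + ' at ' + x + ' ' + y);
        currentIds[id] = true;
      }
      // ids that were known but are no longer reported were released
      for (var id in knownIds)
        if (!(id in currentIds))
          print('touch ' + id + ' released');
      knownIds = currentIds;
    }
  ]]>
</Script>
<ROUTE fromNode='nextWindow' fromField='Positions' toNode='touchLogic' toField='touches'/>

Note that the NextWindow driver does not signal the removal of the last finger (see the touchReleaseDuration field below), so the release branch above only fires once another touch event arrives.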
Using the NextWindowMultiTouch proto
In order to understand what the NextWindowMultiTouch proto (attached to this tutorial) does, please first read the generic Multi-Touch tutorial which explains the concept of a UserBody and how the different sensors (e.g. TouchSensor, HypersurfaceSensor) work together.
The NextWindowMultiTouch proto contains an IOSensor and some script logic which automatically adds, updates, or removes UserBody nodes to or from the X3D scene, based on the touch information delivered by the IOSensor. Using it is simple: just add this line to your scene:
Code: Using the NextWindowMultiTouch proto
<NextWindowMultiTouch displayPixelSize="1920 1080" touchReleaseDuration="74" />
The NextWindowMultiTouch proto offers the following fields:
- device: The device number, starting at 0 (the default). Unless you are using more than one NextWindow device, leave this at 0.
- positions: This output emits an MFVec4f whenever new touches are registered on the multi-touch screen.
- displayPixelSize: The resolution of the multi-touch screen (in pixels). Defaults to "1920 1080".
- allBlobsReleased: This output emits an SFTime when no touches are left on the screen, i.e. when all fingers have been removed (see the sketch after this list).
- touchReleaseDuration: This constant is used to determine when the last finger has been removed from the screen. The field is necessary because the NextWindow driver does not signal the removal of the last finger. Use values greater than 74.
- viewareaName: The name of the viewarea to use. Specify an empty string (the default) when working with only one viewarea.
- userBodyHot: If this field is set to true (the default), the UserBody nodes added to the scene are hot; if false, they are not.
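As a small illustration of the allBlobsReleased output, the following sketch routes it to a Script node that could, for example, reset the scene once all fingers have been lifted. The DEF and field names (mt, resetLogic, allReleased) are made up for this example.

Code: Sketch: reacting to allBlobsReleased

<NextWindowMultiTouch DEF='mt' displayPixelSize='1920 1080' touchReleaseDuration='74' />
<Script DEF='resetLogic'>
  <field accessType='inputOnly' name='allReleased' type='SFTime'/>
  <![CDATA[ecmascript:
    function allReleased(time) {
      // all fingers have been removed from the screen;
      // a real application might reset transformations here
      print('all touches released at ' + time);
    }
  ]]>
</Script>
<ROUTE fromNode='mt' fromField='allBlobsReleased' toNode='resetLogic' toField='allReleased'/>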
With the NextWindowMultiTouch in place you can use a HypersurfaceSensor on the subgraphs you want to manipulate. The HypersurfaceSensor reacts to the UserBody nodes added by the NextWindowMultiTouch proto and maps multi-touch gestures to its translation, scale, and rotation fields. In the following example the box can be translated and scaled using the translation (one finger) and scale (pinch) gestures.
Code: Complete example using a NextWindowMultiTouch proto
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE X3D PUBLIC "ISO//Web3D//DTD X3D 3.0//EN" "http://www.web3d.org/specifications/x3d-3.0.dtd">
<X3D xmlns:xsd='http://www.w3.org/2001/XMLSchema-instance' profile='Full' version='3.0'
     xsd:noNamespaceSchemaLocation='http://www.web3d.org/specifications/x3d-3.0.xsd'>
  <Engine desiredFrameRate="60">
    <RenderJob DEF='render'>
      <WindowGroup>
        <Window DEF='win1' fullScreen="true" drawCursor="true">
          <Viewarea DEF="viewarea" />
        </Window>
      </WindowGroup>
    </RenderJob>
  </Engine>
  <Scene DEF='scene'>
    <NavigationInfo type="none" interactionType="projection" />
    <Environment DEF="env" frustumCulling="false" shadowMode="none" syncOnFrameFinish="false" />
    <Viewpoint position="0 0 10" zNear="0.9" zFar="1000" />
    <ExternProtoDeclare name='NextWindowMultiTouch' url="NextWindowMultiTouch_PROTO.x3d" />
    <NextWindowMultiTouch displayPixelSize="1920 1080" touchReleaseDuration="74" />
    <Transform>
      <HypersurfaceSensor DEF="hsSensor" translationOffset="0 0 0" minScale="0.2 0.2 0.2" maxScale="2 2 2" />
      <Transform DEF="boxTrans" translation="0 0 5">
        <Shape>
          <Box />
          <Appearance><Material DEF="mat" /></Appearance>
        </Shape>
      </Transform>
    </Transform>
    <ROUTE fromNode="hsSensor" fromField="translation_changed" toNode="boxTrans" toField="translation" />
    <ROUTE fromNode="hsSensor" fromField="scale_changed" toNode="boxTrans" toField="scale" />
  </Scene>
</X3D>
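The example only routes the translation and scale outputs. If the box should also react to the rotation gesture, a third route can be added; assuming the rotation output follows the same naming pattern as the other fields (rotation_changed, which is not confirmed by this tutorial), it would look like this:

Code: Sketch: additionally routing the rotation gesture

<ROUTE fromNode="hsSensor" fromField="rotation_changed" toNode="boxTrans" toField="rotation" />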
You can use as many HypersurfaceSensor nodes as you want, but you only need one NextWindowMultiTouch proto.
Files: NextWindowMultiTouch_PROTO.x3d