D3D Details
by John Kelly

Detailed information about Direct3D for Rapid-Q

Good quality 3D applications can be made in RapidQ. Although RapidQ is a byte-code interpreter, the 3D rendering is done via Microsoft's DirectX 6.0 and the Direct3D library. The speed is not greatly affected by the byte-code interpreter, and your program may run quite fast! DirectX and Direct3D form an API (application programming interface) that uses a "COM" format to communicate with the actual code that generates the graphics. It appears that RapidQ used an older package called DelphiX, which was written, of course, in Delphi (Pascal). All this happens behind the scenes for you. Direct3D Retained Mode should continue to work with versions of DirectX up to version 10 under Vista. Under Vista you need to include the D3DRM.DLL file in the same directory as your application. The file can be downloaded from here:
If you want RapidQ to do a lot of 3D graphics, then you can use OpenGL, or create your own DLL like the IRRLICHT wrapper available for FreeBASIC (www.freebasic.net).

About the D3D interface:
For RapidQ, Direct3D uses a DirectX screen to do all the drawing, which is why it is all included in QDXSCREEN. Direct3D (D3D) uses "Retained Mode," or D3DRM. Put simply, this means all information about 3D objects is retained in graphics/rendering memory and is not redrawn from scratch for each frame. While this makes programming a lot simpler than OpenGL (which requires you to draw each object point every time it changes rotation, position, etc.), you really can't change the shape of a single object just any way you want; it keeps its same shape.

Using D3D is complicated because it uses an "IUnknown interface," which is a COM object. This is NOT the same COM interface used for OLE automation (like Word, Excel, etc.). The Microsoft documentation names the interfaces like this: "IDirect3DRMObjects"
I = "Interface"  (the API)
Direct3D = you know this
RM = retained mode
Objects = like objects in Rapidq or QDXSCREEN

For example, if you want to know about the RapidQ mesh builder, William Yu documents that "QD3DMeshBuilder implements IDirect3DRMMeshBuilder." So in the Windows documentation (WIN32_2.HLP file) search for IDirect3DRMMeshBuilder, and that will be the best information you can get about QD3DMeshBuilder.

Although the documentation is for C++ programming, don't let that stop you. In C++ code the method is separated from the class by "::".
Example in C++:
IDirect3DRMMeshBuilder::AddFace (LPDIRECT3DRMFACE lpD3DRMFace)

In RapidQ it is simpler:  
QD3DmeshBuilder.AddFace(Face AS QD3Dface)

Since Visual Basic uses DirectX 9, why can't RapidQ just use those include files?

In Visual Basic the programmer loads a "table file" with a .tbl extension. This is a binary file that sets up the API calls, classes, procedures, constants, etc., needed to program in Direct3D. You can look at them with Microsoft Visual Studio. Doing this through the COM interface (QOLEOBJECT) part of RapidQ will never work because it is too complicated. Therefore, RapidQ will not be a 3D clone of Visual Basic. In fact, I don't see how RapidQ can improve in 3D without an external 3D engine that can be called through a standard DLL. This is a shame, because 3D in RapidQ is a LOT easier to learn than in C++, and even Visual Basic .NET.


STARTING D3D in RapidQ is easier than you think!

RapidQ does a LOT of things for you to program in 3D. You start with a few QDXScreen properties:
** NOTE: numbers in RapidQ D3D are usually of type DOUBLE !!!

BitCount = 32        ' 8, 16, 24, or 32 bits/pixel
Use3D = True         ' load the Direct3D Retained Mode object
UseHardware = 1      ' enable hardware acceleration
' ... etc.

The program makes a DirectDraw screen, and the Direct3D device draws to the DirectDraw screen. There are many things that RapidQ does for you in these few lines of code (this simple setup would be about 40 lines of C++ code). RapidQ also sets a viewport with IDirect3DRM::CreateViewport (for sizing the view) and sets a clipper object with IDirect3DRM::CreateDeviceFromClipper(), which sets the limits of rendering in 3D space. It sets up a color ramp model and sets the shading for lights, color ramps, and the number of texture colors, then adds a camera attached to a FRAME (see below), and creates off-screen buffers (memory buffers that are drawn to before you FLIP them to the DXScreen). All of the 3D objects are attached to a ROOT FRAME (also a QD3DFrame) that defines your position and orientation (among other things) in your 3D world. For instance, you will see this in RapidQ as DXScreen.SetCameraPosition(X, Y, Z) and DXScreen.SetCameraOrientation(0.35, -0.65, 1.0, -0.15, 1.0, 0.5). RapidQ takes care of the callbacks for you (like WM_PAINT and WM_ACTIVATE). This is the beauty of RapidQ, because this part is really ugly in C/C++.
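Putting those pieces together, a minimal skeleton might look like this. Only BitCount, Use3D, UseHardware, OnInitialization, and the two camera calls are taken from this chapter; the form layout, sizes, and camera values are assumptions for illustration:

```basic
' Minimal D3D setup -- a sketch, untested.
DECLARE SUB DXInitialize (Sender AS QDXScreen)

CREATE Form AS QFORM
    Caption = "RapidQ D3D"
    Width = 640: Height = 480
    CREATE DXScreen AS QDXSCREEN
        Width = 640: Height = 480
        BitCount = 32              ' 8, 16, 24, or 32 bits/pixel
        Use3D = True               ' load the Direct3D Retained Mode object
        UseHardware = 1            ' enable hardware acceleration
        OnInitialization = DXInitialize
    END CREATE
END CREATE

SUB DXInitialize (Sender AS QDXScreen)
    ' Point the camera at the scene (values here are arbitrary)
    Sender.SetCameraPosition(0, 0, -10)
    Sender.SetCameraOrientation(0, 0, 1, 0, 1, 0)
END SUB

Form.SHOWMODAL
```

From here you would create frames, lights, and a MeshBuilder inside DXInitialize, as described below.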

OK so how does RapidQ make 3D graphics?

There are a minimum of three main 3D objects that must be DIMed and then created:
- a MeshBuilder for holding the 3D data points,
- a Frame for rotating, sizing, and positioning the 3D objects relative to the camera, and
- a Light that illuminates the scene.
This is discussed in more detail below.


Thinking in 3D- Meshes 

Representing an object in 3D on a computer takes a little thought. We know a rock is solid, but for 3D graphics the surface is all that matters. A visual object is made up of a set of polygons (usually triangles), each defined by its corner points (one is a vertex; two or more are vertices). For DirectX, a vertex is a point in the x, y, z dimensions. Three points define a polygon, which looks like a flat triangle. Keep adding more triangles to this one, all at different angles, and you eventually form a mesh in 3D space that represents the surface of the object. When the object is only defined by vertices, it will, of course, look like a wireframe model. The computer needs to fill in the wireframe with a shaded surface that has a color, a bitmap texture, and a reflectance property.

In Direct3DRM, objects can be made up of a mesh. The mesh is a set of vertices that define polygon (triangle) faces. You add FACES to a MESH via the MESHBUILDER. The easiest way to do this is to load an .X model (e.g., MeshBuilder.Load("egg.x"), but more on this later). The "normal" of each face is the direction in x, y, z that is perpendicular to the face (also called orthogonal, or 90 degrees from the face). You should see why this is important for lighting and reflectance. Changing a vertex or normal that is used by several faces will change the appearance of all faces connected to it. The vertices can also be used to define two-dimensional coordinates within a texture map.



QD3DMeshBuilder (IDirect3DRMMeshBuilder)

QD3DMESHBUILDER makes up a visual object defined by a set of polygon faces with vertices and normals. You can think of the flat polygons as "faces"; they are on the surface of the object. You make up each face with QD3DFace (IDirect3DRMFace), which contains the vertices. Keep adding FACES to the MESHBUILDER until you make up the whole mesh of the object. MESHBUILDER will calculate the normals of each face for you. In RapidQ, you can only set the color of the whole face. (Normally you could set the color of each vertex, set a texture to those vertices, and set the reflectance, e.g., a shiny versus a dull material, for a single face or all faces in a mesh by using the SetColor, SetTexture, and SetMaterial functions.) RapidQ only supports SetRGBcolor. Textures are discussed below.

QD3DMeshBuilder: the heart of RapidQ 3D objects

William Yu set up QD3DMESHBUILDER (IDirect3DRMMeshBuilder) to do all of the work with 3D objects. QD3DMESHBUILDER will load and maintain your meshes. You can individually add VERTICES, FACES, and .X model meshes (a large collection of vertices, etc.) to your MESHBUILDER:

QDXScreen.CreateFace(Face AS QD3DFace)          ' create an empty face
QD3DMeshBuilder.CreateFace(Face AS QD3DFace)    ' also creates an empty face (untested?)
QD3DMeshBuilder.AddVertex(X#, Y#, Z#)
QD3DMeshBuilder.AddFace(Face AS QD3DFace)
QD3DMeshBuilder.Load(S AS String)               ' load a .x model file defining a mesh of vertices

So you can create a mesh yourself by adding each face (a polygon defined by vertices) to the mesh individually.

First you make the face, then add data points to it (at least three on the same face), then you add the face to the mesh via the MeshBuilder. If you do it yourself, you need to add each vertex to a face in clockwise order if you want the normal to come out toward you as you look at the face. The opposite order makes the normal point away from you AND THE FACE WILL NOT BE RENDERED UNLESS VIEWED FROM BEHIND, because of "face culling." This makes the render faster by not showing both sides. I don't know how to turn CULLING off... ** NOTE: This process can be really slow. It might help to DIM many QD3DMeshBuilders and load about 50 faces max into each one. **
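The process above can be sketched like this (a sketch, not tested code: it assumes DXScreen and MeshBuilder have already been created, and that QD3DFace exposes an AddVertex mirroring IDirect3DRMFace::AddVertex):

```basic
' One triangular face, built by hand.
' Vertices are added in CLOCKWISE order (as seen from the front)
' so that the normal points out toward the viewer.
DIM Face AS QD3DFace

DXScreen.CreateFace(Face)
Face.AddVertex(0.0, 1.0, 0.0)      ' top (assumption: QD3DFace.AddVertex)
Face.AddVertex(1.0, -1.0, 0.0)     ' bottom right
Face.AddVertex(-1.0, -1.0, 0.0)    ' bottom left
MeshBuilder.AddFace(Face)          ' the face is now part of the mesh
```

Repeat for each face until the whole surface is built.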

Obviously, unless you are making an object yourself (like a terrain), making faces by hand is difficult; the easiest thing to do is load an .X file with MeshBuilder.Load.

The file already has a mesh made up of faces in 3D. Yu mentions making meshes with Anim8or, but my favorite is OpenFX (www.openfx.org), a free and very powerful modeler. Then you convert these files (3DS, etc.) to a .x file:

CONV3DS -m My_3DS.3ds        (be sure to add the -m)
Or use another .x file compatible program. 


QD3DMesh    (IDirect3DRMMesh)

The QD3DMesh object really doesn't work in RapidQ. It must have been left unfinished. So this statement has no real purpose:
QD3DmeshBuilder.CreateMesh (Mesh AS QD3DMesh) 

 Unfortunately, RapidQ did not build upon QD3DMESH, which normally assigns characteristics such as materials, textures and grouping of vertices.

Changing 3D objects - Frames
A Frame, simply put, is a 4 x 4 matrix. That is not complicated; it is just 4 x 4 = 16 numbers. But the order of these numbers is crucial. All you need to know is that a Frame lets you SCALE, MOVE, and ROTATE the 3D object that you just made with a MeshBuilder. In RapidQ you use QD3DFrame (IDirect3DRMFrame). Basically, the vertices of the mesh are multiplied by the 4 x 4 matrix, and out comes the newly changed 3D object.
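For example, in Direct3D's row-vector convention, the "move" (translation Tx, Ty, Tz) is just the bottom row of the matrix; each vertex [x y z 1] is multiplied through:

```text
                 | 1   0   0   0 |
[x  y  z  1]  *  | 0   1   0   0 |  =  [x+Tx  y+Ty  z+Tz  1]
                 | 0   0   1   0 |
                 | Tx  Ty  Tz  1 |
```

Rotation and scale live in the upper 3 x 3 part of the same matrix, which is why the order of the 16 numbers matters.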


After the 3D objects in your scene are made (vertices, faces, and meshes in a MeshBuilder), you need to create one or more lights. A light is also a 3D object. Next you need to create some FRAMEs for your whole scene. All 3D shapes, and even lights, must be stored in a frame before they can be added to the world and manipulated. The camera has a frame too, but RapidQ does not let you work with the camera frame; it is hidden from you.

A FRAME is a "frame of reference." Frames are located in space, and they can have a rotation and a velocity, and they can be moved, or "transformed." QD3DFRAME provides a frame of reference that objects can be placed in. Visual objects are placed in a scene by taking their positions and orientations from the QD3DFRAME. So, if you have a set of 3D objects (visuals) that you want to move together around a scene, you would call IDirect3DRMFrame::AddVisual to add the visuals to the frame.

There is a parent frame that defines the whole scene. All the other frames are "children" of this frame. If you move a parent frame, then all the child frames move with it; for instance, the 3D objects, lights, and camera move along. Alternatively, you move an object by itself by just moving its own frame.

You must create the frames before you can use them. When you create a frame, this apparently loads the API into memory, allocates resources, etc. So your initialization should include some important calls:

QDXScreen.OnInitialization = DXInitialize

SUB DXInitialize (Sender AS QDXScreen)
    DXScreen.CreateFrame(MeshFrame)       ' probably the parent frame
    DXScreen.CreateFrame(LightFrame)
    DXScreen.CreateMeshBuilder(MeshBuilder)
    DXScreen.CreateLightRGB(D3DRMLIGHT_DIRECTIONAL, 0.9, 0.9, 0.9, Light)
    ' There is no CreateFrame for the camera -- RapidQ does it for you ...
END SUB

(where MeshFrame and LightFrame are DIMed AS QD3DFrame, MeshBuilder AS QD3DMeshBuilder, and Light AS QD3DLight)

The FRAMES are sorted into a hierarchy. RapidQ takes care of this for you. You can make a "child frame" relative to another frame so that the second 3D object will move with the parent frame.
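A sketch of what that might look like (untested; whether QD3DFrame exposes an AddChild mirroring IDirect3DRMFrame::AddChild, or whether RapidQ parents frames some other way, is an assumption; check QDXSCREEN.INC for the supported methods):

```basic
' Parent/child frames -- a sketch built on assumptions.
DIM ParentFrame AS QD3DFrame
DIM ChildFrame AS QD3DFrame

DXScreen.CreateFrame(ParentFrame)
DXScreen.CreateFrame(ChildFrame)
ParentFrame.AddChild(ChildFrame)   ' assumption: mirrors IDirect3DRMFrame::AddChild

' Moving the parent now carries the child along with it:
ParentFrame.SetPosition(0, 0, 15)
```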


Frames added to the parent frame are connected to it and move along with it when you move the parent frame. In the example above, ChildFrame.Rotate(x, y, z, a) will do nothing; you must do ParentFrame.Rotate(x, y, z, a), and both move together.

Remember, a QD3DFRAME is not really visible; it just does the moving (translation) and orienting for you. To render a mesh, you could load (or create) a mesh, then create a QD3DFRAME, and then add the MeshBuilder to the frame as a visual object with AddVisual():
QD3Dframe.AddVisual (Visual AS QD3DMeshBuilder/QD3DMesh)

One QD3DMeshBuilder can be added to multiple frames to create more than one of the same mesh.
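For instance, one loaded mesh can appear twice in the scene by attaching it to two frames (a sketch, untested; assumes MeshBuilder already holds a mesh, e.g. from MeshBuilder.Load):

```basic
' Two instances of the same mesh data.
DIM Frame1 AS QD3DFrame
DIM Frame2 AS QD3DFrame

DXScreen.CreateFrame(Frame1)
DXScreen.CreateFrame(Frame2)
Frame1.AddVisual(MeshBuilder)      ' same mesh...
Frame2.AddVisual(MeshBuilder)      ' ...rendered in two places
Frame1.SetPosition(-2, 0, 10)      ' left copy
Frame2.SetPosition(2, 0, 10)       ' right copy
```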

Now for the overall flow of operations (the order may not be important):

1) Create a frame of reference

2) Make a light

3) Make a 3D object (repeat for more objects)

4) Orient your camera's viewpoint:

DXScreen.SetCameraPosition (X#, Y#, Z#)
DXScreen.SetCameraOrientation (DX#, DY#, DZ#, UX#, UY#, UZ#)

In Direct3DRM the FRAME can have a velocity, set with the SetVelocity function (which actually is implemented in RapidQ, but we need to figure out how to use it!), and a frame rotation, set with the SetRotation function. The frame's objects will move to their new locations and rotate correspondingly before each scene is rendered. If you do not want your objects to move or rotate, then these values should be set to zero (the default). A frame will keep on rotating and moving as long as these values are nonzero. You can also call QD3Dframe.AddVisual to add a frame as a visual to another frame. This way, you can use a given hierarchy of frames many different times throughout a scene.
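For example, a frame can be given a constant spin once, and Retained Mode will keep applying it before every render (axis and angle values here are arbitrary; the signature is the one listed below):

```basic
' Continuous rotation: about 0.05 radians per rendered frame
' around the y axis. Set it once; it keeps applying until
' you set the angle back to zero.
MeshFrame.SetRotation(0, 1, 0, 0.05)
```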


Moving 3D objects in Space

By default, the origin is at the bottom left corner of your screen. In 3D space the positive x-axis points to the right and the positive y-axis points up. Z gets larger as you move away from the viewer, into the screen (see the figure above).

Moving a 3D object in space is a translation by changing POSITION.

Turning a 3D object is a ROTATION or ORIENTATION

Changing the size of a 3D object is a SCALE

You can move, rotate, or scale via the FRAME that the 3D object is added to:

QD3Dframe.SetPosition (X#, Y#, Z#)
QD3Dframe.SetOrientation (DX#, DY#, DZ#, UX#, UY#, UZ#)
QD3Dframe.SetRotation (X#, Y#, Z#, Theta#)
QD3Dframe.AddScale (CombineType%, X#, Y#, Z#)

Because a light is added to a frame you can move the light around too by its frame.

Or move the position of the Object itself with

QD3DmeshBuilder.Translate (TX#, TY#, TZ#)

(Note: there are many functions in IDirect3DRMMeshBuilder that are not supported by RapidQ.)


Textures in Retained Mode

You fill in the faces of a mesh with a texture to make it look realistic. A texture is simply a bitmap whose dimensions are powers of two (e.g., 256 x 256 pixels). With a texture, you can map patterns, such as bitmap pictures, onto the surfaces of objects, such as faces and meshes. You can add a texture to a mesh with QD3DMeshBuilder.SetTexture. RapidQ uses QD3DMeshBuilder (or QDXScreen and QD3DTexture) to load up a texture. The texture can be a 2D image in either the .bmp format or the .ppm format. Keep in mind that the image data in a bitmap (.bmp) file is stored upside down, while a .ppm file is right side up. To actually map the texture onto the mesh with the right sizing, you need to use the QD3DWrap object to specify the wrapping function. You then use QDXScreen.CreateWrap to assign texture coordinates to faces and/or meshes. You can move your textures by changing the U, V, or scaling coordinates! OK, here are the parameters:
DXScreen.CreateWrap(D3DRMWRAP_SPHERE, _   ' const defines how to do the texture mapping
    CenterX, CenterY, CenterZ, _          ' wrap origin of the texture coordinates in the model
    0, 1, 0, _                            ' the direction (z-axis) vector; determines orientation of the bitmap on the model
    0, 0, 1, _                            ' the up (y-axis) vector; numbers are -1 to +1
    bmpX, bmpY, _                         ' start location in the bitmap
    scaleX, scaleY, Wrap)                 ' scale the bitmap, and put the result in the QD3DWrap variable
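A hedged sketch of using the wrap (untested; the texture-loading call is not documented in this chapter, and Wrap.Apply is an assumption mirroring IDirect3DRMWrap::Apply):

```basic
' Spherically wrap a texture onto a mesh.
' Assumes MeshBuilder holds a mesh and Texture (a QD3DTexture)
' is already loaded -- the loading call is not shown here.
DIM Wrap AS QD3DWrap

DXScreen.CreateWrap(D3DRMWRAP_SPHERE, _
    0, 0, 0, _        ' wrap origin at the model's center
    0, 1, 0, _        ' direction vector
    0, 0, 1, _        ' up vector
    0, 0, _           ' start at the bitmap's origin
    1, 1, Wrap)       ' no extra scaling
Wrap.Apply(MeshBuilder)           ' assumption: mirrors IDirect3DRMWrap::Apply
MeshBuilder.SetTexture(Texture)   ' attach the bitmap to the mesh
```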


Texture Wrapping in Retained Mode

A wrap is used to calculate the texture coordinates for an object. To create a wrapping function for an object, you specify the type of wrapping used, the reference frame and origin, the direction vector, the up vector, a pair of scaling factors, and the origin for the texture coordinates. The wrapping function determines how the rasterizer module interprets the texture coordinates.

The different types of wrapping function that can be specified are:

- Flat (D3DRMWRAP_FLAT): the 2D image is mapped to a 2D object

- Cylinder (D3DRMWRAP_CYLINDER): the object is placed inside a hollow cylinder with the texture on the inner side of the cylinder. The cylinder is then collapsed onto the object

- Sphere (D3DRMWRAP_SPHERE): the object is placed inside a hollow sphere with the texture on the inner side of the sphere. The sphere is then collapsed onto the object

- Chrome (D3DRMWRAP_CHROME): also called environment mapping. It is similar to spherical mapping, but here the reflected ray of light is used to select the texture coordinate at a point on the object. If used on a .x model with vertex normals, the object will look like it is made of chrome!


Lighting and Shadows in Retained Mode

IDirect3DRM supports five types of light: ambient, directional, parallel point, point, and spot. Lights are used to illuminate the 3D objects based on the mesh's orientation to the light sources in the scene. An AMBIENT light illuminates an entire scene but creates a very flat look. A POINT light emanates in all directions from one location. A DIRECTIONAL light comes from one direction but has no set origin. A SPOT light produces light in the shape of a cone. Lights can be combined for rich illumination of the shape. You must first attach a light to a frame in order for it to illuminate 3D objects in a scene. Because the frame provides both orientation and position for the light, you can move and redirect a light simply by moving and reorienting the frame the light is attached to.
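Combining a dim ambient fill with a directional "sun" might look like this (a sketch, untested; D3DRMLIGHT_AMBIENT and D3DRMLIGHT_DIRECTIONAL are standard D3DRM constants, but QD3DFrame.AddLight is an assumption mirroring IDirect3DRMFrame::AddLight, and LightFrame is a frame created earlier):

```basic
' Two lights: a dim ambient fill plus a directional key light.
DIM AmbLight AS QD3DLight
DIM SunLight AS QD3DLight

DXScreen.CreateLightRGB(D3DRMLIGHT_AMBIENT, 0.2, 0.2, 0.2, AmbLight)
DXScreen.CreateLightRGB(D3DRMLIGHT_DIRECTIONAL, 0.9, 0.9, 0.9, SunLight)
LightFrame.AddLight(SunLight)    ' assumption: mirrors IDirect3DRMFrame::AddLight
' Aim the directional light by orienting its frame:
LightFrame.SetOrientation(0, -1, 1, 0, 1, 0)
```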

Vertex normals are used by the Flat, Gouraud, and Phong shading models (going from worst to best, but also slower) to give a smooth look to a polygonal object. RapidQ does the rendering without giving you special operations to generate or change normals.



QD3DLight represents a light source in a scene. A light source, on creation, has to be added to a frame. A light source can be any one of the following types:

Ambient
- the amount of light present at each point in the scene

Directional
- light rays have a direction and are parallel. The light is considered to be placed at an infinite distance, so it can be used to model a Sun-like light source

Parallel Point
- similar to Directional light, but the light is placed at a specified point

Point
- light is placed at a specified point and emits equally in all directions

Spot
- light is placed at a specified point and emitted as a cone, with the apex at that point


Shadows are created with the IDirect3DRM::CreateShadow function. Before creating and using a shadow, you first need to define a light source to produce the light. After you have added the light source to your scene, the shadow is handled by IDirect3DRM::CreateShadow.

There are many more options in D3D that RapidQ does not use. The major limitations are no support for 3D animation, transparent texture faces (called decals or billboards), progressive meshes, or custom vertex shaders, and no ability to morph your vertices in real time.
