The ARDL builds on the essential skills that students enjoyed in the VRDL, while providing a set of tools unlike any other educational software. The ARDL […]


The ARDL has a wide variety of uses: it can clearly demonstrate spatial concepts, temporal concepts, and contextual relationships between both real and virtual objects. Digital Tech Frontier has […]


What Is It? The ARDL is a revolutionary concept that makes virtual 3D objects appear in the real world, attached to real objects. Users look through a Virtual Reality […]


EAST Initiative 2012 Conference Winners

by admin on April 6, 2012


Virtual Reality Competition VRDL

Winner:  Mountain View High School – Allyn Irvin, Hennelly Irvin, Alexis Waters, Jamey Ragland, Bryan Davis, Kyle Buenrostro, Aaron White, Andrew Wilson

             Malvern High School – Brandon Crow, Dalton Hyde, Chase Norwood

             Roberts Elementary School – Rebecca Brogdon, Emma Choate, Benjamin Brandt, Jacob Collins, Evan Hankins, Emily Davault, Evelyn Perez, Dawson Cloud, Bryson Decker, Greyson Froberg, Jenny Zhang

Augmented Reality Competition ARDL

Winner:  Valley View High School – Ania Welman

             Sylvan Hills High School – Amber Wagner, Maddy White

Highlights from the Environmental and Spatial Technology Initiative Conference, held March 14–16, 2012 at the Hot Springs Convention Center, can be viewed online. Included in the video clip are interviews with Matt Dozier, president and CEO of EAST; students from around the state; and Dr. David Rainey, superintendent of the Dumas School District.



How the ARDL Works



A web camera is focused on a scene that contains camera tracking markers (essentially black-and-white squares that the software can recognize). Each person then looks through a handheld display system called a Virtual Reality POV, or at a monitor. What the person sees is exactly what the camera sees, except that the software replaces or enhances the camera tracking markers with virtual 3D images. The person can then interact with those 3D images by manipulating the markers, and that manipulation can also trigger narrations and sound effects. Markers can be made in a variety of sizes and attached to almost any flat surface, which means they can be used in many interesting and effective ways.
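The loop described above — recognize each marker in the camera frame, swap it for a virtual 3D asset, and fire a narration or sound effect when the user manipulates it — can be sketched in outline. This is only an illustration: the names (`Marker`, `REGISTRY`, `render_frame`) and the marker-to-asset mappings are invented here, and a real system would rely on a computer-vision library such as ARToolKit or OpenCV's ArUco module to do the actual marker tracking.

```python
# Hypothetical sketch of a marker-based AR frame loop. All names and
# marker IDs are invented for illustration; real marker detection would
# come from a vision library, not this toy code.

from dataclasses import dataclass

@dataclass
class Marker:
    marker_id: int   # which black-and-white square the tracker recognized
    x: float         # marker position in the camera image
    y: float

# Registry: each recognized marker ID maps to a virtual 3D asset and an
# optional narration triggered when the user manipulates that marker.
REGISTRY = {
    1: {"asset": "volcano_model", "narration": "eruption.wav"},
    2: {"asset": "solar_system", "narration": None},
}

def render_frame(detected, moved_ids=frozenset()):
    """Replace each detected marker with its virtual asset, and collect
    any narrations triggered by markers the user moved this frame."""
    overlays, sounds = [], []
    for m in detected:
        entry = REGISTRY.get(m.marker_id)
        if entry is None:
            continue  # unknown marker: leave the camera image unchanged
        overlays.append((entry["asset"], m.x, m.y))
        if m.marker_id in moved_ids and entry["narration"]:
            sounds.append(entry["narration"])
    return overlays, sounds

# One frame: the tracker sees markers 1 and 2; the user just moved marker 1.
overlays, sounds = render_frame(
    [Marker(1, 120.0, 80.0), Marker(2, 300.0, 150.0)],
    moved_ids={1},
)
print(overlays)  # [('volcano_model', 120.0, 80.0), ('solar_system', 300.0, 150.0)]
print(sounds)    # ['eruption.wav']
```

The key design point the paragraph implies is the indirection through a registry: because the marker is just an ID, the same physical square can be rebound to any model or sound, which is what lets markers of many sizes on many surfaces be reused across lessons.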