This is a rudimentary pinhole lightfield demo.

--- BUILD --- BUILD --- BUILD --- BUILD --- BUILD --- BUILD --- BUILD --- BUILD --- BUILD --- BUILD ---

Build freeglut-2.8.1
Build lua-5.2.3
Build vrpn-07.33

Run vs2012\MAKE_BUILD.txt

or

Load solution vs2012\000.sln into Visual Studio 2012
Build Debug and Release configurations

--- TODO --- TODO --- TODO --- TODO --- TODO --- TODO --- TODO --- TODO --- TODO --- TODO --- TODO ---

lefdy user guide

vrpn : add internal server for standalone operation

update README.md

create PPT slides of:
- latest LFD projection diagram
- configuration of QPI demo array

The 1st demo is not to impress; it is to educate and inspire discussion.
Once educated, then the 2nd demo can impress.

TableViewer

rename Viewer to "Panel"?

EventHandler to be generic, not use GLUT

filesystem & renaming tweaks

frame out CalibrationScene

Pose location to implicitly define Viewport location
(Viewport size/orientation is separate)

make DodecaScene into a more generic wireframe-geometry scene

post screengrabs of hogels to website.

block diagrams:
- single compute node
- whole system
- proposed ORE
- double frustum v. slice-n-dice

fix CRLF in MAKE_GIT_DESCRIBE

look at pinouts for LCD backlight


--- NOTES --- NOTES --- NOTES --- NOTES --- NOTES --- NOTES --- NOTES --- NOTES --- NOTES ---

/////////////////////////////////////////////////////////////////////////////

"ORE" = Objective Rendering Engine
A hardware architecture, distinct from a GPU, optimized for rendering hogels.
- parallel rendering of an array of Cameras from a single dispatch.
- rendering each view using Pinhole Projection, rather than 3 separate draw
  passes (forward perspective, orthographic epsilon, reverse perspective).
- ORE embedded in the photonics/optics package means each Camera can be
  calibrated once at manufacturing time, with those parameters flashed into ROM.
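
A hypothetical sketch (names invented here, not lefdy or Ostendo API) of what a
single-dispatch Camera-array submission to an ORE-like engine might look like:

    #include <cstddef>

    // Per-Camera pinhole parameters; on an embedded ORE these could be
    // calibrated at manufacturing time and read back from ROM.
    struct OreCameraParams {
        float positionMm[3];    // hogel center in Display Space [mm]
        float fovDegrees;       // pinhole field of view
        float zNearMm, zFarMm;  // clip distances in Display Space [mm]
    };

    // One dispatch describes the whole array; the engine renders every view
    // in parallel with a single pinhole projection per view (no 3-pass split).
    struct OreDispatch {
        const OreCameraParams* cameras;
        std::size_t            cameraCount;
        // scene handle, target surface, etc. omitted for brevity
    };

    void oreSubmit(const OreDispatch& dispatch);  // hypothetical entry point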

grep for the word "NUGGET" to find essential architectural concepts.

"DBU" = "Database Unit", the dimensionless units of OpenGL vertices.

"hogel" = holographic element (like pixel = picture element)
encodes color/intensity per direction.

/////////////////////////////////////////////////////////////////////////////
Per-platform compile-time defines:
-DLFD_PLATFORM_MSVC=1     // Microsoft Visual Studio
-DLFD_PLATFORM_LINUX=1    // GCC on Linux
-DLFD_PLATFORM_ANDROID=1  // Android
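
A minimal usage sketch of how these defines might be consumed from a shared
header (the header name and fallback behavior are illustrative, not from the
codebase):

    // lfd_platform.h (hypothetical): select per-platform includes from the defines above.
    #if defined(LFD_PLATFORM_MSVC)
    #  include <windows.h>        // Win32 / Visual Studio build
    #elif defined(LFD_PLATFORM_LINUX)
    #  include <unistd.h>         // GCC on Linux
    #elif defined(LFD_PLATFORM_ANDROID)
    #  include <android/log.h>    // Android build
    #else
    #  error "No LFD_PLATFORM_* define set (see list above)"
    #endif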

/////////////////////////////////////////////////////////////////////////////
// Simplifying assumptions regarding assignment of attributes between
// Viewer and Cameras. (In the future, some attributes could migrate
// from shared values in the Viewer to per-Camera values):
//
// 1) Camera FOV in X and Y directions is equal.
// 2) Camera FOV for near and far volumes is equal.
// 3) Camera size in X and Y directions is equal (ie aspect=1).
// 4) FOV for all Cameras in a Viewer is the same.
// 5) Size for all Cameras in a Viewer is the same.
// 6) zNear and zFar for all Cameras in a Viewer are the same.
// 7) Camera's image area (ie viewport) is circular.
// 8) Position of Camera is the center of its image area (ie viewport).
// 9) defn: zFar > zNear
// 10) defn: zFarEpsilon > 0
// 11) defn: zNearEpsilon == -zFarEpsilon
// 12) Rather than partition the glDepthRange, could instead
//     glClear(GL_DEPTH_BUFFER_BIT) between passes.
// 13) "Display Space" is the coordinate system of physical dimensions [mm]
// 14) defn: NearVolume is in -Z of Display Space, FarVolume is in +Z.
// 15) EpsilonVolume straddles -Z and +Z of Display Space, centered on 0.
// 16) defn: 0.0 <= _dNearEpsilon <= _dFarEpsilon <= 1.0
// 17) Volumes are defined in Display Space.
// 18) Viewer has ultimate ownership of SCISSOR_TEST and STENCIL_TEST
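
A small sketch of how these shared values might be captured in one place
(struct and member names here are hypothetical, not the actual lefdy types):

    #include <cassert>

    // Parameters shared by every Camera in a Viewer (assumptions 4-6 above).
    struct SharedCameraParams {
        float fovDegrees;      // one FOV: X==Y, near==far volumes (assumptions 1-2)
        float sizeMm;          // one size: X==Y, ie aspect=1 (assumption 3)
        float zNear, zFar;     // Display Space [mm]
        float zFarEpsilon;     // zNearEpsilon is implicitly -zFarEpsilon (assumption 11)
        float dNearEpsilon, dFarEpsilon;  // glDepthRange partition (assumption 16)
    };

    // Validate the definitional constraints (assumptions 9, 10, and 16).
    inline void validate(const SharedCameraParams& p) {
        assert(p.zFar > p.zNear);
        assert(p.zFarEpsilon > 0.0f);
        assert(0.0f <= p.dNearEpsilon && p.dNearEpsilon <= p.dFarEpsilon && p.dFarEpsilon <= 1.0f);
    }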

/////////////////////////////////////////////////////////////////////////////
From the SPIE Proceedings Abstracts, page 8
http://spie.org/Documents/ConferencesExhibitions/EI15-Abstracts.pdf

"Small form factor full parallax tiled light field display", Alpaslan et al.
Ostendo's QPI technology is an emissive display technology with:
- Each tile
  - 10 µm pixels = 0.01mm
  - 1000x800 pixels = 0.8 Mpix
  - 20x16 = 320 hogels/tile
  - 50x50 = 2500 pixels/hogel @ 0.5mm
- Tiled display
  - 4x2 array, 6.4 Mpix
  - approx 48x17x2mm w/ small gaps.
  - 80x32 = 2560 hogels
- 120 mm depth of field, 30 degree @ 60 Hz refresh rate.
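
The per-tile and tiled-array numbers above are mutually consistent; a quick
check (plain arithmetic, not project code):

    // Each tile: 1000x800 pixels at 0.01mm pitch, grouped into 50x50-pixel hogels.
    constexpr int tilePixX = 1000, tilePixY = 800;   // 0.8 Mpix/tile
    constexpr int hogelPix = 50;                     // 50x50 pixels/hogel -> 0.5mm pitch
    constexpr int hogelsX  = tilePixX / hogelPix;    // 20
    constexpr int hogelsY  = tilePixY / hogelPix;    // 16
    constexpr int tilesX   = 4, tilesY = 2;          // 4x2 tiled array
    static_assert(hogelsX * hogelsY == 320, "hogels per tile");
    static_assert(tilesX * tilesY * tilePixX * tilePixY == 6400000, "6.4 Mpix total");
    static_assert(tilesX * hogelsX * tilesY * hogelsY == 2560, "80x32 hogels total");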

/////////////////////////////////////////////////////////////////////////////
LCD panel

Adafruit 1931
https://www.adafruit.com/products/1931
7" Display 1280x800 IPS - HDMI/VGA/NTSC/PAL

    Resolution: 1280 x 800
    Dimensions of Screen: 105mm x 160mm x 3mm / 4.1" x 6.3" x 0.1"
    Visible area: 150mm x 95mm (16:10)
    Display dimensions: 162mm x 104mm x 4mm (6.4" x 4.1" x 0.2")
    Uses an HSD070PWW1 display

from HSD070PWW1-B00.pdf:
    Outline Dimension    161.2 (Typ) x 105.5 (Typ) mm
    Display area         150.72 (H) x 94.2 (V) mm
    Number of Pixel      1280 RGB (H) x 800 (V) pixels
    Pixel pitch          0.11775 (H) x 0.11775 (V) mm

    Mechanical dims:       Min     Typ     Max
    Horizontal (H)         160.9   161.2   161.5   mm
    Vertical (V)           105.2   105.5   105.8   mm
    Depth (D) w/o PCB      -       2.35    2.65    mm
    Depth (D) w/ PCB       -       4.2     4.5     mm
/////////////////////////////////////////////////////////////////////////////
Inforce 6410
http://www.inforcelive.com/index.php?route=product/product&product_id=53

    Snapdragon 600
    Krait CPU, 4-core, 1.7 GHz, 2MB L2 cache
    Adreno 320 GPU
/////////////////////////////////////////////////////////////////////////////
Looking ahead to something like OVR_multiview...

1) render each Camera to a layer of a texture array.
   OVR_multiview makes this efficient.
   but can it handle 500-1000 views and texture layers??
2) each Camera uses the entire texture; no scissor while rendering.
   texture can leverage multisampling if desired.
3) composite the Camera textures into a framebuffer using textured quads.
   this is essentially point sprites (need quad size and pose).
4) apply stencil while compositing?
5) various hogel corrections can be done during sprite composition:
   spatial offsets: translation in sprite location.
   aspect!=1: nonuniform scale of quad.
   chromatic: render each color layer w/ different sprite scale.
6) Camera textures don't use viewport coords to be composited;
   all compositing is done natively in Display space.
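
A minimal setup sketch for step 1, assuming a context that exposes
GL_OVR_multiview (numViews, width, height are placeholders; error checking
and depth attachment omitted):

    #include <GL/glew.h>   // or whatever GL loader the project settles on

    GLuint makeMultiviewTarget(GLsizei width, GLsizei height, GLsizei numViews)
    {
        // One 2D texture array: one layer per Camera view.
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
        glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, width, height, numViews);

        // Attach all layers at once; OVR_multiview broadcasts each draw to every view.
        GLuint fbo = 0;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
        glFramebufferTextureMultiviewOVR(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                         tex, 0, /*baseViewIndex*/ 0, numViews);
        return fbo;
    }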

/////////////////////////////////////////////////////////////////////////////

A "Pose" is defined as being in Display space.
It represents the position and orientation (ie XYZHPR) of an entity in
physical dimensions.

Viewers and Cameras in physical space are specified by a pose.

Viewers have a pose for specific limited reasons:
1) to determine which Cameras belong to them.
2) to position those Cameras within the Viewer.

In the current implementation, we assume all entities are coplanar,
and so their pose can be fully expressed using only X and Y 2D translation.
Non-coplanar (ie Z != 0, 3D translation) and non-parallel (rotation, differing
orientation) poses will come in the future if needed.
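
A tiny illustration of a Pose under the current coplanar assumption
(hypothetical type, not the actual lefdy class):

    // Pose in Display Space: position in mm plus heading/pitch/roll in degrees.
    struct Pose {
        float x = 0.0f, y = 0.0f, z = 0.0f;   // today only x,y are used (coplanar)
        float h = 0.0f, p = 0.0f, r = 0.0f;   // today all zero (parallel to display)
    };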

/////////////////////////////////////////////////////////////////////////////

There are different mechanisms possible for a Viewer implementation.

The current lefdy Viewer (circa Apr 2015) is a "direct viewport" renderer:
each Camera is rendered directly into the output framebuffer via separate
OpenGL viewports. Limitations: discrete resolution defined by the viewport
rect; requires scissor and stencil.
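
A sketch of the "direct viewport" inner loop under the assumptions above
(the Camera fields, viewer.cameras, and drawScene() are placeholders, not the
actual lefdy API):

    // For each Camera: confine rasterization to its rect, then draw the scene.
    for (const Camera& cam : viewer.cameras) {
        glViewport(cam.vpX, cam.vpY, cam.vpW, cam.vpH);
        glEnable(GL_SCISSOR_TEST);
        glScissor(cam.vpX, cam.vpY, cam.vpW, cam.vpH);
        // stencil would mask the circular image area (assumption 7)
        drawScene(cam);   // near/epsilon/far passes happen inside
    }
    glDisable(GL_SCISSOR_TEST);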

Another implementation is a "texture compositing" renderer: render each
Camera to a target texture, then composite the textures into the output
framebuffer using a textured quad or point sprite per Camera. The texture
dimensions are decoupled from the viewport pixel coordinates, and can support
additional manipulations such as scaling, oblique orientation, chromatic
compensation, shader computation during composition, stenciling via non-quad
geometry, etc.
This approach also maps well to the OVR_multiview extension.

/////////////////////////////////////////////////////////////////////////////