Commit d3d7918

fixed some margins and spacing error
1 parent 4be516d commit d3d7918

File tree

1 file changed: +13 −14 lines


index.html

Lines changed: 13 additions & 14 deletions
@@ -346,12 +346,12 @@ <h2 class="title is-3">Segmentation Examples</h2>
 <div class="columns is-centered has-text-centered">
 <div class="column is-four-fifths">
 <div class="publication-image">
-<img src="./static/images/dataset_masks_subset.png" alt="EgoNRG Dataset Gestures" style="width: 100%; max-width: 800px;">
+<img src="./static/images/dataset_masks_subset.png" alt="EgoNRG Dataset Gestures" style="width: 100%;">
 </div>
 </div>
 </div>
 <div class="columns is-centered has-text-justified">
-<p style="margin-top: 30px">
+<p style="margin-top: 20px">
 These are examples of the segmentation masks that were annotated for the EgoNRG dataset.
 You can seee the segmentation masks created for the joint hand-arm for both the left and right limbs.
 You can also see how the dataset has varying clothing conditions, light conditions, and background people visible.
@@ -383,7 +383,7 @@ <h2 class="title is-3">Gesture Classes</h2>
 <div class="columns is-centered has-text-centered">
 <div class="column is-four-fifths">
 <div class="publication-image">
-<img src="./static/images/gestures_overview_static.png" alt="EgoNRG Dataset Gestures" style="width: 100%; max-width: 800px;">
+<img src="./static/images/gestures_overview_static.png" alt="EgoNRG Dataset Gestures" style="width: 100%;">
 </div>
 </div>
 </div>
@@ -396,7 +396,7 @@ <h2 class="title is-3">Gesture Classes</h2>
 <source src="./static/videos/gif_gesture_examples.mp4"
 type="video/mp4">
 </video>
-<p style="margin-top: 30px">
+<p style="margin-top: 0px">
 Above is a video showing the 12 gestures being performed from the third-person viewpoint. This is just for reference.
 All gestures in the dataset were captured using the first-person point of view.
 </p>
@@ -438,16 +438,15 @@ <h2 class="title is-3">Example Videos</h2>
 <div class="content">
 <h2 class="title is-3">Example Viewpoints</h2>
 <div class="columns is-centered has-text-centered">
-<p style="margin-top: -60px"></p>
-<video id="dollyzoom" autoplay controls muted loop playsinline height="100%">
-<source src="./static/videos/combined_video_example_cropped.mp4"
-type="video/mp4">
-</video>
-<p style="margin-top: 0px">
-This video shows all four viewpoints that were captured in the dataset for each gesture performed by the participants.
-You can see that in each viewpoint, the part of the participants hand that is visible is different across viewpoints,
-hence providing more information for training models that can generalize to other egocentric vision platforms.
-</p>
+<video id="dollyzoom" autoplay controls muted loop playsinline height="100%">
+<source src="./static/videos/combined_video_example_cropped.mp4"
+type="video/mp4">
+</video>
+<p style="margin-top: 0px">
+This video shows all four viewpoints that were captured in the dataset for each gesture performed by the participants.
+You can see that in each viewpoint, the part of the participants hand that is visible is different across viewpoints,
+hence providing more information for training models that can generalize to other egocentric vision platforms.
+</p>
 </div>
 </div>
 </div>
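The same inline `style` tweaks recur on every image and caption in this commit. A sketch of one possible follow-up (not part of this commit; the `.caption-text` class name is hypothetical) would move those values into a shared stylesheet rule so spacing is tuned in one place:

```html
<!-- Hypothetical refactor: replace the repeated inline styles with rules.
     The .caption-text class is illustrative, not from the repository. -->
<style>
  /* Images scale with their Bulma column, as in the committed change */
  .publication-image img {
    width: 100%;
  }
  /* One place to adjust caption spacing instead of per-<p> margin-top values */
  .caption-text {
    margin-top: 20px;
  }
</style>

<div class="publication-image">
  <img src="./static/images/dataset_masks_subset.png" alt="EgoNRG Dataset Gestures">
</div>
<p class="caption-text">
  These are examples of the segmentation masks that were annotated for the EgoNRG dataset.
</p>
```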
