How do I use PiGallery2 in 2021? #292
bpatrik started this conversation in Show and tell
Any updates to your workflow? Lots of features have changed since then. Perhaps the time is ripe for a 2023 update :)
How do I use PiGallery2
I figured I'd write a blog-ish entry about how I (the developer behind it) use PiGallery2: what my workflow is and how I see its future.
My setup
System: Raspberry Pi 4 Model B 4GB, SanDisk Mobile Ultra 32GB Class 10 UHS-I, HDD: Western Digital Elements 1TB (WDBUZG0010BBK)
OS: Raspbian GNU/Linux 10 (buster)
I have owned a Raspberry Pi with this WD HDD since 2013 and have been running one of my custom-made galleries on it ever since (currently PiGallery2. Fun fact: this is actually the 4th version; the first two never made it to GitHub).
I use the default docker-compose setup with the nightly-buster image (earlier I had random file system errors when the host OS and the docker image versions were different). I keep all settings on default (except that I use Mapbox instead of OpenStreetMap; it looks better). In other words: the default settings are optimized for the RPi4.
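For reference, such a compose file looks roughly like the one shipped with the project's docker documentation. The image tag and host paths below are only illustrative, so double-check them against the current README:

```yaml
version: '3'
services:
  pigallery2:
    image: bpatrik/pigallery2:nightly-buster
    container_name: pigallery2
    restart: always
    environment:
      - NODE_ENV=production
    volumes:
      - ./config:/app/data/config            # config.json is stored here
      - ./db:/app/data/db                    # SQLite database
      - /mnt/hdd/photos:/app/data/images:ro  # the gallery itself, mounted read-only
      - ./tmp:/app/data/tmp                  # generated thumbnails and converted videos
    ports:
      - "80:80"
```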
My workflow
Taking the photos
I like photography. I own a Canon 77D and a GoPro. I always take photos in RAW (I like to underexpose them a bit and fix it later when retouching; I found that this way the photo keeps a bit more detail).
Retouching the photos
I retouch all of my photos. I use Adobe Lightroom for this (I was considering switching to an open source app, but I was lazy). Without retouching, no photo can end up in my photo gallery. I also do not show my photos to anyone before I retouch them. (I'm strict about it; not as much as I was a few years ago, but still quite strict :) ).
* Once I have finished retouching them, I annotate them:
  * I assign keywords,
  * tag faces. Lightroom is moderately helpful with this. At some point I would like to implement face detection for pigallery2 #57. Until that is ready, I do it manually, so by the time it is ready, the ML model will already have some samples to classify the faces with.
  * set geolocation. I usually record a GPS track when I go for a hike or a walk, and I use that track to set the location of the photos. Luckily Lightroom can do that for me automatically.
Placing the photos
Once retouching is finished, photos go to their own folders. I use the following structure:
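As a hypothetical illustration (the folder and event names below are made up and not necessarily the exact structure used here), a simple year/event layout that maps well onto PiGallery2's folder-based navigation could look like this:

```
photos/
├── 2020/
│   └── 2020-08-14 Balaton trip/
└── 2021/
    ├── 2021-03-21 Spring hike/
    └── 2021-06-05 Birthday/
```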
I keep 3 replicas of all my photos: on my personal computer (I move it around a lot, it could just die anytime), on my RPi4 server (for pigallery2), and on an offline HDD (I happen to own a spare one, so just in case I back up my data there too). It is actually good practice to physically separate the backups.
So far my gallery is 170GB with more than 60K photos in it.
Making PiGallery2 recognize the new photos
Once I have copied the photos to my RPi, I let pigallery index them (I use SQLite; it has proved to be fast enough for me). If it's only one small album, I just navigate there in the app so it can recognize the new folder. If I add multiple new folders, I hit reindex in the settings. Unfortunately, reindexing takes a lot of time, as it rereads all the photos from the HDD. I'm considering implementing a "quick reindexing" that would only reindex a folder if its modified date changes.
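As a minimal sketch of that "quick reindexing" idea (this is not the actual PiGallery2 code, just the concept of comparing folder modification times against the last indexing run):

```typescript
import {promises as fs} from 'fs';

// Only re-scan a folder if its modification time is newer than the
// timestamp stored at the previous indexing run.
async function needsReindex(folder: string, lastIndexed: Date): Promise<boolean> {
  const stat = await fs.stat(folder); // folder mtime changes when files are added or removed
  return stat.mtime.getTime() > lastIndexed.getTime();
}

// Collect only the folders that changed since the last run;
// only these would be re-read from the HDD.
async function quickReindex(folders: string[], lastIndexed: Date): Promise<string[]> {
  const changed: string[] = [];
  for (const folder of folders) {
    if (await needsReindex(folder, lastIndexed)) {
      changed.push(folder);
    }
  }
  return changed;
}
```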
I do the reindexing with "jobs", so it also generates thumbnails and full HD preview images, converts videos and cleans up the temp folder. I do not schedule jobs to run reindexing periodically. Once it can be triggered on folder change and it can do "quick reindexing", I will probably give it a try. Until then I like to be in charge of when my RPi does anything (I do not trust it :D).
If the temp folder needs to be recreated (the temp photo name generation changed, or I got a new RPi that can handle videos with higher resolution), I use my own computer to run the app and generate all the converted photos and videos. (Converting one video on an RPi is OK, but multiple gigs of videos are :O ).
The filenames and paths in the temp folder only depend on the path and filename of the original file and the quality (e.g. resolution) of the conversion. So it is safe to do the heavy lifting on a stronger machine and just copy the result to the RPi.
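To illustrate why that works, here is a sketch of the property only (the real PiGallery2 naming scheme may differ): because the converted file's path is derived purely from the original path and the conversion quality, the same input yields the same output path on any machine, so a temp folder generated on a desktop can simply be copied over to the RPi.

```typescript
import {createHash} from 'crypto';
import * as path from 'path';

// Derive a converted-photo path from the original path and target size only.
// No machine-specific input is involved, so the mapping is portable.
function convertedPhotoPath(tmpDir: string, originalPath: string, size: number): string {
  const hash = createHash('md5').update(originalPath).digest('hex');
  return path.join(tmpDir, 'photo', `${hash}_${size}.jpg`);
}

// e.g. /app/data/tmp/photo/<hash>_1080.jpg, identical on the desktop and the RPi
console.log(convertedPhotoPath('/app/data/tmp', '/photos/2021/hike/IMG_0001.jpg', 1080));
```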
Using PiGallery2
At this point I have all my nicely edited photos in place; the app has indexed them, created thumbnails and previews, and converted the videos.
I have the following use-cases where I use the app:
Show the photos of an event: This is the most basic usage. I navigate to a folder and just show photos.
Keep track of where I have already been: I go to Faces -> click on my name, then click on the map. (Since the latest map cluster updates it finally works smoothly. Migrate from yaga-map to @asymmetrik/ngx-leaflet #256)
Grandma mode: My grandma loves watching old family photos. Since the Advanced Search ([Feature] Advanced search #58) is finally ready, I can search for queries like `2-of:(person:"name1" person:"name2" person:"name3" person:"name4")`. This shows all family photos that contain at least 2 of the 4 listed persons. I plan to implement "logical albums" ([Suggestion] Create a list of tags to create logical albums #45), which would be saved search queries placed in the gallery structure. Or some similar feature where I can save a family photo search query and just load it with one click. With the Angular 11 upgrade (Upgrade to latest angular version and update translation #255), pigallery finally works on our older smart TV, so my family can enjoy the photos on a big screen.
Get family photos from the last year (list and download them).
Future plans, nice-to-have features
I have plenty of ideas about the future of the app.
Disclaimer: No promises here. It's only a hobby project. Ever since I started working for real, I cannot look at code at night, so only the weekends are left. And I expect them to get busier once Covid is somewhat over. That is why I started inviting people to contribute and started writing CONTRIBUTING.md. I'm still cautious about this; we are talking about "my precious" here. :) I spent years on it. I'm particularly picky about the design and performance. My aim is to provide a high quality experience (like big-tech-company high quality) but using only a Raspberry Pi. And the UI should be "mom safe" (the average user should find everything intuitive). In other words: user experience > features.
So about the ideas:
* Showing all the photos of the gallery (or of a whole folder tree) on one page, but in big galleries (50k+ photos) both the backend and the frontend will be slow (e.g. the `fileExist` check). Furthermore, I'm not sure how a browser would behave if the app wants to show 50k+ photos on one HTML page.
* Rendering `README.md` files from the folders when you open a folder in the app. This way we could add some notes, a little story, to the individual folders (like memories about your holidays).
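A rough sketch of how the README.md idea could work on the backend (hypothetical, not an actual implementation): while listing a directory, also pick up a `README.md` if one exists, so the frontend could render it as a note above the folder's photos.

```typescript
import {promises as fs} from 'fs';
import * as path from 'path';

// Return the folder's README.md content, or null if the folder has no note.
async function readFolderNote(dir: string): Promise<string | null> {
  try {
    return await fs.readFile(path.join(dir, 'README.md'), 'utf8');
  } catch {
    return null; // no note for this folder
  }
}
```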