I haven’t yet been successful in getting media files to load correctly in Gramps Web. I’m using Docker and I created a volume to persist media files. Gramps Desktop was not able to upload the media files to the server, so I copied them over manually. Upon inspecting the database with Adminer, I noticed that all media references use my desktop path, which obviously doesn’t work on my server. I tried setting the paths to “relative” in the desktop version, but that didn’t change the paths in the web SQLite database, so I manually updated the database records to contain just the filename without the Windows path; however, media files are still not loading.

I tried making an API request with Postman to inspect the media objects, and in the API response I see that they are still using the Windows desktop paths. How can I get changes to the database to be reflected in the API? And are there any error logs in the Docker environment that I can inspect to see what exactly is happening?
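For anyone wanting to reproduce this, the equivalent curl request would be roughly the following (the port is the one mapped in my compose file further down; the credentials are placeholders, and jq is only used to trim the output):

# Get a token, then list media objects and show the path/checksum of the first one
TOKEN=$(curl -s -X POST http://localhost:32769/api/token/ \
  -H "Content-Type: application/json" \
  -d '{"username": "owner", "password": "secret"}' | jq -r .access_token)

curl -s -H "Authorization: Bearer $TOKEN" http://localhost:32769/api/media/ \
  | jq '.[0] | {gramps_id, path, checksum, mime}'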
Documentation: Import data - Gramps Web
See “import media files”.
I just tried zipping them and uploading the zip file via the “Import Media Files” button, but I get a 413 error (not surprising, the zip file is 263 MB!). How can I temporarily allow a large upload? Can I set an environment variable to override the Flask app’s MAX_CONTENT_LENGTH configuration?
Flask doesn’t limit the upload size by default. It depends on the setup you use, which you haven’t shared. Often it’s nginx that imposes the limit. The example in the documentation sets a 500 MB limit (web/examples/docker-compose-letsencrypt/nginx_proxy.conf at main · gramps-project/web · GitHub). If you haven’t set that, the nginx default is 1 MB…
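For reference, the relevant nginx directive in that example is client_max_body_size, set inside the proxy’s server (or location) block, e.g.:

# Allow request bodies up to 500 MB (the nginx default is 1m)
client_max_body_size 500m;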
I have succeeded in raising the limit in the nginx configuration. (I’ll make a note here in case anyone else happens to use Plesk: when setting “docker proxy rules” for a domain or subdomain, you can still add directives in the Apache & nginx settings area, and they will take effect alongside the Docker proxy, which itself uses nginx to proxy the requests.)
I have successfully uploaded the media zip file this way; however, I am still not seeing the image files in the “ancestor tree” view. I checked the /app/media volume and it is empty, so the zip file was not unzipped to this location. Are there any more steps that need to be taken after uploading the zip file with the Import -> Import Media Files option? I have tried force-refreshing the page, but to no effect.
I also entered the docker container and searched for any zip files but found none. I have tried, from the root directory:
root@e8a0bf143eaf:/# find . -print | grep -i '.*[.]zip'
root@e8a0bf143eaf:/# find . -name '*.zip'
root@e8a0bf143eaf:/# ls -R | grep '.*[.]zip'
but no results.
I have tried the same search with [.]jpg instead of zip, and it only finds the example JPEGs:
./usr/share/doc/gramps/example/gramps/E_W_Dahlgren.jpg
./usr/share/doc/gramps/example/gramps/O5.jpg
./usr/share/doc/gramps/example/gramps/O3.jpg
./usr/share/doc/gramps/example/gramps/Alimehemet.jpg
./usr/share/doc/gramps/example/gramps/O4.jpg
./usr/share/doc/gramps/example/gramps/1897_expeditionsmannschaft_rio_a.jpg
./usr/share/doc/gramps/example/gramps/654px-Aksel_Andersson.jpg
./usr/share/doc/gramps/example/gramps/O1.jpg
./usr/share/doc/gramps/example/gramps/O2.jpg
./usr/share/doc/gramps/example/gramps/O0.jpg
./usr/share/doc/gramps/example/gramps/Gunnlaugur_Larusson_-_Yawn.jpg
./usr/share/gramps/images/splash.jpg
How should I proceed from here?
This sounds like the volumes are not correctly shared between the task queue and the web API containers. But you still haven’t shared your setup, so it’s hard to say.
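For example, you could compare the mounts of the two containers with something like the following (replace the placeholders with your actual container names):

# The /app/media entries should point at the same host path or named volume
docker inspect -f '{{ json .Mounts }}' <web-api-container>
docker inspect -f '{{ json .Mounts }}' <celery-container>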
@JohnRDOrazio If you’d rather not share that data publicly, Discourse has Private Messaging options.
… and sorry for sounding impatient - long day
I’m not quite sure what you are expecting me to share. Here is the system info:
Gramps 5.1.6
Gramps Web API 1.4.1
Gramps.js 23.11.1
locale: en
multi-tree: false
task queue: true
I’m running the Gramps server on Ubuntu 20.04, using the Docker containers with docker compose. All three containers are running on the same grampsweb_default network:
my@server:/srv/gramps/media$ docker inspect grampsweb_celery -f "{{json .NetworkSettings.Networks }}"
{"grampsweb_default":{"IPAMConfig":null,"Links":null,"Aliases":["grampsweb_celery","grampsweb_celery","ef6125fc0b60"],"NetworkID":"7e359126a00ce8e54e017f4149c71c43c6cd12513caeb43d425d216a76f2b46b","EndpointID":"b164ea3459f7b786f1ba342d587ead62ef6007c09d7b8367dc0ebb74597c883d","Gateway":"172.21.0.1","IPAddress":"172.21.0.4","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:15:00:04","DriverOpts":null}}
my@server:/srv/gramps/media$ docker inspect grampsweb_redis -f "{{json .NetworkSettings.Networks }}"
{"grampsweb_default":{"IPAMConfig":null,"Links":null,"Aliases":["grampsweb_redis","grampsweb_redis","482d25b64eb8"],"NetworkID":"7e359126a00ce8e54e017f4149c71c43c6cd12513caeb43d425d216a76f2b46b","EndpointID":"6a5ef3510552a75b739dea12f4758c1419ee24a48f05b78f17f7d4504b01d306","Gateway":"172.21.0.1","IPAddress":"172.21.0.2","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:15:00:02","DriverOpts":null}}
my@server:/srv/gramps/media$ docker inspect grampsweb-grampsweb-1 -f "{{json .NetworkSettings.Networks }}"
{"grampsweb_default":{"IPAMConfig":null,"Links":null,"Aliases":["grampsweb-grampsweb-1","grampsweb","e8a0bf143eaf"],"NetworkID":"7e359126a00ce8e54e017f4149c71c43c6cd12513caeb43d425d216a76f2b46b","EndpointID":"ee46704ff085e297e39b90089ac4cde310871678a98286eff17b5d25443ef0bb","Gateway":"172.21.0.1","IPAddress":"172.21.0.3","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:15:00:03","DriverOpts":null}}
My docker compose file:
version: "3.7"
services:
grampsweb: &grampsweb
image: ghcr.io/gramps-project/grampsweb:latest
restart: always
ports:
- "32769:5000"
environment:
GRAMPSWEB_TREE: "Gramps Web" # will create a new tree if not exists
GRAMPSWEB_CELERY_CONFIG__broker_url: "redis://grampsweb_redis:6379/0"
GRAMPSWEB_CELERY_CONFIG__result_backend: "redis://grampsweb_redis:6379/0"
GRAMPSWEB_RATELIMIT_STORAGE_URI: redis://grampsweb_redis:6379/1
depends_on:
- grampsweb_redis
volumes:
- /srv/gramps/users:/app/users # persist user database
- /srv/gramps/index:/app/indexdir # persist search index
- /srv/gramps/thumb_cache:/app/thumbnail_cache # persist thumbnails
- /srv/gramps/cache:/app/cache # persist export and report caches
- /srv/gramps/secret:/app/secret # persist flask secret
- /srv/gramps/db:/root/.gramps/grampsdb # persist Gramps database
- /srv/gramps/media:/app/media # persist media files
- /srv/gramps/tmp:/tmp
grampsweb_celery:
<<: *grampsweb # YAML merge key copying the entire grampsweb service config
ports: []
container_name: grampsweb_celery
depends_on:
- grampsweb_redis
command: celery -A gramps_webapi.celery worker --loglevel=INFO
grampsweb_redis:
image: redis:alpine
container_name: grampsweb_redis
restart: always
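Since grampsweb_celery inherits the whole grampsweb service via the YAML merge key, the fully resolved configuration can be printed to double-check that the worker ends up with the same volume list:

# Render the compose file with anchors/merge keys resolved;
# grampsweb_celery should show the same /app/media mount as grampsweb
docker compose config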
I have verified that the grampsweb_redis container is using port 6379 internally; I believe there should be no need to expose a port, since the containers are on the same Docker network.
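Something like this should confirm the connectivity from inside the containers (assuming the redis Python client is available in the grampsweb image, which it should be since Celery uses it as its broker):

docker exec grampsweb_redis redis-cli ping
# expected output: PONG
docker exec grampsweb-grampsweb-1 python3 -c "import redis; print(redis.Redis(host='grampsweb_redis', port=6379).ping())"
# expected output: True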
Here is the console log info from the grampsweb_redis container:
1:C 15 Dec 2023 15:01:43.008 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:C 15 Dec 2023 15:01:43.008 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 15 Dec 2023 15:01:43.008 * Redis version=7.2.3, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 15 Dec 2023 15:01:43.008 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 15 Dec 2023 15:01:43.009 * monotonic clock: POSIX clock_gettime
1:M 15 Dec 2023 15:01:43.009 * Running mode=standalone, port=6379.
1:M 15 Dec 2023 15:01:43.010 * Server initialized
1:M 15 Dec 2023 15:01:43.010 * Loading RDB produced by version 7.2.3
1:M 15 Dec 2023 15:01:43.010 * RDB age 11 seconds
1:M 15 Dec 2023 15:01:43.010 * RDB memory usage when created 1.54 Mb
1:M 15 Dec 2023 15:01:43.010 * Done loading RDB, keys loaded: 7, keys expired: 0.
1:M 15 Dec 2023 15:01:43.010 * DB loaded from disk: 0.000 seconds
1:M 15 Dec 2023 15:01:43.010 * Ready to accept connections tcp
1:M 15 Dec 2023 16:01:44.034 * 1 changes in 3600 seconds. Saving...
1:M 15 Dec 2023 16:01:44.035 * Background saving started by pid 21
21:C 15 Dec 2023 16:01:44.038 * DB saved on disk
21:C 15 Dec 2023 16:01:44.038 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
Then it just continues repeating Background saving, DB saved and Fork CoW (there’s no Spoon SheeP though).
And here is the console log info from the celery container:
-------------- celery@ef6125fc0b60 v5.3.5 (emerald-rush)
--- ***** -----
-- ******* ---- Linux-5.4.0-152-generic-x86_64-with-glibc2.36 2023-12-15 15:01:50
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: default:0x7efd5d2d6dd0 (.default.Loader)
- ** ---------- .> transport: redis://grampsweb_redis:6379/0
- ** ---------- .> results: redis://grampsweb_redis:6379/0
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. gramps_webapi.api.tasks.export_db
. gramps_webapi.api.tasks.export_media
. gramps_webapi.api.tasks.generate_report
. gramps_webapi.api.tasks.import_file
. gramps_webapi.api.tasks.import_media_archive
. gramps_webapi.api.tasks.media_ocr
. gramps_webapi.api.tasks.search_reindex_full
. gramps_webapi.api.tasks.search_reindex_incremental
. gramps_webapi.api.tasks.send_email_confirm_email
. gramps_webapi.api.tasks.send_email_new_user
. gramps_webapi.api.tasks.send_email_reset_password
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
/usr/local/lib/python3.11/dist-packages/celery/platforms.py:829: SecurityWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=0 euid=0 gid=0 egid=0
warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
[2023-12-15 15:01:50,608: WARNING/MainProcess] /usr/local/lib/python3.11/dist-packages/celery/worker/consumer/consumer.py:507: CPendingDeprecationWarning: The broker_connection_retry configuration setting will no longer determine
whether broker connection retries are made during startup in Celery 6.0 and above.
If you wish to retain the existing behavior for retrying connections on startup,
you should set broker_connection_retry_on_startup to True.
warnings.warn(
[2023-12-15 15:01:50,621: INFO/MainProcess] Connected to redis://grampsweb_redis:6379/0
[2023-12-15 15:01:50,622: WARNING/MainProcess] /usr/local/lib/python3.11/dist-packages/celery/worker/consumer/consumer.py:507: CPendingDeprecationWarning: The broker_connection_retry configuration setting will no longer determine
whether broker connection retries are made during startup in Celery 6.0 and above.
If you wish to retain the existing behavior for retrying connections on startup,
you should set broker_connection_retry_on_startup to True.
warnings.warn(
[2023-12-15 15:01:50,629: INFO/MainProcess] mingle: searching for neighbors
[2023-12-15 15:01:51,643: INFO/MainProcess] mingle: all alone
[2023-12-15 15:01:51,656: INFO/MainProcess] celery@ef6125fc0b60 ready.
[2023-12-15 16:52:00,793: INFO/MainProcess] Task gramps_webapi.api.tasks.import_media_archive[502c216c-e50f-4a7b-adad-9989172e9264] received
[2023-12-15 16:52:04,009: INFO/ForkPoolWorker-4] Task gramps_webapi.api.tasks.import_media_archive[502c216c-e50f-4a7b-adad-9989172e9264] succeeded in 3.2137321531772614s: {'missing': 2, 'uploaded': 0, 'failures': 0}
Can I provide any more useful information?
David’s working GMT+1, so he might not see your new posting until tomorrow. If his 15 Dec. workday was overlong, he has probably long since retired for the evening.
Sure, no problem, there’s no hurry. In the meantime I started seeing something strange: in the Plesk interface the volumes appeared duplicated, the first time with the correct bind mount and the second time with an empty bind mount. I’m guessing it might have to do with grampsweb_celery merging the grampsweb service config. I tried restarting the Docker containers, and I tried pruning containers, images and volumes, but to no avail. When using named volumes the issue seemed to disappear, so I copied all data from the bind mounts to the named volumes and recreated the containers.

I’m no longer seeing duplicated volumes, so I tried importing media again, but still no luck: the Docker volume for media still shows as empty. Now when inspecting the media resource on the API with Postman I am seeing relative paths instead of absolute paths, but they still refer to paths from my desktop environment and not the server environment (for example /myuser/Documents/myTree_media/30953_148104-00404.jpg). I launched a bash session in the Docker container and searched again for JPEGs, but again all I found were the example JPEGs, as earlier.
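For reference, the named-volume variant looks roughly like this (only the volumes section of the grampsweb service and the top-level volumes block are shown; the volume names are my own choice, and everything else is unchanged):

services:
  grampsweb: &grampsweb
    # image, ports, environment and depends_on as before
    volumes:
      - gramps_users:/app/users
      - gramps_index:/app/indexdir
      - gramps_thumb_cache:/app/thumbnail_cache
      - gramps_cache:/app/cache
      - gramps_secret:/app/secret
      - gramps_db:/root/.gramps/grampsdb
      - gramps_media:/app/media
      - gramps_tmp:/tmp

volumes:
  gramps_users:
  gramps_index:
  gramps_thumb_cache:
  gramps_cache:
  gramps_secret:
  gramps_db:
  gramps_media:
  gramps_tmp:

Compose prefixes the actual volume names with the project name, so docker volume ls shows something like grampsweb_gramps_media; that is where the data from the old bind mounts needs to be copied before recreating the containers.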
Thanks for sharing the configuration and logs! What’s interesting is this line:
[2023-12-15 16:52:04,009: INFO/ForkPoolWorker-4] Task gramps_webapi.api.tasks.import_media_archive[502c216c-e50f-4a7b-adad-9989172e9264] succeeded in 3.2137321531772614s: {'missing': 2, 'uploaded': 0, 'failures': 0}
missing: 2 means that the Celery container already found files for all media objects except 2 (I guess you have more than two?), and it didn’t find those two in the ZIP file either (if they were in there, the checksum probably didn’t match the one stored in the Gramps database, which typically happens for files edited outside of Gramps, like Word/Excel documents).
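As an aside, if you want to check an individual file: Gramps identifies media by an MD5 checksum of the file contents, so you can compare the checksum stored for the object in the database (visible in Adminer) with a local copy of the file, e.g. for the file name from your earlier post:

# Compare against the checksum column of the corresponding media row
md5sum 30953_148104-00404.jpg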
I suggest using docker exec -it grampsweb_celery /bin/bash to go inside the Celery container and look at the /app/media directory there. If it has your files, you just need to figure out what is wrong with the bind mounts.
The /app/media folder is empty in the Celery container, and when searching for any JPEGs it finds only the example JPEGs, like before.
And yes, I have more than two media files, I have 530! I have not edited any of the files; they are exactly as they are in Gramps Desktop.
Here is a screenshot of what I’m seeing the whole time on my Gramps Web page:
In Gramps Desktop, on the other hand, I am seeing the actual media images.
Can you run the ZIP import again and look at the log once more? The missing: 2 entry we saw above might be from before you reset the volumes. If Celery says only 2 files are missing, it means it found the others; I don’t see another possibility.
I had in fact already run it again, and these are the logs:
[2023-12-16 03:56:00,254: INFO/MainProcess] Task gramps_webapi.api.tasks.import_media_archive[80f36249-deba-4559-a60e-23adcb256214] received
[2023-12-16 03:56:03,428: INFO/ForkPoolWorker-4] Task gramps_webapi.api.tasks.import_media_archive[80f36249-deba-4559-a60e-23adcb256214] succeeded in 3.1716511603444815s: {'missing': 1, 'uploaded': 0, 'failures': 0}
However, the media folder is still empty, whether I inspect grampsweb, grampsweb_celery or the Docker volume, and media is still not showing in the interface. I don’t understand why the logs say uploaded: 0; if media files were imported, shouldn’t it show the number of successful imports?
I tried again to sync media from Gramps Desktop using the Gramps Web Sync plugin; it said that the 500-odd media files were missing from the server and started uploading them, but this only finished in a “URL not found” error. And the /app/media folder within the grampsweb container still shows as empty…
Something is wrong with your volumes, but I don’t know what.
What happens when you upload a new media file through the web interface? Please check whether it appears in /app/media/, both in the grampsweb and celery containers.
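For example (container names taken from your docker inspect output above):

# Newest files in the media directory of each container
docker exec grampsweb-grampsweb-1 ls -lt /app/media | head
docker exec grampsweb_celery ls -lt /app/media | head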
I uploaded a new photo to the gallery of a person in the tree. I get nothing in the Celery logs; however, I do see a new file in the Docker volume: ba9005f43c0b0fea96b625934abc8a05.png. I also see the same file in /app/media in the grampsweb container, and in /app/media in the grampsweb_celery container.
Inspecting the database, I see an entry for the same media file:
Whereas all other entries seem to have an empty checksum:
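The equivalent check in SQL would be roughly the following (table and column names as they appear in my SQLite database when browsing with Adminer; they may differ in other setups):

-- Media rows whose stored checksum is empty
SELECT gramps_id, path, checksum
FROM media
WHERE checksum IS NULL OR checksum = '';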