1. In the `docker-compose.dev.yml` file, increased ulimits for the containers that use TS code (see the sketch after this list). This was one of the reasons for failures in my Podman (on macOS/arm64) environment. It wasn't the only failure with Podman, and I didn't investigate further (I switched to Docker on Linux/amd64 afterwards), but it can still help others.
2. Added a `make dev-down` target to perform a `docker-compose down` on the dev environment
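A minimal sketch of the kind of `ulimits` override this refers to, for one of the Node/TS services in `docker-compose.dev.yml`; the service name, the choice of `nofile`, and the limit values are illustrative assumptions, not the exact entries used in the repo:

```yaml
# Sketch only: service name and limit values are placeholders.
services:
  immich-server:
    ulimits:
      nofile:
        soft: 1048576
        hard: 1048576
```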
Signed-off-by: ItalyPaleAle <43508+ItalyPaleAle@users.noreply.github.com>
* added transcode configs for nvenc, qsv and vaapi
* updated dev docker compose
* added software fallback
* working vaapi
* minor fixes and added tests
* updated api
* compile libvips
* move hwaccel settings to `hwaccel.yml`
* changed default dockerfile, moved `readdir` call
* removed unused import
* minor cleanup
* fix for arm build
* added documentation, minor fixes
* added intel driver
* updated docs
* styling
* uppercase codec and api names
* formatting
* added tests
* updated docs
* removed semicolons
* added link to `hwaccel.yml`
* added newlines
* added `hwaccel` section to docker-compose.prod.yml (see the sketch after this list)
* ensure mesa drivers are installed
* switch to mimalloc for sharp
* moved build version and sha256 to json
* let libmfx set the render device
* possible fix for vp9 on qsv
* updated tests
* formatting
* review suggestions
* semicolon
* moved `LD_PRELOAD` to start script
* switched to jellyfin's ffmpeg package
* fixed dockerfile
* use cqp instead of icq for qsv vp9
* updated dockerfile
* added sha256sum for other platforms
* fixtures
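For context, a rough sketch of what a VAAPI/QSV device passthrough block in a compose file typically looks like; the actual contents of the repo's `hwaccel.yml` and the `hwaccel` section in docker-compose.prod.yml may be structured differently, and NVENC goes through the NVIDIA container runtime rather than a plain device mapping:

```yaml
# Sketch only: VAAPI and QSV need the host's render node inside the container.
# Service name is a placeholder; the repo's hwaccel.yml may use other keys.
services:
  immich-microservices:
    devices:
      - /dev/dri:/dev/dri
```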
* Remove unnecessary PG_DATA environment variable from docker-compose.yml
There is no need to set the PostgreSQL data directory to the default location; it just adds an unnecessary line to the docker-compose file.
In addition, PG_DATA isn't even the correct environment variable name (it should be PGDATA, see: https://hub.docker.com/_/postgres/), so this variable was never doing anything to begin with.
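For reference, a sketch of the relevant service block (service name, image tag and credentials are placeholders); the commented-out line is the kind of entry that was removed, and `PGDATA` is the name the official postgres image actually reads, needed only for a non-default data directory:

```yaml
services:
  database:
    image: postgres:14
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      # Removed: the image ignores PG_DATA, and the correctly named PGDATA
      # only matters when the data directory is not the default.
      # PG_DATA: /var/lib/postgresql/data
```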
* Update docker-compose.dev.yml
* Update docker-compose.prod.yml
* Update docker-compose.test.yml
Previously, if you shut down the `make-dev` command and then restarted
the docker daemon (say, after a system reboot), there would be 3 immich
containers already running.
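A sketch of the restart-policy behaviour behind this, assuming that is what the compose updates changed (the actual edits may differ): with `restart: always`, even containers you stopped by hand are started again when the Docker daemon restarts, whereas `unless-stopped` keeps them down:

```yaml
services:
  immich-server:
    # "always" restarts a manually stopped container once the daemon comes
    # back up; "unless-stopped" leaves it stopped until you start it yourself.
    restart: unless-stopped
```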
* using pydantic BaseSettings
* ML API takes image file as input
* keeping image in memory
* reducing duplicate code
* using bytes instead of UploadFile and other small code improvements (see the sketch after this list)
* removed form-multipart, using HTTP body
* format code
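A compressed sketch of the shape of these changes, assuming a FastAPI service: settings come from a pydantic (v1-style) `BaseSettings` class so they can be overridden via environment variables, and the endpoint reads the raw image bytes from the HTTP body instead of a multipart `UploadFile`. The route, field names and placeholder response are illustrative, not the actual Immich code:

```python
from fastapi import FastAPI, Request
from pydantic import BaseSettings


class Settings(BaseSettings):
    # Illustrative fields only; env vars of the same name override them.
    classification_model: str = "microsoft/resnet-50"
    min_tag_score: float = 0.1


settings = Settings()
app = FastAPI()


@app.post("/tag-image")
async def tag_image(request: Request) -> list[str]:
    # The raw HTTP body is the image itself (no form-multipart wrapper),
    # so it stays in memory as bytes and can be handed straight to a model.
    image_bytes = await request.body()
    # ... run the classifier on image_bytes here ...
    return [f"received {len(image_bytes)} bytes"]  # placeholder response
```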
---------
Co-authored-by: Alex Tran <alex.tran1502@gmail.com>
* updated dockerfile, added typing, packaging
* apply env change
* added arm64 support
* added ml version bump, second try for arm64
* added linting config to pyproject.toml
* renamed ml input field
* fixed linter config
* fixed dev docker compose
A default entrypoint and command make it just a bit easier to use the images,
as there is no longer a need for an explicit entrypoint. The exception is the
server image, which still requires the shell script to be specified.
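The practical effect on the compose side, sketched with placeholder image and script names (not the repo's exact entries): most services can drop their `entrypoint:` now that the images carry defaults, while the server service still spells out its start script:

```yaml
services:
  immich-web:
    image: immich-web:latest
    # No entrypoint/command needed; the image's built-in defaults are used.

  immich-server:
    image: immich-server:latest
    # Exception: the server image still needs its shell script specified.
    entrypoint: ["/bin/sh", "./start-server.sh"]
```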
* default NODE_ENV to production for server image
* update node image to use 3.17 alpine in server
* update web docker image to use alpine 3.17
* remove NODE_ENV from production docker-compose
* NODE_ENV default is also needed in machine-learning
* build: add typesense to docker
* feat(server): typesense search
* feat(web): search
* fix(web): show api error response message
* chore: search tests
* chore: regenerate open api
* fix: disable typesense on e2e
* fix: number properties for open api (dart)
* fix: e2e test
* fix: change lat/lng from floats to typesense geopoint
* dev: Add smartInfo relation to findAssetById to be able to query against it
---------
Co-authored-by: Alex Tran <alex.tran1502@gmail.com>
* fix(machine-learning): Add the command to execute at startup
Previously it wasn't set in the Docker container, but it should be (see the sketch after this list).
* fix(docker): remove machine-learning command arg
* fix(docker): machine-learning CMD argument
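A minimal sketch of the end state these three commits seem to converge on (an assumption, since the commit subjects alone don't pin it down): the machine-learning image carries its own default startup command, so nothing has to be passed at the compose level. The exact interpreter and module are placeholders:

```dockerfile
# Sketch only: assumes the startup command is baked into the image as CMD;
# the real entrypoint/module in the Immich ML image may differ.
CMD ["python", "main.py"]
```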