
nvidia-container-toolkit-1.11.0-150200.5.6.1 RPM for ppc64le

From openSUSE Leap 15.6 for ppc64le

Name: nvidia-container-toolkit
Distribution: SUSE Linux Enterprise 15
Version: 1.11.0
Vendor: SUSE LLC <https://www.suse.com/>
Release: 150200.5.6.1
Build date: Thu Oct 20 11:52:10 2022
Group: Development/Tools/Other
Build host: sangiovese
Size: 9737023
Source RPM: nvidia-container-toolkit-1.11.0-150200.5.6.1.src.rpm
Packager: https://www.suse.com/
Url: https://github.com/NVIDIA/nvidia-container-toolkit
Summary: NVIDIA Container Toolkit
Build and run containers leveraging NVIDIA GPUs.
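
As an illustration only (this example is not part of the RPM description): once the toolkit is installed and the container engine is configured to use the NVIDIA runtime, a common smoke test is to run nvidia-smi inside a CUDA container. The image tag below is an assumption, chosen to match the CUDA 11.7.1 base image mentioned in the changelog; the --gpus flag requires Docker 19.03 or newer.

    # Hypothetical smoke test: list the GPUs visible inside a CUDA container
    sudo docker run --rm --gpus all nvidia/cuda:11.7.1-base-ubuntu20.04 nvidia-smi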

Provides

Requires

License

Apache-2.0

Changelog

* Mon Oct 10 2022 mjura@suse.com
  - Update to version 1.11.0 (jsc#SLE-18750):
    * Ensure that base package is built for debian
    * Update libnvidia-container submodule
    * Bump version to v1.11.0
    * Update git commit command
    * Add release tests for fedora35
    * Clean up repo test scripts
    * Add fedora35 to release and signing scripts
    * Ensure CLI versions are set correctly for RPM packages
    * Add changelog for 1.11.0-rc.3
    * Update libnvidia-container
    * Update CUDA base image to 11.7.1
    * Update libnvidia-container submodule
    * Increase package build timeout to 3 hours for slow aarch64 builds
    * Use single config file for centos, al2, and fedora
    * Add fedora35 CI targets
    * Add fedora targets to release scripts
    * Add fedora35 package targets
    * Switch to single docker file yum-based rpm builds
    * Use new packages in toolkit image
    * Split nvidia-container-toolkit package
    * Fix centos8 test image
    * Fix indentation in makefile
    * Update vendoring
    * Specify hook structure instead of importing Podman
    * Fix cleanup of nvidia-container-toolkit link
    * Use proper cuda image for containerd tests
    * Update subcomponents
    * Update image used for containerd tests
    * Output applied config to toolkit container stdout
    * Ensure that toolkit-container sets correct default value
    * Fix setting of toolkit config option in toolkit container
    * Update libnvidia-container
    * Update vendoring
    * Use nvinfo package from go-nvlib
    * Add modifier to inject Tegra platform files
    * Bump version to 1.11.0-rc.3
    * Fix setting of LIBNVIDIA_CONTAINER_TAG
    * Add CHANGELOG entry for 1.11.0-rc.2
    * Update libnvidia-container
    * Allow accept-visible-devices config options to be set
    * Remove unused TOOLKIT_ARGS / --toolkit-args
    * Set toolkit root as flag
    * Rename toolkitDir toolkitRoot
    * Move global toolkitDir to options struct
    * Move toolkit options to struct
    * Bump version to 1.11.0-rc.2
    * Add changelog entries for 1.11.0-rc.1
    * Apply 1 suggestion(s) to 1 file(s)
    * Add root to mounts type
    * Make error message clearer
    * Remove Relative method from Locator
    * Fix bug where ldcache may not contain symlinks
    * Add tests for identifying libraries
    * Add nvidia-ctk runtime configure command
    * Move docker config handling to internal package
    * Ensure that CDI registry is refreshed
    * Add runtime config option for CDI spec dirs
    * Reuse check for existing hook
    * Update package descriptions and URLs
    * Update package definitions
    * Update references to nvidia-container-runtime-hook
    * Rename -toolkit executable to -runtime-hook
    * Skip packages that already exist
    * Use centos:stream8 image for signing
    * Use device host path to determine properties
    * Update vendored runc version
    * Update cdi package and run go mod vendor
    * Add support for specifying devices in annotations
    * Add cdi mode to NVIDIA Container Runtime
    * The licenses make target should not be a check target
    * Add charDevices discoverer for devices
    * Create single discoverer per mount type for CSV
    * Add tooling to check go licenses
    * Rename discover.NewList to discover.Merge
    * Add Relative function to Locator interface
    * Use CUDA.DevicesFromEnvvar to check if modifications are required
    * Add DevicesFromEnvvars function to CUDA image
    * Add /etc/cufile.json to list of required mounts
    * Create GDS and MOFED modifiers
    * Add discovery of GDS and MOFED devices
    * Allow globs in filenames for locators
    * Move cmd/nvidia-container-runtime/modifier package to internal/modifier
    * Use modifier list and discoverModifer
    * Add lists of modifiers to allow for modifier compositioning
    * Ensure test/output path exists
    * Update vendoring
    * Update nvidia-docker and nvidia-container-runtime
    * Update nvidia-docker and nvidia-container-runtime branches to main
    * Allow any 1.* version of libnvidia-container package
    * Switch to latest docker and docker dind in CI
    * Allow libnvidia-container1 version to be specified directly
    * Update build scripts to set libnvidia-container version
    * Bump version to 1.11.0-rc.1
    * Update libnvidia-container submodule to v1.10.0
    * Bump version to v1.10.0
    * Update toolkit images to use NGC DL license
    * Bump nvidia-docker version
    * Switch default container-toolkit image target to ubuntu20.04
    * Only generate amd64 images for ubuntu18.04
    * Remove build and release of centos8 container-toolkit images
    * Use ubi8 base image for centos8
    * Bump CUDA base image version to 11.7.0
    * Update config files with options and defaults
    * Update NVIDIA Container Runtime readme
    * Update libnvidia-container
    * Bump version to 1.10.0-rc.4
    * Also set default_runtime.options.BinaryName
    * Also cleanup v1 default_runtime if BinaryName is set
    * Also set Runtime file v1 containerd runtime config
    * Use BinaryName for v1 containerd runtime config
    * Update libnvidia-container
    * Return default config if config path is not found
    * Ignore NVIDIA_REQUIRE_JETPACK* for image requirements
    * Fix bug in tegra detection
    * Fix assertCharDevice matching on all files
    * Include git commit in changelog URL
    * Automatically generate changelogs in docker builds
    * Add dummy entry for rpm changelog matching other components
    * Format CHANGELOG.md as markdown
    * Move debian changelog to CHANGELOG.md
    * Update libnvidia-container version
    * Bump version to 1.10.0-rc.3
    * Update libnvidia-container
    * Update changelog for release
    * Ensure that git commit is set in docker build
    * Set the version and gitCommit in the Makefile
    * Add version output to CLIs
    * Call logger.Reset() to ensure errors are captured
    * Skip setting of log file for --version flag
    * Include HasNVML check in ResolveAutoMode
    * Add HasNVML function to check if NVML is supported
    * Remove unneeded legacy discovery
    * Remove --force flag from nvidia-container-runtime-hook
    * Replace experimental and discover-mode
    * Move ResolveAutoMode to info package
    * Move isTegraSystem to internal info package
    * Update nvidia-container-runtime config options
    * Use toml unmarshal to read runtime config
    * Add hook to create specific links
    * Add --link option to nvidia-ctk hook create-symlinks command
    * Factor linkCreation into method
    * Improve symlink creation loop
    * Use singular instead of plural for hook arguments
    * Use executable locator to find low-level runtime
    * Use lookup.GetPath from runtime hook
    * Add lookup.GetPath and lookup.GetPaths functions
    * Use state.GetContainerRoot in nvidia-ctk hook subcommands
    * Add GetContainerRoot to oci.State type
    * Support runc logging command line options
    * Make output of bundle directory a debug message
    * Switch to debug logging when locating runtimes
    * Add nvidia-container-runtime.runtimes config option
    * Fix form -> from in comment
    * Add debug logging when checking requirements
    * Add compute capability of first device as arch property
    * Add CUDA ComputeCapability function
    * Add debug log for command line arguments
    * Return low-level runtime if subcommand is not create
    * Check requirements before creating CSV discoverer
    * Add processing for requirements and constraints
    * Return raw spec from Spec.Load
    * Add basic CUDA wrapper
    * Use CUDA image abstraction for runtime hook
    * Add CUDA image abstraction
    * Add gcc for Amazonlinux builds
    * Use go install to install go development tools
    * Bump golang version to 1.17.8
    * Update go vendoring
    * Fix image building due to GPG key update
    * Use semver package to parse CUDA version
    * Update libnvidia-container reference
    * libnvidia-container: 'main' track branch
    * Remove dockerhub publishing
    * Bump github.com/containers/podman/v4 from 4.0.1 to 4.0.3
    * Improve handling of git remotes for gh-pages packages
    * Add scripting to sign and publish packages
    * Add envvar for package versions
    * Rename release.sh to build-packages.sh
    * Change master references to main
    * Update libnvidia-container submodule
    * Bump version to v1.10.0-rc.2
    * Add commented experimental option to config files
    * Revert "[ci] Skip external releases if associated OUT_REGISTRY value is empty."
    * Revert "[ci] echo skipped commands"
    * Update libnvidia-container
    * Add log-level config option for nvidia-container-runtime
    * Remove existing NVIDIA Container Runtime Hooks from the spec
    * Specify --force flag when invoking nvidia-container-runtime-hook
    * Raise error if hook invoked in experimental mode without force flag
    * Export GetDefaultRuntimeConfig
    * Make order of discoverers deterministic
    * Refactor CSV discovery to make char device discovery clearer
    * Fix creation of CSV parser in create-symlinks
    * Fix creation of CSV parser in create-symlinks
    * Move NVIDIA Container Runtime Hook executable name to shared constant
    * Use DefaultExecutableDir to determine default paths
    * Refactor CSV file parsing
    * Add missing close when reading CSV file
    * Return unmodified runtime if specModifier is nil
    * Inject symlinks hook for creating symlinks in a container
    * Add create-symlinks subcommand to create symlinks in container for specified CSV files
    * Move reading of container state for internal/oci package
    * FIX: Rename containerSpec flag to container-spec
    * Include nvidia-ctk in deb and rpm packages
    * Add cache for mounts
    * Add discovery for ldconfig hook that updates the LDCache
    * Add nvidia-ctk config section
    * Add hook command to nvidia-ctk with update-ldcache subcommand
    * Add stub nvidia-ctk CLI
    * Refactor hook creation
    * Add auto discover mode and use this as the default
    * FIX: Rename DefaultRoot to DefaultMountSpecPath
    * FIX: Improve locator map construction
    * FIX: Update TODO for container path
    * FIX: Use MountSpec* constants
    * FIX: Remove unused NewFromCSV constructor
    * Correct typo in constructor name
    * Add support for NVIDIA_REQUIRE_JETPACK envvar
    * Add csv discovery mode to experimental runtime
    * Add CSV-based discovery of device nodes
    * Add CSV-based discovery of mounts
    * Add locators for symlinks and character devices
    * Add code to process Jetpack CSV files
    * FIX: Make isNVIDIAContainerRuntimeHook more idiomatic
    * FIX: Simplify hook remover
    * FIX: Rename path locator as executable locator
    * FIX: Rename CLIConfig to ContainerCLIConfig
    * FIX: Factor out specModifier construction into function
    * FIX: Don't log that hooks is being removed if it is not
    * FIX: Fix typo in comment
    * [ci] echo skipped commands
    * Fix typo in variable name
    * Add basic README for nvidia-container-runtime
    * Make error logging less verbose by default
    * Implement hook remover for existing nvidia-container-runtime-hooks
    * Read top-level config to propagate Root to experimental runtime
    * Split loading config from reader and getting config from toml.Tree
    * Implement experimental modifier for NVIDIA Container Runtime
    * Add stable discoverer for nvidia-container-runtime hook
    * Add lookup abstraction for locating executable files
    * Move runtime config to internal package
    * Don't skip internal packages for linting
    * Add experimental option to NVIDIA Container Runtime config
    * Update libnvidia-container
    * [ci] Skip external releases if associated OUT_REGISTRY value is empty.
    * Move modifier code for inserting nvidia-container-runtime-hook to separate package
    * Import modifying runtime abstraction from experimental runtime
    * Add test package with GetModuleRoot and PrependToPath function
    * Ensure that Exec error is also logged to file
    * Update go vendoring
    * Update podman hooks dependency
    * Add .shell make target for non-Linux development
    * Add gcc for centos package builds including cgo
    * Update gitignore
    * Switch from centos:8 to centos:stream8 images to build centos8 packages
    * Update git submodules
    * Update libnvidia-container submodule to v1.10.0-rc.1
    * Bump version to 1.10.0-rc.1
    * Use nvcr.io registry for Ubuntu CUDA base images
    * Add CI definitions for building and publishing Ubuntu20.04 images
    * Update libnvidia-container submodule
    * Bump version to 1.9.0
    * Update libsasl in both ubuntu/ubi toolkit images to address CVE-2022-24407
    * Update libnvidia-container subcomponent
    * Use 'none' instead of 'NONE' to skip containerd restart
    * Add --restart-mode to docker config CLI
    * Update component submodules
    * Fix pushing of short tag for devel images
    * Add multi-arch image scans
    * Also search /usr/lib/aarch64-linux-gnu for libnvidia-container libs
    * Enable multi-arch builds in CI
    * Enable multi-arch builds
    * Allow buildx to be used for multi-arch images
    * Rename TARGETS make variable to DISTRIBUTIONS
    * Specify docker platform args for build and run commands
    * Ensure that Ubuntu20.04 images also build
    * Remove unneeded build-all CI steps
    * Fix centos8 builds
    * Update submodules
    * Remove unneeded build-all CI steps
    * Update submodules
    * Fix centos8 builds
    * Bump version to 1.9.0-rc.1
    * Update centos:8 mirrors for release tests
    * Update libnvidia-container submodule
    * Update changelogs
    * Update libnvidia-container submodule
    * Bump version to 1.8.1
    * Fix changelog entry in rpm spec
    * Update component submodules
    * Bump version to 1.8.0
    * Use 2h30m timeout for all packaging stages
    * Update centos8 mirrors
    * Update sub-modules
    * Bump version to 1.8.0-rc.3
    * Update libnvidia-container submodule
    * Update components before building release
    * Copy libnvidia-container-go.so to toolkit directory
    * Remove support for amazonlinux1
    * Add check for matching toolkit and lib versions to release script
    * Update git submodules
    * Bump version to 1.8.0-rc.2
    * Update CUDA image version to 11.6.0
    * Update libnvidia-container submodule for WITH_NVCGO CI build fix
    * Update libnvidia-container submodule
    * Update nss on centos7 to address CVEs
    * Allow packages to be specified to address CVEs
    * Update libnvidia-container submodule
    * Bump version post 1.7.0 release
    * Enable release of toolkit-container images
    * Simplify skipping of scans
    * Add delay and timeout to image pull job
    * Pull public staging images to scan and release
    * Address review comments
    * Add script to pull packages from packaging image
    * Add placeholder for testing packaging image
    * Add packaging target to CI
    * Add packaging target that includes all release packages
    * Include all architecture packages in toolkit container
    * Update libnvidia-container submodule
    * Bump version to 1.7.0
    * Update submodules
    * Bump golang version to 1.16.4
    * Add versions.mk file to define versions
    * Specify nvidia-container-runtime and nvidia-docker versions
    * Bump post 1.7.0-rc.1 release
    * Don't rebuild packages for every local run
    * Add basic multi-arch support to release tests
    * Rework init repo for centos8 release tests
    * Update libnvidia-container dependency for release
    * Update changelog
    * Update vendoring
    * Specify containerd runtime type as string
    * Update submodules
    * Override LIB_TAGS for runtime and docker wrapper
    * Bump post 1.6.0 release
    * Add jetpack-specific config.toml
    * Specify config.toml file suffix as docker build arg
    * Add nvidia-container-config option to override drivercapabilities
    * Update components versions for 1.6.0 release
    * Bump version to 1.6.0
    * Fix logging to stderr instead of file logger
    * [ci] remove --pss flag from pulse scanning
    * Check for matching tags in release script
    * Get tags for all components in get-component-versions script
    * Update submodules
    * Update libnvidia-container to ff6ed3d5637f0537c4951a2757512108cc0ae147
    * Update libnvidia-container submodule to 1.6.0-rc.3
    * Bump version post v1.6.0-rc.2 release
    * Update submodules for packaging
    * Update nvidia-docker submodule
    * Auto update debian changelog and release date
    * Update nvidia-docker submodule
    * Forward nvidia-container-toolkit versions to dependants
    * Update libnvidia-container submodule
    * Update nvidia-container-runtime submodule
    * Update imported OCI runtime spec
    * Add basic test for preservation of OCI spec under modification
    * [ci] use pulse instead of contamer for scans
    * Import cmd/nvidia-container-runtime from experimental branch
    * Remove unneeded files
    * Import internal/oci package from experimental branch
    * Rename RELEASE_DEVEL_TAG for consistency
    * Remove rule for merge requests
    * Add changelog entry for config.json path changes
    * Bump version to 1.6.0-rc.2
    * skip error when bundleDir not exist
    * Remove rule for merge requests
    * Add internal CI definition for release
    * Add CI to build toolkit-container image
    * Add dockerfile and makefile to build toolkit-container
    * Copy container test scripts from container-config
    * Copy cmd from container-config
    * Update go vendoring
    * Update submodules
    * Add aarch64 build for Amazon Linux 2
    * Extend release testing tooling to allow for upgrade testing
    * Use build image directly in CI
    * Update DEVELOPMENT.md
    * Update submodules for packaging fixes
    * Add docker-based tests for package installation workflows
    * FIXUP: Update development
    * Require at least a matching libnvidia-container-tools version
    * Use consistent package revisions for all rpm-based packages
    * Fix nvidia-container-runtime breaks / replaces dependency
    * Add release script for specific targets
    * Remove docker-all target from Makefile
    * Add basic version checks
    * Add nvidia-docker as a git submodule
    * Add nvidia-container-runtime as a git submodule
    * Add libnvidia-container as a git submodule
    * Apply edits for the NVIDIA container toolkit
    * Copy README.md from nvidia-docker
    * Copy scripts from nvidia-container-toolkit-release
    * Bump version for 1.6.0 development
    * Update debian and rpm package definitions
    * Add PREFIX make variable to control command output
    * Make all commands and copy executables
    * Add cmds target to makefile to build all go commands
    * Update go vendoring
    * Update package references
    * Copy code from nvidia-container-runtime
    * Bump version for post 1.5.1 development
    * Revert "Add support for NVIDIA_FABRIC_DEVICES"
    * Revert "Bump version to 1.6.0~rc.1"
    * Bump version to 1.6.0~rc.1
    * Add support for NVIDIA_FABRIC_DEVICES
    * Use extends keyword for build-one and build-all
    * Improve CI for container toolkit
* Wed Jun 30 2021 mjura@suse.com
  - Update to version 1.5.1:
    * Bump to version 1.5.1
    * Explicitly set GOOS when building binary
    * Add coverage step to CI
    * Move pkg to cmd/nvidia-container-toolkit
    * Run go mod vendor and go mod tidy
    * Fix bug where docker swarm device selection is overridden by NVIDIA_VISIBLE_DEVICES
    * Use require package for tests
    * Add coverage to go tests
    * Update vendoring
    * Bump version to 1.5.0
* Wed Apr 08 2020 sgrunert@suse.com
  - Add initial version

Files

/etc/nvidia-container-runtime
/etc/nvidia-container-runtime/config.toml
/usr/bin/nvidia-container-runtime
/usr/bin/nvidia-container-runtime-hook
/usr/bin/nvidia-ctk
/usr/share/containers/oci/hooks.d/oci-nvidia-hook.json
/usr/share/licenses/nvidia-container-toolkit
/usr/share/licenses/nvidia-container-toolkit/LICENSE
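
The packaged /etc/nvidia-container-runtime/config.toml controls how the NVIDIA Container Runtime behaves, and /usr/bin/nvidia-ctk can rewrite a container engine's configuration to register that runtime (the "nvidia-ctk runtime configure" command listed in the changelog for this release). The following is a sketch based on upstream documentation, not on the packaged files; the option names and default values shown are assumptions.

    # Register the NVIDIA runtime with Docker, then restart the daemon
    sudo nvidia-ctk runtime configure --runtime=docker
    sudo systemctl restart docker

    # The kind of options config.toml exposes (values are assumed upstream
    # defaults, not copied from the packaged file):
    #   [nvidia-container-runtime]
    #     log-level = "info"
    #     mode = "auto"          # "auto", "legacy", "csv" or "cdi"
    #     runtimes = ["docker-runc", "runc"]

The oci-nvidia-hook.json file registers nvidia-container-runtime-hook as an OCI prestart hook for engines such as Podman and CRI-O that read hook definitions from /usr/share/containers/oci/hooks.d.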

