In commit 405fcbc1e ("pveceph: switch repo sources to modern deb822
format") we switched away from single-line repos, but the later
rebased commit 9c0ac59e0 ("fix #5244 pveceph: install: add new
repository for offline installation") seemingly missed that change and
was not re-tested, thus it still referenced the previous variable
name and old file ending. As that commit was already pushed, fix it
up in this follow-up.
Fixes: 9c0ac59e0 ("fix #5244 pveceph: install: add new repository for offline installation")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The new 'offline' repository option will not try to configure the Ceph
repositories during installation.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
Link: https://lore.proxmox.com/20250714083838.68483-2-a.lauterer@proxmox.com
by adding a 4th repository option called 'offline'. If set, the Ceph
installation step will not touch the repository configuration.
We add a simple version check to make sure that the latest available
version (the one to be installed) matches the selected major Ceph
version.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
Link: https://lore.proxmox.com/20250714083838.68483-1-a.lauterer@proxmox.com
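The version check mentioned above boils down to comparing the major
part of the latest available package version against the selected
release. A rough sketch (the actual pveceph code is Perl; the function
names here are made up for illustration):

```javascript
// Extract the major version from a Debian-style version string,
// e.g. "19.2.1-pve1" -> 19.
function majorVersion(version) {
    const match = version.match(/^(\d+)\./);
    if (!match) {
        throw new Error(`unable to parse version '${version}'`);
    }
    return Number(match[1]);
}

// Fail early if the latest available Ceph package does not belong to
// the major release the user selected.
function checkCephVersion(availableVersion, selectedMajor) {
    const available = majorVersion(availableVersion);
    if (available !== selectedMajor) {
        throw new Error(
            `available Ceph version ${availableVersion} does not match ` +
            `selected major version ${selectedMajor}`,
        );
    }
}
```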
The return props now include the programmatically added properties
guest, jobnum, and digest (the latter only being returned by the read
endpoint) in addition to the create schema.
Signed-off-by: Nicolas Frey <n.frey@proxmox.com>
Link: https://lore.proxmox.com/20251002124728.103425-3-n.frey@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The Arch property now includes all officially supported Debian
architectures [0]. These could be extended to include unofficial ones
as well, though I currently don't see a reason to do so.
The CurrentState property now includes all variants according to
the documentation of package AptPkg::Cache.
Also implemented the suggestion to clone $apt_package_return_props
instead of modifying it in place, which could have potentially
resulted in unwanted behaviour.
[0] https://wiki.debian.org/SupportedArchitectures
Signed-off-by: Nicolas Frey <n.frey@proxmox.com>
Link: https://lore.proxmox.com/20251002124728.103425-2-n.frey@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add the missing 'Reset' action to the VM command menu. All other
actions for managing the VM run state were already present.
This patch does not add a reset action for containers, as they do not
support it and a reboot action is already present for them.
Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=4248
Signed-off-by: Amin Vakil <info@aminvakil.com>
Link: https://lore.proxmox.com/mailman.631.1754383164.367.pve-devel@lists.proxmox.com
[FE: small improvements to the commit message]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
After looking at the code, I don't see any issues with using the
createSchema method. Additionally, I tested by comparing the output
of:
- pvesh usage /cluster/acme/plugins --returns
- pvesh get /cluster/acme/plugins
and confirmed that the contents match.
Signed-off-by: Nicolas Frey <n.frey@proxmox.com>
Link: https://lore.proxmox.com/20250924115904.122696-2-n.frey@proxmox.com
[TL: rename variable used for schema for more clarity]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When using the vncshell or xtermjs based shells we use an SSH tunnel
if the requested node is different from the one the request was made
to. Further, we special-case sessions from the root@pam user, as they
are already verified and so do not need to re-login on the target
shell.
With Debian Trixie based releases we had to make a change to get this
shell working again [0], but it seems that there is still a race in
the whole interaction, at least if there is a nested login shell. See
[1] for more details.
The Debian Trixie version of the `login` manual page explicitly
mentions a bug related to this:
> A recursive login, as used to be possible in the good old days, no
> longer works; for most purposes su(1) is a satisfactory substitute.
> Indeed, for security reasons, login does a vhangup(2) system call
> to remove any possible listening processes on the tty. This is to
> avoid password sniffing. If one uses the command login, then the
> surrounding shell gets killed by vhangup(2) because it’s no longer
> the true owner of the tty. This can be avoided by using exec login
> in a top-level shell or xterm.
-- man 1 login
IIRC this was checked back when implementing [0], but as that was
during quite eventful bootstrapping times of the Debian Trixie
release I'm not 100% certain about it.
If the issues our users sometimes (?) see indeed stem from a race
related to the above bug, we should be able to avoid that by dropping
the explicit login call for root@pam when tunneling. Other @pam users
would still be affected, but as the partial fix is so simple and
correct in any case, it's still worth rolling it out sooner.
To improve this more broadly the following two options seem most
promising:
1. Replace the manual SSH tunnel by our proxy-to-node infrastructure
and tunnel the websocket of the target nodes termproxy command
through that.
2. Use a wrapper tool to handle the login command such that it does
not interfere with the outer login; this could potentially even be an
existing one like dtach, or alternatively something we write
ourselves.
I did not yet evaluate the impact and work required for either
option, but from an experienced gut feeling the first option would be
the better one, especially as we want to drop SSH for tunneling
completely in the mid to long term.
As this is not a complete and certain fix for #6789 [1], and
especially as we cannot really reproduce this ourselves in any useful
way, I refrained from adding a fix reference to the commit message,
but it should be at least a partial fix.
[0]: https://git.proxmox.com/?p=pve-xtermjs.git;a=commitdiff;h=7b47cca8368e63c30f6227442570f9f35dd7ccf0
[1]: https://bugzilla.proxmox.com/show_bug.cgi?id=6789
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Rename the not so telling $remcmd to $tunnel_cmd and make the helper
prefix it to the actual command.
This is a preparation for adapting login handling for proxied
(tunneled) shells, i.e. where the user requests a shell on another
node than the one they opened the web UI on.
No semantic change intended.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the recent commit
d2660fc7 (ui: resource tree: improve performance on initial update)
changed how we construct the resource tree, namely outside the
treestore. While that mostly worked fine, the standard
`Ext.data.TreeModel` was used. This led to problems with the
detection of some things, since we expected all properties defined on
the custom `PVETree` model to be present.
To fix this, create an instance of `PVETree` instead.
Note that this might also fix other things that depend on the
PVETree specific properties on the datacenter root node.
Fixes: d2660fc7 (ui: resource tree: improve performance on initial update)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Michael Köppl <m.koeppl@proxmox.com>
Link: https://lore.proxmox.com/20250912113242.3139402-1-d.csapak@proxmox.com
As this is for the frontend only, and the API would fail due to
getting an unknown property.
Reported-by: Alexander Zeidler <a.zeidler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
SDN entities return their name in the sdn property. Add this property
to the schema so it is shown in the documentation, as well as for
generating proper types in proxmox-api-types.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250909155423.526917-2-s.hanreich@proxmox.com
Quite a few were unused, while some others were missing and only
worked because other modules had already loaded them.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This unifies it with other recent changes. 75% often really is not a
problematic usage, and while 80% isn't always problematic either, it
is in practice still closer to a load that might need to be checked
out. Use sites can now override this to whatever makes the most sense
for their semantics.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Use the same thresholds for warning and critical as we do now for the
node status overview, see commit 6df3a71bd ("ui: node status: increase
warning/critical threshold percentage for memory usage") for details.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
With a similar rationale to commit f70ea8f7c ("ui: node status:
increase warning/critical threshold percentage for memory usage"),
i.e. memory is there to be used and leveraging most of what's
configured is an OK thing to do, there is no need to show a warning
status at just 75% usage.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Memory is there to be used, and unlike with block/file storage it can
be fine, even wanted, to use all memory besides a bit of headroom, as
memory is easy to defragment (it's all virtual addresses anyway) and
some usage is also easy to evict (e.g. to swap if not really used at
the moment, or because it is only an advanced cache like the ZFS
ARC).
Without the override, the thresholds were 75% for warning and 90% for
showing a critical status. For a host with 396 GiB of installed
memory that meant we already warned at 297 GiB used (99 GiB still
available!) and showed a critical status at ~356 GiB in use (~40 GiB
free), though both can still be very OK and warranted usages.
So increase the thresholds to 90% for warning and 97.5% for critical
usage displays, which provides a less "scare-mongering" display.
Ideally we'd support a callback to make a better decision, as
clamping on absolute totals might be even better, but this simple
change is already a big improvement. Add a comment that we might want
to split out the ARC into its own custom bar (I have a prototype
around, but that needs polishing and in-depth review).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
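For illustration, the threshold arithmetic from the numbers above
(this is plain arithmetic, not the actual widget code):

```javascript
// Compute absolute warning/critical points from a total and the
// relative thresholds discussed in the message above.
function thresholds(totalGiB, warnPct, critPct) {
    return {
        warn: totalGiB * warnPct,
        crit: totalGiB * critPct,
    };
}

// 396 GiB host: old thresholds warned at 297 GiB already.
const before = thresholds(396, 0.75, 0.90);
// New thresholds only warn at ~356 GiB and go critical at ~386 GiB.
const after = thresholds(396, 0.90, 0.975);
```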
This avoids a "compile" check error (i.e. perl -wc) if no other module
that pulls in that dependency is loaded already.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This allows one to query the metric stats from a specific set of
nodes. This can, e.g., help with batch querying the stats in bigger
clusters; in a 15 node cluster one could do 3 concurrent requests at
5 nodes each.
Doing so can especially help on clusters with many virtual guests, as
there the time required to gather all stats quickly adds up, more so
if the time window is long; and we currently have a 30s overall
timeout here due to being proxied to the privileged API daemon, as we
cannot reuse the API ticket and need to generate a new one.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
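Client-side, the batching idea could be sketched roughly like this (a
hedged illustration; the endpoint and `nodes` parameter shown in the
comment are assumptions, not a finalized API):

```javascript
// Split a node list into fixed-size batches.
function chunk(items, size) {
    const batches = [];
    for (let i = 0; i < items.length; i += size) {
        batches.push(items.slice(i, i + size));
    }
    return batches;
}

// Query each batch concurrently and merge the results.
// fetchBatch(nodes) would perform the actual API request, e.g.
// something like GET /cluster/metrics/export?nodes=node1,node2,...
async function fetchMetrics(allNodes, batchSize, fetchBatch) {
    const results = await Promise.all(
        chunk(allNodes, batchSize).map((nodes) => fetchBatch(nodes)),
    );
    return results.flat();
}
```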
5s is rather short, especially if one has many guests on a node and
queries for a (relatively) long period; e.g., a node with 4000 guests
needs a bit over 8s to query the last 10 minutes of data on a test
cluster of mine.
So increase the timeout from 5s to 20s as a stop-gap. Note that due to
being a protected API call we are limited by the 30s proxy timeout in
total, which is something that might be revisited but out of the scope
of this change, especially as it would need to be changed in the
pve-http-server git repo anyway.
There are other ideas floating around in making this more reliable,
like making this async here (anyevent or a dedicated executable),
allowing the requester to pass a node-list to allow batching the
queries and so on.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Makes it easier to find specific guests when one has many of them.
I decided not to allow filtering on the current pool name, as often
there is none set for the guest the user wants to add, one could
theoretically sort the current pool column to group guests by pool,
and finally, we can still add that easily if it gets requested with a
good enough rationale.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It was rather small, i.e. always too narrow to see the column values
for somewhat normal name lengths, and for more than a handful of
guests the height was rather short too.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The missing `deleteEmpty` attribute caused the fabric property to
never be removed from the EVPN controller. So when e.g. removing the
fabric to add the peers manually, there would always be an error
because you can't have peers *and* fabrics configured.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Link: https://lore.proxmox.com/20250821085015.29401-1-g.goller@proxmox.com
This will make `pveceph pool ls` report the 'Used' column with
explicit units:
$ pveceph pool ls --noborder
... Used
... 2.07 MiB
... 108.61 KiB
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Tested-by: Shannon Sterz <s.sterz@proxmox.com>
Reviewed-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250829094401.223667-2-m.sandoval@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This change ensures that the storage units are displayed consistently
between the graph and the usage label right above it.
When setting the unit to `bytes` the graph will now, for example, show
"130 GB" instead of "130 G", which matches the usage displayed above
and removes any ambiguity about whether "G" refers to GiB or GB.
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Tested-by: Shannon Sterz <s.sterz@proxmox.com>
Reviewed-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250829094401.223667-1-m.sandoval@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This was displayed wrongly on the web UI and when calling
pvesh get /nodes/localhost/apt/changelog
One potential example of such a package was bind9-dnsutils where the
character `ř` was rendered as `Å`.
Reported-by: Lukas Wagner <l.wagner@proxmox.com>
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
Link: https://lore.proxmox.com/20250902145345.500823-1-m.sandoval@proxmox.com
Some backslashes were not indented correctly and some used spaces
instead of tabs. Most were introduced in the recent fabrics series
(29ebe4e8d4 ("ui: fabrics: add model definitions for fabrics") and
following).
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Link: https://lore.proxmox.com/20250905125435.231976-1-g.goller@proxmox.com
There are quite a few kebab-cased properties in the QEMU CPU config,
such as phys-bits and guest-phys-bits. These are currently not
exposed through the web interface, but only via the command line.
If such a QEMU CPU config is parsed, it will return undefined with an
error and break the ProcessorEdit component so that changes cannot be
submitted anymore.
Fix that by allowing kebab-cased properties as well.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Link: https://lore.proxmox.com/20250905142249.219371-1-d.kral@proxmox.com
[TL: drop unnecessary escape]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
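The gist of the fix can be sketched as follows (illustrative
JavaScript only; the actual ProcessorEdit parsing code differs, and
the helper name is made up):

```javascript
// Property-string keys may be snake_cased or kebab-cased, so the key
// matcher must allow '-' in addition to '_'.
const KEY_RE = /^[a-zA-Z][a-zA-Z0-9_-]*$/;

// Parse a comma-separated property string like
// 'cputype=host,phys-bits=52' into an object; bare keys become flags.
function parsePropertyString(value) {
    const parsed = {};
    for (const part of value.split(',')) {
        const [key, val] = part.split('=', 2);
        if (!KEY_RE.test(key)) {
            throw new Error(`invalid property key '${key}'`);
        }
        parsed[key] = val ?? true;
    }
    return parsed;
}
```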
when trying to detect changes of resources, we compare a list of
properties of the existing nodes in the tree with the ones we got
from the API call, so that we only update those that changed.
One of these properties is the 'text' one, which is calculated from e.g.
the vmid and name (or the name and host, depending on the type).
Sadly, when inserting/updating a node, we modified the text property
in every case, at least wrapping the existing text in
'<span></span>'. This meant that every resource was updated every
time instead of only when something changed.
To fix this, remove the 'text' property from the checked ones, and
instead add all the properties that are used to compute the text.
This reduces the time of updateTree in my test-setup (~10000 guests)
when nothing changed from ~100ms to ~15ms and reduces scroll stutter
during such an update.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250905120627.2585826-5-d.csapak@proxmox.com
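The change-detection idea can be sketched as follows (the property
names are illustrative, not the exact list used in the UI code):

```javascript
// Compare the raw properties that the rendered text is computed from,
// instead of the (always re-wrapped) 'text' property itself.
const CHECKED_PROPS = ['vmid', 'name', 'node', 'status'];

function hasChanged(oldRecord, newRecord) {
    return CHECKED_PROPS.some((prop) => oldRecord[prop] !== newRecord[prop]);
}
```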
When we insert nodes into the tree, we use 'insertBefore' of ExtJS'
NodeInterface. When the node is inside a TreeStore, it calls
'registerNode' to handle some events and accounting. Sadly it does so
not only for the inserted node, but also for the node into which it
is inserted, and that in turn calls 'registerNode' again for all of
its children.
So inserting a large number of guests under a node this way results
in (at least) O(n^2) calls to registerNode.
To work around this, create the initial tree node structure outside
the TreeStore and add it at the end. Further insertions are more
likely to only come in small numbers. (We still have to look into
whether we can avoid that behavior there too.)
This improves the time spent in 'registerNode' (in my ~10000 guests
test setup) from 4,081.6 ms to about 2.7 ms.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250905120627.2585826-4-d.csapak@proxmox.com
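A toy model of the quadratic behavior described above (this is not
the ExtJS implementation, just an illustration of why building the
tree detached helps):

```javascript
// Count registerNode-style calls in a simplified tree model.
let registerCalls = 0;

// Register a node and, recursively, all of its children.
function registerSubtree(node) {
    registerCalls += 1;
    for (const child of node.children) {
        registerSubtree(child);
    }
}

// An in-store insert re-registers the parent and all its children,
// like ExtJS' NodeInterface does when the parent is inside a TreeStore.
function insertChild(parent, child, inStore) {
    parent.children.push(child);
    if (inStore) {
        registerSubtree(parent);
    }
}

// Insert n children either incrementally in-store (O(n^2) register
// calls) or detached with a single final registration (O(n)).
function buildTree(n, inStore) {
    registerCalls = 0;
    const root = { children: [] };
    for (let i = 0; i < n; i++) {
        insertChild(root, { children: [] }, inStore);
    }
    if (!inStore) {
        registerSubtree(root); // attach the finished tree once
    }
    return registerCalls;
}
```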
when we fetch the cluster resources, the UI calculates some fields
for each resource, such as 'hostmem_usage'. This requires the maxmem
attribute from the host, which we have to look up in the resource
store. Since the data is not sorted in the store itself, ExtJS
linearly goes through the records for the lookup when we use
'findExact'.
So instead of using findExact for every guest (which iterates the
whole resource store again each time), use a 'nodeCache' there and
clear it out before we load the new data.
This reduces the total time used for 'calculate_hostmem_usage'
in my test setup (~10000 non-running VMs) from 4,408.2 ms to 12.4 ms.
(Measured with the `Performance` tab in Chromium)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250905120627.2585826-3-d.csapak@proxmox.com
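The caching idea can be sketched like this (store and record shapes
are simplified for illustration):

```javascript
// Memoize node-record lookups in a plain object instead of doing a
// linear findExact()-style scan over the resource store per guest.
function makeNodeLookup(records) {
    const nodeCache = {};
    return function lookupNode(nodeName) {
        if (!(nodeName in nodeCache)) {
            // one linear scan per node; afterwards served from cache
            nodeCache[nodeName] =
                records.find((r) => r.type === 'node' && r.node === nodeName) ?? null;
        }
        return nodeCache[nodeName];
    };
}
```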
the idea was that we get any of the 'new' versions on lookup, but
that led to iterating through possibly all keys. Since this was
called for each resource in e.g. the /cluster/resources API call, the
runtime was O(n^2) in the number of resources.
To avoid that, simply look up the currently only valid key here,
which makes the lookup much cheaper.
In my test setup with ~10000 guests, this reduces the time for a call
to /cluster/resources from ~22s to ~400ms.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
Link: https://lore.proxmox.com/20250905120627.2585826-2-d.csapak@proxmox.com
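Schematically, the change looks like this (key names are made up for
illustration; the actual code is Perl):

```javascript
// The single key that is currently valid (assumed name).
const CURRENT_KEY = 'status-v2';

// Old approach: scan possibly all keys for any 'new'-style variant,
// O(n) per lookup.
function getStatusSlow(map) {
    for (const key of Object.keys(map)) {
        if (key.startsWith('status-')) {
            return map[key];
        }
    }
}

// New approach: a single O(1) hash access for the only valid key.
function getStatus(map) {
    return map[CURRENT_KEY];
}
```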
So we only sort the enum keys once and the documentation and API
dumps don't keep reordering them; this also deduplicates the cmd and
cmd-opts schema.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Link: https://lore.proxmox.com/20250902125516.346145-1-w.bumiller@proxmox.com
[TL: use hash directly instead of hash-as-array]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Setting the `disable` property to 0 no longer enables a HA rule since
a change in the HA Manager, so explicitly delete the `disable`
property when the HA rule should be enabled.
Reported-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Link: https://lore.proxmox.com/all/20250901082956.40297-1-d.kral@proxmox.com
.load() must be called on the underlying UpdateStore, not on the
DiffStore.
Currently, this results in the members list being cleared after
adding/removing an entry, only being reloaded (correctly) during the
next automatic background update.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Link: https://lore.proxmox.com/all/20250827101430.424606-2-c.heiss@proxmox.com
FG: added bug reference to title
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
this was accidentally dropped when applying this commit:
2348790b (ui: replace the ceph logo png with an svg version)
likely due to line length limits of email. so reformat the svg and add
it back in again.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Link: https://lore.proxmox.com/all/20250829092617.89627-3-s.sterz@proxmox.com
this was accidentally dropped when applying:
53cf0269 (ui: use svg version of the virt viewer icon)
likely due to mail line length limits. so add the svg again and
reformat it to conform to the limit.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Tested-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Link: https://lore.proxmox.com/all/20250829092617.89627-2-s.sterz@proxmox.com
the container here is 24px tall, the icon itself is rendered with a
height of 14px, so it needs to be rendered 5px from the top to appear
vertically centered.
Reported-by: Christoph Heiss <c.heiss@proxmox.com>
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250827102428.101666-2-s.sterz@proxmox.com
a lot of icons are no longer used, so remove them and, in rare cases,
their accompanying css classes. all of these are png files and would,
thus, appear blurry if used. this should somewhat reduce the size of
the pve-manager package, but more importantly, discourage the use of
icons that would only appear blurry on modern hardware anyway.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250826145836.302748-10-s.sterz@proxmox.com
the current spinner gif is quite blurry on modern hardware so use an
svg based spinner instead.
note that by including the css animation in the svg itself, the
animation does not appear to restart when extjs re-renders a table.
this, at least in firefox 128, led to jerky looking animation when
using font awesome's spinner with the `fa-spin` class. by creating
our own svg, we can also make it look more like the extjs version.
this does not impact the spinners used by extjs's load mask, as that
would require adapting extjs's code itself.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250826145836.302748-4-s.sterz@proxmox.com
and use the font awesome equivalent instead. this way the icons don't
appear blurry and we save a bit of overhead for storing and loading
the png.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250826145836.302748-3-s.sterz@proxmox.com
previously we used custom pngs rendered for storages in the add menu
of a pool, as well as for mountpoints of containers. these appeared
blurry on high resolution displays and since they were just the same
as the font-awesome icons anyway, use those directly. the ui already
loads font-awesome regardless, so there are no down-sides here.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250826145836.302748-2-s.sterz@proxmox.com
I could not find _any_ reference to it in current checkouts of ~every
Proxmox repository.
AFAICT this was copied from pve-installer in pve-installer commit
f0583fd4e ("copied country.pl form pve-manager")
in 2017 and simply never dropped here afterwards, so it's an unused
leftover.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Link: https://lore.proxmox.com/20250828103621.936206-1-c.heiss@proxmox.com
As reported in the community forum [0], the 'Edit' button would stay
disabled. Use a proxmoxButton to fix it, which installs a monitor for
selectionchange.
[0]: https://forum.proxmox.com/threads/169258/post-792029
Fixes: fb289b0d ("ha: affinity rules: make edit/remove button declrative configs")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
On some (non-standard) setups, having systemd-boot installed causes
issues, even if the system is using proxmox-boot-tool (p-b-t) for
booting.
The currently observed edge-case is:
before the upgrade:
* system is booted with grub (w/o secure boot), using p-b-t, results
in the ESP not being mounted on /boot/efi
after the upgrade:
* systemd-gpt-auto-generator(8) is active, and mounts the (single) ESP
on /efi (because grub w/o secure-boot sets the needed efivar+it is
not mounted)
* the next upgrade of systemd-boot causes systemd-boot to be
installed on the ESP, but it will not get any kernels configured,
since we disabled the /etc/kernel/postinst.d/zz-systemd-boot in
PVE8.
So this patch further restricts warning about having systemd-boot
installed to the cases where p-b-t says it's used for booting.
Additionally raise the level from info to warn in the legacy-boot
case, and add a log_pass message that was added to the equivalent
check in pbs3to4 [0].
[0] https://lore.proxmox.com/pbs-devel/20250811091135.127299-1-s.ivanov@proxmox.com/
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Link: https://lore.proxmox.com/all/20250814120807.2653672-1-s.ivanov@proxmox.com
Existing qcow2 volumes on a storage won't be handled correctly anymore
after the setting is turned off. The setting is already a fixed
storage setting for directory-based storages, so this is only relevant
for LVM. Could be improved by checking in the backend if there are any
qcow2 images and only allow turning it off if not, but this requires
changes to the on_update_hook() signature. Until then, warn in the
front-end.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/all/20250812082824.30625-1-f.ebner@proxmox.com
in mobile firefox, the text 'Mobile' is not followed by whitespace or
a forward slash, but rather a semicolon (';'). Instead of simply
adding that to the list of characters, change the regex to accept
'Mobile' only when followed by a word boundary; this covers
slashes/whitespace/semicolons and other things browser vendors might
use in their user agent strings.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250811131600.2886215-1-d.csapak@proxmox.com
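The resulting detection boils down to a word-boundary match; a
minimal sketch (the actual UI code differs, this just shows the
pattern):

```javascript
// 'Mobile' followed by a word boundary covers 'Mobile/', 'Mobile;',
// 'Mobile ' and similar separators, while still rejecting tokens that
// merely contain 'Mobile' as a prefix of a longer word.
const isMobileUA = (userAgent) => /Mobile\b/.test(userAgent);
```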
Quoting the commit message from [0] verbatim:
thin_check v1.0.x reveals data block ref count issue that is not being
detected by previous versions, which blocks the pool from activation if
there are any leaked blocks. To reduce potential user complaints on
inactive pools after upgrading and also maintain backward compatibility
between LVM and older thin_check, we decided to adopt the 'auto-repair'
functionality in the --clear-needs-check-flag option, rather than
passing --auto-repair from lvm.conf.
[0]: eb28ab94
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/all/20250808140419.119992-3-f.ebner@proxmox.com
Quoting the commit message from [0] verbatim:
thin_check v1.0.x reveals data block ref count issue that is not being
detected by previous versions, which blocks the pool from activation if
there are any leaked blocks. To reduce potential user complaints on
inactive pools after upgrading and also maintain backward compatibility
between LVM and older thin_check, we decided to adopt the 'auto-repair'
functionality in the --clear-needs-check-flag option, rather than
passing --auto-repair from lvm.conf.
[0]: eb28ab94
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/all/20250808140419.119992-2-f.ebner@proxmox.com
Allows simpler selection in most terminals as it clarifies that the
trailing dot of the sentence is not part of the path.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 2d2d93fcb4c65361584c62ce46a65569d8a7b0ab)
I had a multi-arch setup here where I set an option to make apt check
only for amd64 on the PVE repo, i.e.:
deb [arch=amd64] http://download.proxmox.com/debian/pve trixie pvetest
This was not detected by our regex, and thus pve8to9 did not complain
about the (for trixie) misspelled pvetest component.
Simply extend the regex to allow an arbitrary string inside brackets,
but do not add a match group for it, as we would not do anything with
it for now anyway.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
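The extended pattern can be illustrated like this (a JavaScript
sketch; the actual check is a Perl regex in pve8to9, and the helper
name here is made up):

```javascript
// Match a one-line apt source, tolerating an optional bracketed option
// list (e.g. [arch=amd64]) between 'deb' and the URL without capturing
// it: a non-capturing group, as we don't use the options for now.
const DEB_LINE_RE = /^deb\s+(?:\[[^\]]*\]\s+)?(\S+)\s+(\S+)\s+(.+)$/;

function parseDebLine(line) {
    const match = line.match(DEB_LINE_RE);
    if (!match) {
        return null;
    }
    const [, url, suite, components] = match;
    return { url, suite, components: components.split(/\s+/) };
}
```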
some upgrades result in unbootable systems, which can be traced back
to grub being installed in BOOTX64.efi but not being upgraded by
grub-install. Refer these cases to the output of
`proxmox-boot-tool refresh`, as it has sensible check logic for them.
Some affected systems printed the warning of proxmox-boot-tool, but
it was lost in the large output of the dist-upgrade.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Link: https://lore.proxmox.com/all/20250808124540.1490294-3-s.ivanov@proxmox.com
FG: rename variable, set boot_ok correctly in last if
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The current logic for deciding whether systemd-boot was manually set
up by the user (without proxmox-boot-tool), by checking
`bootctl is-installed`, yields a false positive after upgrading:
* systems which have the package installed (e.g. from our ISOs after
8.0), but do not use proxmox-boot-tool (LVM installs), will get
systemd-boot installed to /boot/efi upon upgrade
* after upgrading, the check says that it's been explicitly set up.
Rather warn if the package is installed (unless proxmox-boot-tool is
used and the upgrade is still not done) in any case, as the number of
systems which have it set up manually is probably far lower than
those that upgrade without explicitly checking pve8to9.
Additionally increase the log level from warn to fail, as issues with
boot-loaders yield unbootable systems, and move the (probably rare)
case of manual systemd-boot setups to the upgrade guide in our wiki,
which is also linked in the output.
Finally fix a typo (s/remoing/removing/).
Reported-by: Daniel Herzig <d.herzig@proxmox.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Link: https://lore.proxmox.com/all/20250808124540.1490294-2-s.ivanov@proxmox.com
...instead of re-implementing a custom check here. This has the
side-effect that the check implemented by pve-container is much more
robust and less likely to yield false positives. So users won't get
warnings about containers that actually do have the required unified
cgroup v2 support.
This was reported for an OpenSUSE Slowroll container on the forum:
https://forum.proxmox.com/threads/169302/
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250807142246.341381-1-s.sterz@proxmox.com
[FE: rebase on top of make tidy]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The context is already established by the initial info log message, so
keep the pass simple to avoid adding more confusion.
Suggested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Copy it over manually, as we do not use a #DEBHELPER# stanza here.
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
There is a regression regarding the permission for the /run/pve
directory. In Proxmox VE 8, the directory had root:root 0755
permissions, being auto-created as the lxc-syscalld runtime directory.
In Proxmox VE 9, the permissions were restricted to root:root 0750,
but this leads to an issue with remote migration, when pveproxy tries
to access the mtunnel socket:
pveproxy[2484]: connect to 'unix/:/run/pve/ct-112.mtunnel' failed: Permission denied
Relax the permissions again by allowing the www-data group
read-access, so that pveproxy can access the socket.
This aligns the permissions with what /run/pve-cluster has.
Reported-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Hannes Laimer <h.laimer@proxmox.com>
Link: https://lore.proxmox.com/20250805100556.40874-1-f.ebner@proxmox.com
The old behavior was *always* reloading the network configuration,
which worked as long as FRR was not pre-installed, but the change in
db03d261 coupled with the fact that we now ship with FRR caused FRR to
be enabled on applying the network configuration via the Web UI.
The stop-gap fix in e1b9466d addressed this behavior, but had the
issue that we possibly need to regenerate the FRR configuration when
the host configuration changes, since some controllers generate their
configuration based on the host network configuration.
pve-network now always sends the regenerate-frr parameter, so we can
discern whether a request came from SDN or is a manual request that is
requesting a specific behavior. With this information the reloading
logic can be improved as follows:
* Honor the parameter if it is set
* reload only if there are any FRR entities in the SDN configuration
This should handle all cases that we need to consider:
* Do not overwrite existing FRR configurations, unless we need to
generate our own FRR configuration.
* Do not trigger a FRR enable when reloading the host configuration,
even though there is no FRR configuration.
* Overwrite the FRR configuration with an empty configuration if all
SDN entities using FRR got deleted.
* Regenerate the FRR configuration when the host network configuration
changes, since this might affect the generated FRR configuration.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Tested-by: Hannes Duerr <h.duerr@proxmox.com>
Tested-by: Gabriel Goller <g.goller@proxmox.com>
Reviewed-by: Gabriel Goller <g.goller@proxmox.com>
Link: https://lore.proxmox.com/20250805083504.55378-2-s.hanreich@proxmox.com
Our timer won't be triggered automatically on initial boot if the
current date is too near to the next scheduled run, and while it would
be nicer to create a dedicated service with a ConditionFirstBoot, this
seems a bit overkill for now, so just check if the pveam log exists,
and if not, trigger a daily update, which will generate such a log.
Do so after pveproxy started in ExecStartPost and allow this to fail
by prefixing the call with -.
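In a unit file this could look roughly like the following (an illustrative sketch; the exact command and log path are assumptions, not the shipped unit):

```
[Service]
# The leading '-' lets a failure of this call not fail the service start.
ExecStartPost=-/bin/sh -c 'test -e /var/log/pveam.log || pveam update'
```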
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The rrd endpoint is the one that returns a rendered PNG, while
rrddata returns the underlying data to be used by the frontend for
modern graphs, so the latter we want to keep.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
so it's more subtle versus the current gray line.
I used the normal green color from 'total' but increased the lightness
value by 10 points (in HSL color space) to make the color brighter.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250805082906.939499-2-d.csapak@proxmox.com
In order to avoid conflicts and confusion with the standalone
proxmox-network-interface-pinning tool, rename to
pve-network-interface-pinning. Additionally, install to the
/usr/libexec/proxmox directory, which is now our preferred location
for shipping custom scripts / CLI tools.
The standalone tool will check for the existence of
pve-network-interface-pinning and invoke the PVE specific script if it
is installed on the host. This makes it possible for the tool to
properly run on hosts where PVE and PBS are both installed.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250730141610.282177-1-s.hanreich@proxmox.com
Bootstrap is now handled by pve-http-server, which uses the Debian
provided package, so there is no need for an entry in d/copyright in
pve-manager in any case.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We observed this on a host which has been updated through many major
versions (at least since PVE 4). The 70-persistent-net.rules file was
used to keep the NICs on their 'ethX' names. With the first reboot
into trixie the NICs got their predictable names, and networking was
broken (as the 'ethX' names are not present as altnames, the altname
support does not help in this case).
I could not reproduce the issue with a VM (there the
70-persistent-net.rules was still active and the NIC remained ethX),
so it might be a race during early boot.
In any case a warning makes sense here, as this is becoming a very
niche combination and thus is likely to cause more issues in the
future. Suggesting to manually set up pinning as described in our
documentation seems sensible:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Link: https://lore.proxmox.com/20250804183856.773378-1-s.ivanov@proxmox.com
The RRD migration tool currently has the limitation of not migrating
RRD files for storages with a '.old' suffix. Mention the list of such
storages below the RRD file list and migration command so that users
can adapt before executing the migration command.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250804125601.91944-3-f.ebner@proxmox.com
a few things changed in the upstream systemd-boot packages we use for
proxmox-boot-tool systems:
* systemd-boot was split up further into systemd-boot-tools (we need
`bootctl`) and `systemd-boot` (the meta-package which triggers ESP
updates)
* the ESP updates now also run upon updates of shim(-signed) and
probably other boot-related packages. These triggered updates break
apt for systems booted by proxmox-boot-tool (more generally, for
systems which don't have the ESP mounted).
This patch reworks our check logic:
* before the upgrade, the log message just reflects that we need
systemd-boot in bookworm
* for legacy-booted systems we suggest removing `systemd-boot` (so it
does not cause more issues in the future, and is definitely not
needed for booting there)
* for p-b-t systems we suggest removing the meta-package
* for non-p-b-t systems we suggest removing it as well, unless the
system was manually set up to use systemd-boot.
see the changes for proxmox-kernel-helper for further background:
https://lore.proxmox.com/all/20250731114455.995999-1-f.gruenbichler@proxmox.com/
minimally tested on a secure-boot enabled VM, and on one which uses
p-b-t with systemd-boot.
Co-Authored-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Link: https://lore.proxmox.com/20250801123804.2231830-1-s.ivanov@proxmox.com
Default to not regenerating the FRR configuration, unless explicitly
requested. Otherwise applying the host network configuration would
reload and enable the FRR service. Invert the boolean from skip to
regenerate, since the logic is less convoluted this way.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250804140152.423614-2-s.hanreich@proxmox.com
For things that might be good to know but where there's not much that
the user can do in any case.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Avoids a scary error when migrating a CT by using a new UI version but
the CT is currently on an old node.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It looks rather odd to see the same size twice, especially if one
isn't using ZFS in the first place, so use the simple heuristic of the
ARC needing to be > 1 MiB to decide if we should render it. Note that
the lowest possible minimum size for the ARC is 64 MiB, so this should
cover all real ZFS setups. FWIW, one might also use a test that
depends on the order-of-magnitude difference between ARC size and
memory used.
Btw. this is also only theoretically true: with ZFS 2.3 the ARC is
registered with the kernel as reclaimable, so it might or might not
count toward used memory, depending on the calculation and what the
kernel does, which changes frequently. If we get reports about this
being odd we should just remove it completely.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
instead of an area graph. Give it the same color as the 'hostmem' one
from the qemu guests (a neutral color that does not draw too much
attention).
While at it, order it last, since it was previously ordered second
only so the overlapping colors would not clash.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250804132928.1714525-2-d.csapak@proxmox.com
so one can see how much is available as an explicit graph.
The color is the same as the total, as most other colors produce
weird-looking results since the graphs are transparent and overlay
each other.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250804112842.904286-3-d.csapak@proxmox.com
the stacked graphs had more information, namely how much memory was
available and how much was used without the ZFS ARC. So put this
information into the tooltip where the users most likely want to see it.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250804112842.904286-2-d.csapak@proxmox.com
highly confusing if the y-axis does not match the metric and "jumps"
if metrics get toggled via the legend.
Keep the 'host memory' in VM graphs as a line, as otherwise the
overlapping colors would also make the graph confusing, and keep its
'hidden by default' logic.
While at it, use title case for the legend.
Originally-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250804112842.904286-1-d.csapak@proxmox.com
While it's indirectly included in how we calculate memused, it still
can be nice to have as dedicated value.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This mirrors commit 09d9dd535 ("ui: guest migrate: adapt HA migration
checks to altered property name"), which just fixed it for QEMU.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This avoids loading the data twice, which is annoying especially on
high-latency links. Further, it ensures both views are always backed
by the exact same data, avoiding inconsistent state and thus, e.g.,
confusing warnings if something changed in the backend between the
two loads.
To implement this, add a new model that derives from the main one but
uses an in-memory proxy. Then move the "real" store out to the parent
component, where we need to manually initialise it, as ExtJS panels
are more generic compared to grids, which always get a backing store.
Anyway, in the parent add a listener that copies any data to the
in-memory stores of the child grids for each affinity rule type. In
the child grids, relay any store load (e.g., after
adding/changing/deleting a rule) to the parent.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
While it's great to auto-generate that by default, it should be still
(optionally) visible and maybe even per default, but then we need to
also allow overriding it on edit.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Avoid single-use intermediate variables and make use of arrow
functions for simple cases where a ternary works well.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
No benefit in having this pre-created and assigned to a variable. In
our more modern components we normally try to have as much as possible
defined declaratively, and ideally avoid having an initComponent
completely, or at least keep it to the required minimum.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
No real benefit in doing this chained in the RulesView class, and
it causes lots of extra indentation.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Avoid stringifying the error here, as that will most likely just make
it output "[object]"; rather pass it as an additional parameter to
console.warn, which then makes it print as a nice object that can be
interacted with.
Originally-by: Gabriel Goller <g.goller@proxmox.com>
[TL: split out of bigger patch that became obsolete]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
As otherwise one might be confused about what to do, e.g. if they
only tried the old IPAM once for testing.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It's fine since PVE 8.3, as there we replaced those files with a
better system, but use 8.4 as that's required for the upgrade anyway.
Admins that want to clean these things up before the major upgrade can
know that it's safe to do so.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Check if there are any not-yet-migrated ipam / mac cache files in
pmxcfs. Those should have been migrated over in the pve-network
postinst, but if something went wrong during this process we can
explicitly notify users here again to avoid any unpleasant surprises
after the upgrade.
If all nodes are on PVE 9 it is safe to delete the legacy files, so
print a notice informing the users.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250801121029.201766-3-s.hanreich@proxmox.com
But cut off after 29 (well 30, I hate it when a list gets cut off with
an entry like "one additional entry omitted", just print it instead of
that single cut-off entry!) to avoid spamming the log if there, for
whatever reason, are still many files that were not migrated on huge
setups.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The find command previously also found the already migrated RRD files
under `pve-{vm,node,storage}-9.0` and reported them as needing
migration. The provided command would of course not migrate them, so
the warning persisted even after the command was run.
Limit the find command to the old `pve2-` prefixed folders to prevent
that.
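The narrowed scan can be illustrated like this (directory names are assumptions based on the commit message, and a temporary directory stands in for the real RRD base directory):

```shell
# Only the legacy pve2-* folders are scanned; already migrated
# pve-{vm,node,storage}-9.0 files are no longer reported.
db=$(mktemp -d)
mkdir -p "$db/pve2-vm" "$db/pve-vm-9.0"
touch "$db/pve2-vm/100" "$db/pve-vm-9.0/100"
find "$db" -path '*/pve2-*' -type f
```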
Reported-by: Friedrich Weber <f.weber@proxmox.com>
Reported-by: Daniel Herzig <d.herzig@proxmox.com>
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Tested-by: Daniel Herzig <d.herzig@proxmox.com>
Tested-by: Michael Köppl <m.koeppl@proxmox.com>
Link: https://lore.proxmox.com/20250801091336.68133-1-s.sterz@proxmox.com
Pressure stall information values are actually in percent. This is
not mentioned explicitly in the documentation, but it is, e.g., in
the kernel source [1]:
> The percentage of wall clock time spent in those compound stall
> states gives pressure numbers between 0 and 100 for each resource,
> where the SOME percentage indicates workload slowdowns and the FULL
> percentage indicates reduced CPU utilization:
>
> %SOME = time(SOME) / period
> %FULL = time(FULL) / period
Thus, also display them as percent in the GUI.
This reverts commit 087af55863.
[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/sched/psi.c?h=v6.16#n52
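As a sketch of what this means for display: the avg fields in a PSI line are already percentages over their respective windows, so they can be shown directly (the sample line is made up, but follows the /proc/pressure format):

```shell
# Extract the 10-second average from a sample PSI line; no further
# scaling is needed, the value is already a percentage.
line='some avg10=1.53 avg60=0.87 avg300=0.22 total=1234567'
avg10=$(printf '%s\n' "$line" | sed -n 's/.*avg10=\([0-9.]*\).*/\1/p')
echo "CPU some pressure (10s avg): ${avg10}%"
```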
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20250731135147.138187-1-f.weber@proxmox.com
Extend the VM precondition check to show whether a migration of a VM
results in any additional migrations because of positive HA resource
affinity rules or if any migrations cannot be completed because of any
negative resource affinity rules.
In the latter case these migrations would be blocked when executing the
migrations anyway by the HA Manager's CLI and its state machine, but this
gives a better heads-up about this. However, additional migrations are
not reported in advance by the CLI yet, so these warnings are crucial to
warn users about the comigrated HA resources.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Link: https://lore.proxmox.com/20250730181428.392906-19-d.kral@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Extend the container precondition check to show whether a migration of a
container results in any additional migrations because of positive HA
resource affinity rules or if any migrations cannot be completed because
of any negative resource affinity rules.
In the latter case these migrations would be blocked when executing the
migrations anyway by the HA Manager's CLI and its state machine, but this
gives a better heads-up about this. However, additional migrations are
not reported in advance by the CLI yet, so these warnings are crucial to
warn users about the comigrated HA resources.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Link: https://lore.proxmox.com/20250730181428.392906-18-d.kral@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add HA resource affinity rules as a second rule type to the HA Rules'
tab page as a separate grid so that the columns match the content of
these rules better.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Link: https://lore.proxmox.com/20250730181428.392906-17-d.kral@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
got replaced by a modern Yew and Proxmox Yew Widget toolkit based
mobile UI with more features and supporting all modern TFA variants.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The new pve-docs we now depend on supports a --allow-missing flag for
the asciidoc-pve scan-extjs command, which, if set, will demote errors
for missing onlineHelp references to warnings and fall back to a link
to the main single-page admin guide.
One can easily enable this by using `export ALLOW_MISSING=1` before
the build or by passing that variable to make like:
make ALLOW_MISSING=1 OnlineHelpInfo.js
in the www/manager6 source directory.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Introduce HA rules and replace the existing HA groups with the new HA
node affinity rules in the web interface.
The HA rules components are designed to be extensible for other new
rule types and allow users to display the errors of contradictory HA
rules, if there are any, in addition to the other basic CRUD
operations.
HA rule ids are automatically generated with a 13 character UUID string
in the web interface, as also done for other concepts already, e.g.,
backup jobs, because coming up with future-proof rule ids that cannot be
changed later is not that user friendly. The HA rule's comment field is
meant to store that information instead.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Link: https://lore.proxmox.com/20250730175957.386674-30-d.kral@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
As the HA groups' failback flag is now part of the HA resources
config, it should also be shown there instead of in the previous HA
groups view.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Link: https://lore.proxmox.com/20250730175957.386674-29-d.kral@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Remove the HA group column from the HA Resources grid view and the HA
group selector from the HA Resources edit window, as these will be
replaced by semantically equivalent HA node affinity rules in the next
patch.
Add the field 'failback' that is moved to the HA Resources config as
part of the migration from groups to node affinity rules.
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Link: https://lore.proxmox.com/20250730175957.386674-28-d.kral@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
ZFS is a complex stack; knowing that the metric shows the ARC size
can help with understanding this.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Avoid rushing this; besides, it could be de-duplicated code-wise, and
might even go into widget-toolkit so that on the use-site one only
passes a config option with the respective (get)text along, at least
if this should become a generic concept (which would be good to avoid
making it feel one-off "tacked-on").
It might also look better when placed directly beside the title (on
the right side of it), not the legend.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
As the new RRD files are quite a bit larger than the old ones, we should
check if the estimated required space is actually available and let the
users know if not.
Secondly, it is possible that a new resource is added while the node
is migrating the RRD files, leaving some not yet migrated to the new
format. Therefore, check for that too and let the user know how they
can migrate the remaining RRD files.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
by adding the new memhost field, which is populated for VMs, and
using it if the guest is of type qemu and the field is numerical.
As a result, if the cluster is in a mixed PVE8 / PVE9 situation, for
example during a migration, we will not report any host memory usage, in
numbers or percent, as we don't get the memhost metric from the older
PVE8 hosts.
Fixes: #6068 (Node Search tab incorrect Host memory usage %)
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
They were missing and just showed the actual field names.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
This way, we can provide a bit more context to what the graph is
showing. Hopefully making it easier for our users to draw useful
conclusions from the provided information.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
We switch the memory graph to a stacked area graph, similar to what we
have now on the node summary page.
Since the order is important, we need to define the colors manually, as
the default color scheme would switch the colors as we usually have
them.
Additionally, we add the host memory view as another data series, but
we keep it as a single line without fill. We chose the grey tone so
that it works for both bright and dark themes.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
With the new memhost field, the vertical space is getting tight. We
therefore reduce the height of the separator boxes.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Pressures are indications that processes needed to wait for their
resources. While 'some' means that some of the processes on the host
(node summary) or in the guest's cgroup had to wait, 'full' means that
all processes couldn't get the resources fast enough.
We set the colors accordingly. For 'some' we use yellow, for 'full' we
use red.
This should make it clear that this is not just another graph, but
indicates performance issues. It also sets the pressure graphs apart
from the other graphs that follow the usual color scheme.
Originally-by: Folke Gleumes <f.gleumes@proxmox.com>
[AL:
* rebased
* reworked commit msg
* set colors
]
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
To display the used memory and the ZFS ARC as separate data points,
keeping the old overlapping filled line graphs won't work anymore. We
therefore switch them to area graphs, which are stacked by default.
The order of the fields is important here as it affects the stacking
order. This means we also need to override the colors manually to
keep them as they used to be.
Additionally, we don't use the 3rd color in the default extjs color
scheme, as that would be dark red [0]. We go with a color that is
different enough and not associated as a warning or error: dark-grey.
[0] https://docs.sencha.com/extjs/7.0.0/classic/src/Base.js-6.html#line318
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
The new columns we get from RRD are added.
Since we are switching the memory graphs to stacked graphs, we need to
handle them a bit differently because:
* gaps are not possible; we need to have a value, ideally 'null' when
there is no data, which makes it easier to handle in the tooltip
* we calculate some values instead of taking the ones received from
RRD, as otherwise the memory graphs can be _wobbly_. For example, for
the node memory we have memused + arcsize + memavailable; those will
not always line up perfectly in the gathered data to match the total
physical memory. Similar for the memory graphs of guests.
For nodes, the values we calculate are:
* memused-sub-arcsize: because the arcsize is included in memused, but
we want to show it as a separate part of the graph if we do have that
information.
If we don't have the arcsize values (older node for example), we set
it to 0.
* memfree-capped: instead of memavailable we calculate the free memory
to avoid memory graphs that have wobbles and spikes due to timing
differences when gathering the data.
For guests:
* memfree-capped: We cannot just have two line graphs, but will stack
them as well to match the node graph. Therefore we need to subtract
memused from maxmem, so that in total both data lines will add up to
maxmem.
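The derived series can be sketched as simple arithmetic (field names follow the commit message; the sample numbers are made up):

```shell
# Stacked series must always sum to the total, so derive them instead
# of taking the raw RRD values.
maxmem=16384 memused=6144 arcsize=1024   # MiB, made-up sample values
echo "memused-sub-arcsize=$(( memused - arcsize ))"
echo "memfree-capped=$(( maxmem - memused ))"
```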
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
if the new rrd pve-node-9.0 files are present, they contain the current
data and should be used.
'decade' is now possible as timeframe with the new RRD format.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
If we see that the migration to the new pve-{type}-9.0 rrd format has been done
or is ongoing (new dir exists), we collect and send out the new format with additional
columns for nodes and VMs (guests).
Those are:
Nodes:
* memfree
* arcsize
* pressures:
* cpu some
* io some
* io full
* mem some
* mem full
VMs:
* memhost (memory consumption of all processes in the guest's cgroup -> host view)
* pressures:
* cpu some
* cpu full
* io some
* io full
* mem some
* mem full
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
By using the mtime of the parent directory where all locale i18n files
are located we ensure that every time a file is added/replaced/deleted
we will get a new value.
As both dpkg and install recreate existing files this should always
result in such an mtime update when either installing a new package or
directly installing the build artefacts to the root system
(circumventing dpkg).
While the explicit version would be slightly nicer, it's only relevant
for debugging and can be correlated with the local file and the one
from the debian package – which always has the time stamp from
the current d/changelog entry.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Adds a new checkbox to the migration dialog, if it is a
live/online-migration and both the source and target nodes have support
for our dbus-vmstate helper.
If the checkbox is active, it passes along the `with-conntrack-state`
parameter to the migrate API call.
Reviewed-by: Stefan Hanreich <s.hanreich@proxmox.com>
Tested-by: Stefan Hanreich <s.hanreich@proxmox.com>
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Link: https://lore.proxmox.com/20250730094549.263805-11-c.heiss@proxmox.com
This endpoint provides information about migration capabilities of the
node. Currently, only support for dbus-vmstate is indicated.
Tested-by: Stefan Hanreich <s.hanreich@proxmox.com>
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Link: https://lore.proxmox.com/20250730094549.263805-10-c.heiss@proxmox.com
The 'maxfiles' parameter has been deprecated since the addition of
'prune-backups' in the Proxmox VE 7 beta.
Drop the tests that only had maxfiles or both, but adapt the mixed
tests for CLI/backup/storage precedence.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250718125408.133376-5-f.ebner@proxmox.com
Currently, guest replication is guarded with Datastore.Allocate on
'/storage', which is rather surprising. One could require
Datastore.AllocateSpace on all involved storages, but having a
dedicated privilege, as for other VM operations like migration and
snapshot, seems more natural.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
FG: add versioned dependency
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The RPCEnvironment's check() method is used without $noerr, so it will
already fail and raise a permission exception when the privilege is
missing.
The usage in the job_status endpoint can be simplified, as the
raise_perm_exc() there is dead code.
The other two usages actually want to set the $noerr argument. In
particular, this makes it possible to use the 'status' endpoint, when
the user does not have VM.Audit for all guests with a replication job
and to read the log with only Sys.Audit privilege on the node. Both
would previously fail, because the check for VM.Audit would raise an
exception already.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The RPCEnvironment's check() method is used without $noerr, so it will
already fail and raise the proper permission exception when the
privilege is missing.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
pvestatd runs the update loop for every network device present on the
host. This includes many dynamically created / removed network
interfaces (tap devices, veths, fw bridges). This triggers a warning
if the network device isn't yet in the cached version of the ip link
output:
pvestatd[1011]: Use of uninitialized value in string eq at /usr/share/perl5/PVE/Network.pm line 998.
Silently ignore such errors for now, since they should normally affect
only dynamically created devices and no physical devices. Even
for hotplugged physical devices this should be corrected the next time
the ip link cache gets invalidated. The worst case scenario in this
case is at most 15 minutes of missing netin/out metrics for that link.
Reported-by: Hannes Dürr <h.duerr@proxmox.com>
Reported-by: Max Carrara <m.carrara@proxmox.com>
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250730124451.184806-1-s.hanreich@proxmox.com
The pve-lxc-syscalld systemd service currently uses /run/pve as a
runtime directory. This means that when the service is restarted, the
directory will be recreated. But the /run/pve directory is not just
used as the runtime directory of this service, but also for other
things, e.g. storage tunnel and mtunnel sockets, container stderr logs
as well as pull metric cache and lock, which will be lost when the
service is restarted.
The plan is to give the service its own runtime directory that is only
used for that purpose and nothing else. However, this means the
/run/pve directory will not get created automatically anymore (e.g.
pull metric relies on the existence already). Add this tmpfiles.d
configuration to create it automatically again. Note that the
permissions/owner are different now. As the runtime directory, it was
created with 0755 root:root. This tmpfiles configuration
changes this to 0750 root:root.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250723144131.170616-2-f.ebner@proxmox.com
This ensures the installed Perl based http api-server supports WASM
and MO files (for Yew mobile GUI) and understands the new MAX_WORKER
daemon config setting.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The number of pvedaemon worker processes is currently hardcoded to 3.
This may not be enough for automation-heavy workloads that trigger a
lot of API requests that are synchronously handled by pvedaemon.
Hence, read /etc/default/pvedaemon when starting pvedaemon and allow
overriding the number of workers by specifying MAX_WORKERS in this
file. All other values are only relevant for pveproxy/spiceproxy and
thus ignored.
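The override file might then contain just (an illustrative sketch; 8 is an arbitrary example value):

```
# /etc/default/pvedaemon
# Only MAX_WORKERS is honored by pvedaemon; the remaining
# pveproxy/spiceproxy settings in this file are ignored.
MAX_WORKERS=8
```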
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20250729155227.157120-4-f.weber@proxmox.com
The number of pveproxy worker processes is currently hardcoded to 3.
This may not be enough for automation-heavy workloads that trigger a
lot of API requests that are synchronously handled by pveproxy.
Hence, allow specifying MAX_WORKERS in /etc/default/pveproxy to
override the number of workers.
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20250729155227.157120-3-f.weber@proxmox.com
To ensure that, e.g., ip_link_details is available at build time and
that the new flexible physical interface naming works at runtime;
that's why there are different version requirements for build and use.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Without this a `udevadm verify` execution warns about:
70-virtual-function-pinning.rules:1 style: whitespace after comma is expected.
70-virtual-function-pinning.rules:1 style: whitespace after comma is expected.
70-virtual-function-pinning.rules:1 style: whitespace after comma is expected.
70-virtual-function-pinning.rules: udev rules have style issues.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
While the commit adding this cache is right in that this data probably
won't change all that often, adding a simple time-based expiration of
the cache is not much extra code, and the normally 96 fork+execs of
'ip link' per day should really not be noticeable at all.
FWIW, if we wanted to make this more efficient and reduce the latency
on changes, we could hook into udev and touch some in-memory flag file
to signal pvestatd that it should requery this info. As this is an
internal mechanism, we can change it any time, so can still be done in
the future.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
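The time-based expiration idea can be sketched as a small shell helper.
This is illustrative only; the real implementation lives in pvestatd's
Perl code, and the cache path and TTL in the usage line are made-up
values:

```shell
# cache_cmd CACHE_FILE TTL_SECONDS CMD... -- run CMD at most once per
# TTL seconds, otherwise serve the cached output from CACHE_FILE.
cache_cmd() {
    local cache=$1 ttl=$2
    shift 2
    local now mtime
    now=$(date +%s)
    if [ -f "$cache" ]; then
        mtime=$(stat -c %Y "$cache")
        # serve the cached copy while it is younger than the TTL
        if (( now - mtime < ttl )); then
            cat "$cache"
            return 0
        fi
    fi
    "$@" > "$cache"
    cat "$cache"
}

# e.g.: cache_cmd /run/pvestatd/ip-link.json 900 ip -details -json link show
```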
With the changes to physical interface detection in pve-common and
pve-manager, it is now possible to use arbitrary names for physical
interfaces in our network stack. This allows the removal of the
existing, hardcoded, prefixes.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250729171649.708219-4-s.hanreich@proxmox.com
pve-common now allows arbitrary names for physical interfaces, without
being restricted by PHYSICAL_NIC_RE. In order to detect physical
interfaces, pvestatd now needs to query 'ip link' for the type of an
interface instead of relying on the regular expression.
On the receiving end, PullMetric cannot consult 'ip link' for
determining which interface is physical or not. To work around that,
introduce a new type key, that carries information about the type of
an interface. When aggregating the metrics, PullMetric can now read
this additional parameter, to infer the type of the interface (either
physical or virtual).
To avoid spawning a process in every update loop of pvestatd, cache
the output once and then use it throughout the lifecycle of pvestatd.
Physical interfaces rarely get added / removed without a reboot, so we
can cache this indefinitely. Users can always restart the service to
refresh the information about physical interfaces.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250729171649.708219-3-s.hanreich@proxmox.com
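As a simplified, hypothetical stand-in for the 'ip link' query described
above: on Linux, physical interfaces expose a 'device' symlink to their
backing bus device in sysfs, while purely virtual ones (bridges, bonds,
veth, lo) do not. The SYSFS_NET variable exists only so the check can be
pointed at a mock tree; pvestatd itself queries 'ip link' as described.

```shell
# Default to the real sysfs tree; override SYSFS_NET for testing.
SYSFS_NET=${SYSFS_NET:-/sys/class/net}

# is_physical_iface IFACE -- succeed if IFACE is backed by a bus device.
is_physical_iface() {
    [ -e "$SYSFS_NET/$1/device" ]
}
```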
Just for initial testing, this way one can relatively easily switch
back to the old UI for comparison (of how bad its state is).
While I really do not think this is strictly required, it allows
moving along a newer pve-manager without the Yew PVE GUI package, so
can be useful.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Use the new mobile web UI written in Rust with Yew and our Yew widget
toolkit as replacement for the old Sencha Touch based GUI.
The new mobile UI is modeled on our Flutter based app for basic style
and widgets.
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Link: https://lore.proxmox.com/20250722072256.3168428-1-dietmar@proxmox.com
[TL: raise d/control version to current, reword commit message a bit]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
previously the help button would disappear once either the
"Notifications" or "Retention" tab was opened. this removes an
unnecessary extra container and sets the value for all tabs so that the
help button stays present.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250718133846.101784-1-s.sterz@proxmox.com
This patch adds OpenTelemetry metrics collection to the PVE manager
to improve observability and monitoring capabilities.
The implementation includes:
- OTLP/HTTP JSON protocol support for OpenTelemetry Collector
- Comprehensive metrics collection for nodes, VMs, containers, and storage
- Batching with configurable size limits and compression
- Full compliance with OpenTelemetry v1 specification
Technical features:
- Server/port configuration with HTTP/HTTPS protocol support
- Gzip compression with configurable body size limits (default 10MB)
- Custom HTTP headers (Bearer tokens, API keys)
- Resource attributes support (Unicode support)
- Timeout control and SSL certificate verification options
- Recursive metrics conversion supporting all PVE data types
This plugin also implements proper metric type classification to
distinguish between:
Counter metrics (cumulative values):
- Network traffic: transmit, receive, netin, netout
- Disk I/O: diskread, diskwrite
- Block operations: *_operations, *_merged
- CPU time (cpustat context): user, system, idle, iowait, etc.
Gauge metrics (instantaneous values):
- Memory usage, CPU percentages, storage space, etc.
It should also be compatible with Prometheus.
Signed-off-by: Nansen Su <nansen.su@sianit.com>
Link: https://lore.proxmox.com/20250722095526.4164885-2-nansen.su@sianit.com
[TL: re-format code to fix various white space & indentation errors]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
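The counter/gauge split above can be sketched as a small classifier; the
metric names and the cpustat special case come from the commit message,
while the function itself is illustrative (the actual plugin is Perl):

```shell
# classify_metric NAME [CONTEXT] -- print 'counter' or 'gauge'.
classify_metric() {
    local name=$1 context=${2:-}
    case "$name" in
        # cumulative network and disk I/O values
        transmit|receive|netin|netout|diskread|diskwrite) echo counter ;;
        # block operation counters
        *_operations|*_merged) echo counter ;;
        user|system|idle|iowait)
            # CPU time fields are cumulative only in the cpustat context
            if [ "$context" = cpustat ]; then echo counter; else echo gauge; fi ;;
        # everything else is an instantaneous value
        *) echo gauge ;;
    esac
}
```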
As otherwise one gets no feedback from the info levels that the tests
are not applicable at all, thus explicitly skip for the case
where there is no RBD storage at all or where there are only
PVE-managed ones.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
With the switch from QEMU's -drive to -blockdev, it is no longer
possible to pass along the Ceph 'keyring' option via the QEMU
commandline, as was previously done for externally managed RBD
storages. For such storages, it is now necessary that the 'keyring'
option is correctly configured in the storage's Ceph configuration.
For newly created external RBD storages in Proxmox VE 9, the Ceph
configuration with the 'keyring' option is automatically added, as
well as for existing storages that do not have a Ceph configuration
at all yet. But storages that already have an existing Ceph
configuration are not automatically handled by the storage layer in
Proxmox VE 9; the check and script here cover those as well.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Max R. Carrara <m.carrara@proxmox.com>
Tested-by: Max R. Carrara <m.carrara@proxmox.com>
Link: https://lore.proxmox.com/20250716124716.104765-1-f.ebner@proxmox.com
If a specific interface is specified via the interface parameter,
users can now additionally specify a target-name. This makes it easier
for users to assign specific names to specific interfaces, according
to their preferences.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250724093459.76397-4-s.hanreich@proxmox.com
Instead of printing a separate line for each altname, the tool now
only prints one line per physical interface. The primary name is used
as an identifier and the altnames are printed additionally in
parentheses (if they exist). Additionally, the output is now sorted by
ifindex (just as the pin order), so interfaces should now be printed
in ascending order.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250724093459.76397-3-s.hanreich@proxmox.com
While ifindex is not guaranteed to be stable across reboots, it seems
like a good enough heuristic for making sure interfaces with multiple
ports are grouped together when pinning.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250724093459.76397-2-s.hanreich@proxmox.com
Currently, the first character can also be a digit, '.', '-', or '_'.
Almost all other configuration IDs in Proxmox VE require starting with
a letter, so force this for new pool names too.
A pool with ID '0' can be added, but not parsed, because it will
evaluate to false in PVE/AccessControl.pm's parse_user_config():
> if (!verify_poolname($pool, 1)) {
> warn "user config - ignore pool '$pool' - invalid characters in pool name\n";
> next;
> }
It's likely that it would cause other issues as well if properly
handled there.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250724083205.15646-1-f.ebner@proxmox.com
In some cases, autoactivation was spelled as "auto-activation". Remove
the dash for consistency with other occurrences and the LVM
documentation.
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20250728152400.972595-1-f.weber@proxmox.com
The key in the ip link output is actually called linkinfo. Before this
patch, members of bond interfaces that inherit the MAC address of the
bond would have a wrong MAC in their generated .link file.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250722151452.378352-1-s.hanreich@proxmox.com
Since the renaming of virtual functions is now handled dynamically via
a custom udev rule, proxmox-network-interface-pinning needs to
additionally filter all existing virtual functions when pinning
network interface names, since otherwise they would also get a pinned
name.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250722145223.351778-3-s.hanreich@proxmox.com
This commit adds a udev rule that triggers for every network device
that gets added. It checks if the network device is a VF and if the
parent device is pinned. If it is pinned, then generate a new name for
the VF which consists of the pinned name of the parent device, as well
as the index of the VF.
It relies on the network device driver exposing the information via
sysfs, which was the case in my tests for mlx5_core, igb and bnxt_en.
Specifically it checks if a device is a virtual function by checking
for the existence of:
/sys/class/net/<iface>/device/physfn
It then follows that symlink and infers the vf index by looking at the
virtfnX symlinks in the folder above.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250722145223.351778-2-s.hanreich@proxmox.com
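The sysfs lookup described above can be sketched as follows. The
physfn/virtfnN layout follows the commit message, the helper itself is
hypothetical, and SYSFS_NET is only parameterized so the logic can run
against a mock tree:

```shell
SYSFS_NET=${SYSFS_NET:-/sys/class/net}

# vf_index IFACE -- print the VF index of IFACE, or fail if it is no VF.
# A VF's device directory contains a 'physfn' symlink to its parent PF,
# and the PF directory has 'virtfnN' symlinks back to each VF.
vf_index() {
    local iface=$1 physfn link
    physfn=$SYSFS_NET/$iface/device/physfn
    [ -e "$physfn" ] || return 1   # not a virtual function
    for link in "$physfn"/virtfn*; do
        # the virtfnN symlink pointing back at our own device wins
        if [ "$(readlink -f "$link")" = "$(readlink -f "$SYSFS_NET/$iface/device")" ]; then
            echo "${link##*virtfn}"
            return 0
        fi
    done
    return 1
}
```

The generated name would then be the parent's pinned name plus this
index.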
sequoia dropped the `--binary` flag for keyring generation and only
outputs ASCII-armored keyrings. so make it dearmor the key afterward
to keep the key format consistent with the file ending (".gpg") again.
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250722123658.196232-2-s.sterz@proxmox.com
debian trixie based installs don't ship with gpgv, so take this
opportunity and use sqv directly. sqv can deal with both armored and
dearmored keys. this has the side-effect of closing #6539, which
occurred due to sequoia dropping the `--binary` option for merging
keys into a keyring, which would always output them in an armored
format. gpgv cannot handle armored keys and would therefore fail to
verify signatures.
while sqv is pre-installed, adding it as an explicit dependency should
still avoid problems if it is removed at some point (like gpgv was).
Closes: #6539
Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250722123658.196232-1-s.sterz@proxmox.com
Since postfix (3.9.1-7) the templated postfix@-.service instance is
gone again and the non-templated postfix.service is back, so cope with
that here.
Closes: #6537
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
And the links too, as otherwise this is rather easy to overlook, and
there's no downside in using more lines.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
... and early exit for the odd case that there is no lvm config, no
point in failing the manager postinst then, as nothing can be filtered
anyway.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The old code worked but used a syntax in the sed command that was a
bit hard to read and even confused my vim syntax highlighting into
thinking the quote was still open for the rest of the file.
Split into single and double quotes as needed, this also separates
backslash escaping for shell and for sed more clearly.
While at it ensure the new marker gets added, as otherwise one might
gain two entries.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
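The quoting pattern can be illustrated like this (function name, file
contents and marker text are made up): keep literal sed syntax in single
quotes, expand shell variables only inside double quotes, and check for
the marker first so a re-run does not add a second entry.

```shell
# add_marker_once CONF -- append a marker line exactly once.
add_marker_once() {
    local conf=$1
    local marker='# managed by pve-manager'   # single quotes: no shell expansion
    # the marker check keeps reruns from gaining a second entry
    grep -qF "$marker" "$conf" && return 0
    # literal sed address/command in single quotes, the shell-expanded
    # text as a separate double-quoted -e fragment
    sed -i -e '$a\' -e "$marker" "$conf"
}
```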
As for upgrades (or installations on top of Debian) a user will
already have a valid PVE 9 test repo setup upfront, as otherwise they
could not do the upgrade anyway.
Report: https://forum.proxmox.com/threads/168619/post-784373
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We would regenerate all configuration files, even if no interface
would be mapped. While this shouldn't cause an issue, it's
unnecessary, has potential for creating bugs and leads to confusing
output for the users - so just abort in this case instead.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250718162610.443895-3-s.hanreich@proxmox.com
The fix introduced in 5b5db0e67 fixed the scenario where pins were
already applied, but broke the case where they weren't yet applied,
since now not-yet-applied pins would get overwritten again on
subsequent invocations of the pinning tool.
Move the check for the same name to the update_etc_network_interfaces
call, where it is actually required - instead of filtering already in
the resolve_pinned function, which is also used in the generate body
to filter eligible links for pinning.
Fixes: 5b5db0e67
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250718162610.443895-2-s.hanreich@proxmox.com
We add a new function to handle different key names, as it would
otherwise become quite unreadable.
It checks which key format exists for the type and resource:
* the old pve2-{type} / pve2.3-vm
* the new pve-{type}-{version}
and will return the one that was found. Since we will only have one key
per resource, we can return on the first hit.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Link: https://lore.proxmox.com/20250715143218.1548306-16-a.lauterer@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
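A bash sketch of that lookup (the real code is Perl in pve-manager; the
'9.0' version component is a placeholder assumption, and in the real
code the pve2.3-vm variant only applies to VMs):

```shell
# find_rrd_key DATA_VAR TYPE ID -- echo the first key format present.
find_rrd_key() {
    local -n _data=$1
    local type=$2 id=$3 key
    # try the new pve-{type}-{version} format first, then the old variants
    for key in "pve-$type-9.0/$id" "pve2.3-vm/$id" "pve2-$type/$id"; do
        if [ -n "${_data[$key]+x}" ]; then
            # only one key exists per resource, so the first hit wins
            echo "$key"
            return 0
        fi
    done
    return 1
}
```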
pve2.3-vm has been introduced with commit 3b6ad3ac back in 2013. By now
there should not be any combination of clustered nodes that still send
the old pve2-vm variant.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Link: https://lore.proxmox.com/20250715143218.1548306-15-a.lauterer@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Check for any changes between the running config and the currently
applied config and guard against executing pve-sdn-commit if the
configuration is unchanged.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250718123313.208460-4-s.hanreich@proxmox.com
Querying the VM IP requires VM.GuestAgent.Audit now and accessing the
QEMU HMP monitor requires Sys.Audit.
Reported-by: Max Carrara <m.carrara@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250718115527.105844-1-f.ebner@proxmox.com
A fabric name is limited to 8 characters. (This is due to the
dummy interface, which is named 'dummy_<fabric_name>' and thus limited
by the maximum interface name length in the kernel.) This shows a nicer
error message instead of only "Invalid character".
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Link: https://lore.proxmox.com/20250718120213.325141-1-g.goller@proxmox.com
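An illustrative check, assuming the kernel's 15-character usable
interface name limit (IFNAMSIZ minus the NUL byte): 'dummy_' is 6
characters, so an 8-character fabric name stays within it.

```shell
# check_fabric_name NAME -- reject names whose dummy interface would
# not fit into the kernel's interface name limit.
check_fabric_name() {
    local name=$1
    if [ "${#name}" -gt 8 ]; then
        echo "fabric name '$name' is too long (max 8 characters)" >&2
        return 1
    fi
}
```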
This was overlooked when moving some checks to a new virtual guest
specific section.
Adapt the name for the list of node-local VM & CT IDs to differentiate
them from the cluster wide list.
Fixes: a3b63156a ("8to9 upgrade checks: add dedicated section for virtual guest checks")
Reported-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The API returns an object where a format can map to 0 (unsupported) or
1 (supported). However, when deciding whether to disable the format
selector by counting the number of valid formats, the GUI only took
the number of entries into account, not the actual values, so it would
show the format selector even though there is only one entry with
value 1. Fix this by taking the values into account when counting the
number of valid formats.
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20250718102532.61149-1-f.weber@proxmox.com
[TL: run through proxmox-biome format]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
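The fix in plain terms, sketched in shell rather than the actual ExtJS
code: count only formats whose value is 1, not every key of the
returned object.

```shell
# count_supported_formats FMT_VAR -- count entries mapped to 1.
count_supported_formats() {
    local -n _fmts=$1
    local n=0 f
    for f in "${!_fmts[@]}"; do
        # only a value of 1 marks the format as actually supported
        [ "${_fmts[$f]}" = 1 ] && n=$((n + 1))
    done
    echo "$n"
}

# the GUI would then disable the selector when the count is at most one
```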
As the Misc. section gets rather crowded and virtual guests are
definitively "worth" their own section.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
PVE 9 will automatically set the host_mtu parameter for VMs to the
bridge MTU if the MTU field of the network device is unset. Check for
any interface that would be affected by a change in MTU after the
upgrade.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250717175018.606662-1-s.hanreich@proxmox.com
These checks were only relevant for the upgrade to PVE 8 and the
messages talking about a new PVE namespace or dropped
Permission.Modify privilege do not apply anymore.
Keep the infrastructure for checking custom roles intact for future
checks.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250717133711.84715-7-f.ebner@proxmox.com
We generate a list of existing pins as a reference throughout the
pinning tool. It works by reading the existing link files and looking
up interfaces with the corresponding MAC address. If pins have already
been applied, this would return a mapping of the pinned name to itself
(nic0 => nic0).
We use this list for filtering what we write to the pending
configuration, in order to avoid re-introducing already pinned names
to the pending configuration. This reflexive entry would cause the
interfaces file generation to filter all pinned network interfaces
after reboot, leading to invalid ifupdown2 configuration files. Fix
this by filtering entries in the existing-pins list that are reflexive.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250717152841.397830-7-s.hanreich@proxmox.com
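The filtering step can be sketched as follows (illustrative only; the
pinning tool itself is not shell): drop reflexive entries, i.e. pinned
names mapping to themselves, from the existing-pins map before using it
as a filter.

```shell
# filter_reflexive_pins MAP_VAR -- remove entries like nic0 => nic0.
filter_reflexive_pins() {
    local -n _pins=$1
    local k
    for k in "${!_pins[@]}"; do
        # a key mapping to itself means the pin was already applied
        [ "$k" = "${_pins[$k]}" ] && unset "_pins[$k]"
    done
    return 0
}
```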
lock_file sets the error variable in Perl, but does not die if it
encounters an error in the callback. All other invocations of
lock_file already die, but that was missing for writing the interfaces
file, which swallowed errors.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250717152841.397830-6-s.hanreich@proxmox.com
The fabric configuration references interfaces of nodes, which need to
be updated when pinning network interface names as well. Use the new
helper provided by perlmod for that.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250717152841.397830-5-s.hanreich@proxmox.com
The API for generating the SDN configuration has been changed in the
fabrics patch series ('387cc48'). Use the new API to commit the SDN
configuration on boot, since otherwise the one-shot service fails to
apply the SDN configuration on boot.
The service was also missing an ifreload, since the ifupdown2 config
gets regenerated by SDN and needs to be applied before generating the
FRR configuration in order for the FRR config generation to work
properly.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250717152841.397830-4-s.hanreich@proxmox.com
Increase the width of the OSPF and OpenFabric add and create windows.
This looks more pleasant and the input fields aren't that crammed
anymore.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Link: https://lore.proxmox.com/20250717120639.401311-1-g.goller@proxmox.com
Leaving the MTU field unset now defaults to the bridge MTU, rather
than 1500. Reflect this change by indicating the new behavior in the
emptyText of the MTU field. While we're at it, add a gettext call so
it can be translated. I've taken the same text as from the container
dialogue, so it should already use existing translations.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250717175012.606372-3-s.hanreich@proxmox.com
We also changed the name of the option itself while we still had the
chance (no public release yet).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Coming up with a good and self-explanatory terminology for this is
rather hard, but the previous one was definitively wrong.
So, name it with the following in mind:
1. avoid using "qcow2", as
- that might confuse users in thinking the feature uses qcow2's
built-in snapshot support, which it does NOT, it rather uses just
qcow2's backing chain (layering) capability–simplified–for
providing a unified view for the different volumes a snapshot
consists of.
- this is not strictly bound to qcow2; any format that supports
layering could be used in theory, e.g. vmdk, it just
has no upside for us doing so, and qcow2 is one of the best
supported formats in QEMU.
2. Use somewhat unique terms; if we can stick to them, everybody can
easily know what one is talking about when they are used somewhere,
or if one encounters a user or potential customer that basically
wants this feature, one can provide them just these terms and
they can find the relevant docs.
"Volume-Chain Snapshots" or "Snapshots as Volume-Chain" fulfil those
points and are definitively better than the status quo.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Only allow changing external-snapshots for existing storages when type
is LVM, as for the directory based ones the option is marked as
`fixed` in the schema, which means it cannot be changed for existing
configuration entries.
While at it, replace the defaultValue with checked, as the former is
not really used anyway when using deleteEmpty.
Reported-by: Lukas Wagner <l.wagner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
GlusterFS has been deprecated with PVE 9, so it is no longer supposed
to be installed. The output of `pveversion -v` however still lists it.
Therefore, drop it from the list of packages to avoid friction for
users.
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Link: https://lore.proxmox.com/20250717072834.119580-1-c.ebner@proxmox.com
This allows to distinguish, for example, the following three cases:
- The pvedaemon service is not responding (595)
- The pveproxy service is not responding (no error status code)
- The authentication credentials are not correct (401)
Since different combinations of wrong password and/or wrong user all
report the same error message, "authentication failure (401)", this
does not leak login information.
Another consideration is that the code `resp.status` does not match the
HTTP code seen in the API (a failed authentication has a return code 200
from the point of view of the browser) and its information is lost
without this change.
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Link: https://lore.proxmox.com/20250424134423.364697-1-m.sandoval@proxmox.com
[TL: resolve merge conflict due to code base reformatting]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In the API listing for all interfaces, and in the GET call for an
individual interface.
For interfaces that are already using an 'altname' in the iface stanza,
we look up the 'legacy' interface name and the remaining altnames and
show those.
If we don't show them, the user might be confused if the bridge ports
and interface names don't correlate.
This also enables the additional alternative names column in the
network view on the GUI.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250715090749.1608768-3-d.csapak@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Not verified and cross-checked in depth, as that would be rather too
much effort; this is rather best-effort.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
VXLAN zones can now use fabrics instead of having to specify peers
manually. Since the network selector doesn't implement deleteEmpty,
we have to manually handle deleted properties in the VXLAN input
panel.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250716130837.585796-75-g.goller@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Expose the new fabric field added to the EVPN controller in the UI.
Users can now select any fabric in the EVPN controller, instead of
having to specify peers manually. This simplifies setting up an EVPN
zone via SDN fabrics considerably.
Since the peers field can now be empty, we have to adapt the existing
field to allow empty values and properly send the delete property when
updating a controller.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250716130837.585796-74-g.goller@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Pass the 'include_sdn' type to the network selectors used in the
datacenter migration settings panel, as well as the ceph wizard, to
enable users to select SDN Vnets, as well as fabrics in the UI.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250716130837.585796-73-g.goller@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In order to be able to show SDN networks in the network selector
dropdowns, we introduce a new type ('include_sdn') to the API endpoint
that lists network interfaces of a node. The return value for existing
parameters stays unchanged to preserve backwards-compatibility.
Callers have to explicitly pass the new type if they want SDN networks
included in the response as well. Only fabrics for which the current
user has any SDN permission (Audit/Use/Modify) are listed.
There is also a new type that only lists fabrics ('fabric'), which
works analogous to the current type filters.
There was a separate type for vnets as well; it was not used anywhere
and was defunct due to a missing check in the endpoint. This has now
been fixed, and supplying vnet as the type should now only return
vnets.
This commit is preparation for integrating the fabrics with several
parts in the UI, such as the Ceph installation wizard and the
migration settings, which use the pveNetworkSelector component that
uses this endpoint to query available network interfaces.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250716130837.585796-72-g.goller@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Remove line-break when showing the current status of SDN configuration
objects. Otherwise the column would contain an additional newline,
making the row too large.
Co-authored-by: Gabriel Goller <g.goller@proxmox.com>
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250716130837.585796-70-g.goller@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
TreeView that shows all the fabrics and nodes in a hierarchical
structure. It also shows all the pending changes from the
running-config. From here all entities in the fabrics can be added /
edited and deleted, utilizing the previously created EditWindow
components for Fabrics / Nodes.
We decided against including all the interfaces (as children of nodes
in the tree view) because otherwise the indentation would be too much
and detailed information on the interfaces is rarely needed, so we
only show the names of the configured interfaces instead.
Co-authored-by: Stefan Hanreich <s.hanreich@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Link: https://lore.proxmox.com/20250716130837.585796-69-g.goller@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Extends the common FabricEdit component and adds the OSPF-specific
items to it.
Co-authored-by: Stefan Hanreich <s.hanreich@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Link: https://lore.proxmox.com/20250716130837.585796-68-g.goller@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add a component that extends the common FabricEdit component and adds
the OpenFabric-specific items to it. Those are currently the Hello
Interval and CSNP interval, which can be configured globally for all
members of the fabric.
Since OSPF does not provide IPv6 support (yet), we also move
the IPv6 prefix to the OpenFabric edit panel, to avoid showing the
IPv6 prefix input field in the OSPF fabric edit panel.
As we don't enable IPv6 forwarding globally when an IPv6 fabric is
created, show a big warning that a user has to enable it manually.
Co-authored-by: Stefan Hanreich <s.hanreich@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Link: https://lore.proxmox.com/20250716130837.585796-67-g.goller@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add generic base component to add and edit Fabrics, which contains the
fields required for every protocol. The properties for every protocol
are stored in different components and each extend this one.
Co-authored-by: Stefan Hanreich <s.hanreich@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Link: https://lore.proxmox.com/20250716130837.585796-66-g.goller@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Extend the generic NodeEdit panel for OSPF. Currently there are no
node-specific properties for OSPF, so leave the additionalItems empty.
Co-authored-by: Gabriel Goller <g.goller@proxmox.com>
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250716130837.585796-65-g.goller@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Extend the common NodeEdit panel with the Openfabric specific
properties. While IPv6 is a property that can be configured on all
nodes in the config, it is currently not supported for OSPF so we only
show it for Openfabric nodes.
Co-authored-by: Gabriel Goller <g.goller@proxmox.com>
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250716130837.585796-64-g.goller@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This component is the base EditWindow for Nodes of all protocols.
It utilizes the existing network endpoint for getting information on
the interfaces of the nodes, as well as the existing pveNodeSelector
component for displaying a node dropdown. In the future we could
provide a single endpoint that accumulates that information
cluster-wide and returns it, eliminating the need for multiple API
calls.
If the node is configured but currently not in the quorate partition,
we show a read-only panel with the current configuration and a warning.
Co-authored-by: Stefan Hanreich <s.hanreich@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Link: https://lore.proxmox.com/20250716130837.585796-63-g.goller@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Define an OSPF-specific InterfacePanel for future use (currently there
are no protocol-specific properties for OSPF interfaces).
Co-authored-by: Stefan Hanreich <s.hanreich@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Link: https://lore.proxmox.com/20250716130837.585796-62-g.goller@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This component extends the InterfacePanel and adds Openfabric specific
form fields. Hello Multiplier is hidden by default, but can be
activated in the column settings of the DataGrid.
Co-authored-by: Stefan Hanreich <s.hanreich@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Link: https://lore.proxmox.com/20250716130837.585796-61-g.goller@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Implements a shared interface selector panel for openfabric and ospf
fabrics. This GridPanel combines data from two sources: the node
network interfaces (/nodes/<node>/network) and the fabrics section
configuration, displaying a merged view of both sources.
It implements the following warning states:
- When an interface has an IP address configured in
/etc/network/interfaces, we display a warning and disable the input
field, prompting users to configure addresses only via the fabrics
interface
- When addresses exist in both /etc/network/interfaces and
/etc/network/interfaces.d/sdn, we show a warning without disabling
the field, allowing users to remove the SDN interface configuration
while preserving the underlying one
Co-authored-by: Gabriel Goller <g.goller@proxmox.com>
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250716130837.585796-60-g.goller@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add the three model definitions for SDN fabrics in a shared Common
module, so they can be accessed by all UI components for the SDN
fabrics.
Co-authored-by: Gabriel Goller <g.goller@proxmox.com>
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250716130837.585796-59-g.goller@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
With the introduction of fabrics, frr configuration generation and
etc/network/interfaces generation have been reworked and renamed for
better clarity, since now not only zones / controllers are responsible
for generating the ifupdown / FRR configuration. Switch this endpoint
over to use the new functions.
We also add a new skip_frr parameter that skips FRR config generation
if set. With the old FRR config generation logic, we never wrote an
empty FRR configuration if all controllers got deleted. This meant
that deleting all controllers still left the previous FRR
configuration on the nodes, never disabling BGP / IS-IS. The new logic
now writes an empty configuration if there is no controller / fabric
configured, fixing this behavior. This has a side effect for users
with an existing FRR configuration not managed by SDN, but utilizing
other SDN features (zones, vnets, ...). Their manual FRR configuration
would get overwritten when applying an SDN configuration. This is
particularly an issue with full-mesh Ceph setups, that were set up
according to our Wiki guide [1]. Users with such a full-mesh setup
could get their FRR configuration overwritten when using unrelated SDN
features. Since this endpoint is called *after* committing the new SDN
configuration, but handles writing the FRR configuration, we need a
way to signal this endpoint to skip writing the FRR configuration from
the `PUT /cluster/sdn` endpoint, where we can check for this case.
[1] https://pve.proxmox.com/mediawiki/index.php?title=Full_Mesh_Network_for_Ceph_Server&oldid=12146
Co-authored-by: Stefan Hanreich <s.hanreich@proxmox.com>
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Link: https://lore.proxmox.com/20250716130837.585796-58-g.goller@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This exposes the new feature that allows VMs to create snapshots
without LVM interfering, but rather let the storage handle it
directly, like QCOW2 on LVM.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
As it's unmaintained upstream and will be dropped from QEMU soon.
See commit 7669a99 ("drop support for using GlusterFS directly") in
the pve-storage git repository for more details.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Mainly add a trailing \n to die invocations to avoid getting an ugly
"at line XY" in the output.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Note that we do greedy matching for CLI parameter names and also allow
using just a single minus, so as of now one can still use shorter
variants like `-i IFACE`.
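As an illustration, greedy option-name prefix matching can be sketched in plain shell (a minimal sketch only; the option names and the `resolve_opt` helper are hypothetical and not the tool's actual parser):

```shell
# Minimal sketch of greedy option-name prefix matching over a small fixed
# set of hypothetical option names. A prefix resolves only if exactly one
# option starts with it, which is why short variants like `-i` keep
# working as long as no second option shares that prefix.
resolve_opt() {
    given=${1#-}; given=${given#-}   # accept both -name and --name
    match=''; count=0
    for full in interface verbose force; do
        case $full in
            "$given"*) match=$full; count=$((count + 1)) ;;
        esac
    done
    if [ "$count" -eq 1 ]; then
        printf '%s\n' "--$match"
    else
        printf 'ambiguous or unknown option: %s\n' "$1" >&2
        return 1
    fi
}
```

With this set, `resolve_opt -i` resolves to `--interface` because no other option starts with `i`; adding a second `i*` option would make the short form ambiguous.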
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Changes to /etc/network/interfaces already get automatically applied
by pvenetcommit. In order to support automatically applying all
configuration files generated by proxmox-network-interface-pinning,
add two additional services that apply the SDN and the firewall
configuration respectively.
If the network configuration gets automatically applied, it makes
sense that the SDN configuration should also get re-applied, since it
relies on the current network configuration for some features (e.g.
SNAT output interface, IS-IS interface, ...).
For the firewall, the configuration file that gets automatically
applied is currently only generated by
proxmox-network-interface-pinning, so anyone not using that tool
should see no effect at all.
They are split into their own one-shot services, since pvenetcommit
needs to run before the network configuration gets loaded and applied
by ifupdown2, but pvesdncommit requires the new network configuration
to be already applied in order to work properly. pvefirewallcommit
requires at least pmxcfs to be up and running, since it reads / writes
configuration files there.
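For reference, such a one-shot unit with the ordering described above could look roughly like the following (a hypothetical sketch; the actual unit names, ordering targets, and the command that applies the pending configuration may differ from what pve-manager ships):

```ini
[Unit]
Description=Commit pending Proxmox VE SDN configuration
# unlike pvenetcommit, this must run after the network configuration
# has been applied and pmxcfs (pve-cluster) is available
After=network-online.target pve-cluster.service
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
# illustrative placeholder for whatever applies the pending SDN config
ExecStart=/usr/bin/pvesh set /cluster/sdn
```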
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250716151815.348161-9-s.hanreich@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
proxmox-network-interface-pinning is a tool for pinning network
interface names. It works by generating a link file in
/usr/local/lib/systemd/network and then updating the following files
by replacing the current name with the pinned name:
* /etc/network/interfaces
* /etc/pve/nodes/nodename/host.fw
* /etc/pve/sdn/controllers.cfg (IS-IS controllers)
In each case the tool creates a pending configuration file that gets
applied on reboot (via pvenetcommit, pvesdncommit and
pvefirewallcommit respectively).
SDN and /e/n/i already have pending configuration files built in, so
the tool writes to them. For the host firewall we introduce a
host.fw.new file that is currently only used by the
proxmox-network-interface-pinning tool, but could be used in the
future for creating pending configurations for the firewall stack.
There are still some places where interface names occur, where we do
not update the configuration:
* /etc/pve/firewall/cluster.fw - This is because we cannot update a
cluster-wide file with the locally-generated mappings. In this case a
warning is printed.
* In the node configuration there is a parameter for wakeonlan that
takes an interface as argument.
Otherwise all occurrences of interfaces or interface lists should be
included.
Example invocations of proxmox-network-interface-pinning:
$ proxmox-network-interface-pinning generate --nic enp1s0
Generates a pinning for enp1s0 (if it doesn't exist already) and
updates the configuration file.
$ proxmox-network-interface-pinning generate
Generates a pinning for all physical interfaces that do not yet have
one.
After rebooting, all pending changes made by the tool should get
automatically applied via the respective systemd one-shot services
(see the following commit).
Currently there is only support for a fixed prefix: 'nic'. This is
because we rely on PHYSICAL_NIC_RE for detecting physical network
interfaces across several places in our codebase. For now, nic has
been added as a valid prefix for NICs in pve-common, so that prefix is
used here.
In order to support custom prefixes, every place in the code relying
on PHYSICAL_NIC_RE (at least) would have to be reworked.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250716151815.348161-8-s.hanreich@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Before the separate proxmox-mail-forward helper was split out into its
own package, pve-manager shipped its own pvemailforward script. Arguably
this is the reason why the code for migrating the path to the helper in
/root/.forward is part of pve-manager's d/postinst script, not
proxmox-mail-forward's.
The pvemailforward -> proxmox-mail-forward migration happened in
pve-manager 7.2-12. Since we don't support skipping major versions (e.g.
7 -> 9), we should be able to drop the migration code for PVE 9.
This also means that proxmox-mail-forward is now fully in charge of the
contents of /root/.forward.
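The dropped migration essentially rewrote the helper path inside /root/.forward. A hedged sketch of that kind of rewrite (the function name and the exact old and new paths are assumptions for illustration, not the removed code):

```shell
# Hypothetical sketch of the removed migration step: rewrite the old
# pvemailforward helper path in a .forward-style file to point at the
# new proxmox-mail-forward binary. Paths are illustrative assumptions.
migrate_forward_file() {
    fwd=$1
    if test -f "$fwd" && grep -q 'pvemailforward' "$fwd"; then
        sed -i 's|/usr/bin/pvemailforward|/usr/bin/proxmox-mail-forward|g' "$fwd"
    fi
}
```

Since the migration shipped in pve-manager 7.2-12 and direct 7 -> 9 upgrades are unsupported, any system reaching PVE 9 has already passed through this rewrite, so dropping it is safe.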
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Link: https://lore.proxmox.com/20250716073759.40752-2-l.wagner@proxmox.com
The 'protected' checkbox in the left column seems to use slightly less
space than the 'mailto' field in the same row in the right column.
The 'mailto' field is only shown when the 'notification-mode' is set to
legacy-sendmail. Due to the differences in size, the UI shifts by a
couple of pixels when the additional field is shown.
As a workaround, the checkbox is padded on the bottom by a tiny amount,
stopping the UI from shifting around when the additional field is
shown.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250624120032.153539-3-l.wagner@proxmox.com
The 'auto' mode does not really add any functionality but only adds
confusion about what it actually does, so it is completely removed from
the UI. It is still supported by the backend, but in the UI it is mapped
to a concrete mode (either notification-system or legacy-sendmail,
depending on whether mailto is set).
The term 'Notification System' is completely dropped from the UI,
instead 'Global Notification Settings' is used.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Michael Köppl <m.koeppl@proxmox.com>
Reviewed-by: Michael Köppl <m.koeppl@proxmox.com>
Link: https://lore.proxmox.com/20250624120032.153539-2-l.wagner@proxmox.com
The notification settings in the 'General' tab were unfortunately a
source of regular confusion for many people. This was primarily due to
the behavior of the 'notification mode'. The notification mode can be
one of the following:
- notification-system: Emit a notification event to the global
notification system, where it can be matched on by notification
matchers and then sent to one or more targets.
- legacy-sendmail: Old-style notifications, where one can directly
enter some email address. The system uses 'sendmail' to
send the notification to the specified address, circumventing
the regular notification stack.
- auto: Use legacy-sendmail if an email is entered and the
notification system if not.
The 'auto' mode was originally intended to ease migration between the
old and the new system. From a user's perspective however, 'auto' is
quite surprising and unintuitive. The UI preselected 'auto' as a
default, which would, as explained above, favor the new notification
stack with the 'mailto' field empty. However, the UI would still invite
the user to enter their email address, which would then entail the
'legacy-sendmail' mode. Some users were led to believe that this email
address would then be used for a configured email target of the new
notification stack. As a consequence, 'auto' is now completely hidden in
the UI.
In the new 'Notifications' tab one can now choose between
( ) Use global notification settings
(x) Use sendmail to send an email
Recipients: [ ]
When: [Always/On Error]
'Recipients' and 'When' are disabled if the first radio box is selected.
The new tab can later also be used to house other controls. For example,
we could display all matchers that could potentially match this backup
job, or maybe even allow to create a new matcher with a pre-populated
match-field rule.
The term 'Notification System' is dropped altogether from the UI. It is not
necessarily clear to a user that this refers to the settings in
Datacenter > Notifications.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Tested-by: Michael Köppl <m.koeppl@proxmox.com>
Reviewed-by: Michael Köppl <m.koeppl@proxmox.com>
Link: https://lore.proxmox.com/20250624120032.153539-1-l.wagner@proxmox.com
Until now, pvestatd broadcast the pve-manager version only once after
the service started. But there are some situations where the local
pmxcfs (pve-cluster) restarts and loses that information; basically
every time we restarted the pmxcfs without restarting pvestatd too.
While updates involving the pve-cluster packages normally correlate
with updates of libpve-cluster-api-perl, which cause a restart of all
perl based daemons through the "pve-api-updates" package trigger, it
still could happen in other situations, like for example, on a cluster
join, or if the pmxcfs has been restarted manually.
By additionally checking if the local kv-store of the pmxcfs has any
version info for the node, we can decide if another broadcast is
necessary.
Therefore after the next run of pvestatd, which normally happens every
10s, we should have the full version info available again.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Link: https://lore.proxmox.com/20250714082358.56826-1-a.lauterer@proxmox.com
[TL: mention packaging triggers]
By enabling the import button for qcow2/vmdk/raw files, and showing a
window with a VMID selector and the disk edit panel.
Change the edit panel so that, when an explicit volume id is given
directly, we don't let the user select one and instead show it in a
display field.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250714122835.3347682-4-d.csapak@proxmox.com
by adding a new 'import' button in the disk tab, which adds the same
input panel as the one we have when doing an 'Import Hard Disk' for an
existing VM.
partially fixes #2424
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250714122835.3347682-3-d.csapak@proxmox.com
from the hardware view of a virtual machine, by adding a new 'Import
Hard Disk' option in the 'Add' menu.
It replaces the standard storage selector with an import storage selector,
the file selector for the image to be imported, and a target storage
selector.
partially fixes #2424
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250714122835.3347682-2-d.csapak@proxmox.com
if test -f "$BETA_SOURCES" && dpkg --compare-versions "$2" 'lt' '9.0.1' && dpkg --compare-versions "$2" 'gt' '9.0~~'; then
printf "\nNOTE: Remove the pve-test repository, which was added during the beta phase.\nYou can (re-)add repositories on the web UI (Node -> Repositories)\n\n"
rm -v "$BETA_SOURCES" || true
fi
if test ! -e /proxmox_install_mode && test -n "$2" && dpkg --compare-versions "$2" 'lt' '8.1.4~'; then
    if test -e /etc/lvm/lvm.conf ; then
        # TODO: remove with PVE 10
        # pass FORCE as we want to ensure the filter for RBDs gets added to our existing one.
        set_lvm_conf 1
    fi
else
    set_lvm_conf
fi
if test -n "$2" && dpkg --compare-versions "$2" 'lt' '8.1.11'; then
    update_ceph_conf
fi
if test -n "$2" && dpkg --compare-versions "$2" 'lt' '9.0.0~15'; then
    printf '\n\nNOTE: Migrating existing RRD metrics data from nodes, storages and virtual guests to new PVE format version - this can take some time!\n\n'
    if ! /usr/libexec/proxmox/proxmox-rrd-migration-tool --migrate; then
        echo "migration failed, see output above for errors and try to migrate existing data manually by running '/usr/libexec/proxmox/proxmox-rrd-migration-tool --migrate'"
    fi
fi
if test -n "$2" && dpkg --compare-versions "$2" 'lt' '9.0.4'; then
    if test -e /etc/lvm/lvm.conf && grep "^\s*thin_check_options" /etc/lvm/lvm.conf | grep -qv -- "--clear-needs-check-flag"; then
        printf '\nNOTE: Detected override for "thin_check_options" without "--clear-needs-check-flag" option in /etc/lvm/lvm.conf\n'
        printf 'Add the option to the override or thin pools with minor issues might not automatically activate anymore!\n\n'
    fi
fi
d="m 15,1045.3621 -2.96596,0 c -0.02269,0 -0.03404,0.012 -0.03404,0.036 l 0,1.928 c 0,0.024 0.01135,0.036 0.03404,0.036 l 2.96596,0 0,1 -3.324113,0 c -0.188017,0 -0.348479,-0.068 -0.481388,-0.2037 C 11.064833,1048.0192 11,1047.8511 11,1047.6542 l 0,-2.5842 c 0,-0.1969 0.06483,-0.3633 0.194499,-0.4991 0.132909,-0.1392 0.293371,-0.2088 0.481388,-0.2088 l 3.324113,0 z"