In commit 405fcbc1e ("pveceph: switch repo sources to modern deb822
format") we switched away from single-line repos, but the later
rebased commit 9c0ac59e0 ("fix #5244 pveceph: install: add new
repository for offline installation") seemingly missed that change and
was not re-tested, thus it still referenced the previous variable name
and the old file ending. As that commit was already pushed, fix it up
with this follow-up.
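For reference, a rough sketch of the two repository formats; the
repository URL, suite and component values here are only illustrative,
not necessarily what the installer writes:

  # old single-line format, e.g. /etc/apt/sources.list.d/ceph.list
  deb http://download.proxmox.com/debian/ceph-squid trixie no-subscription

  # modern deb822 format, e.g. /etc/apt/sources.list.d/ceph.sources
  Types: deb
  URIs: http://download.proxmox.com/debian/ceph-squid
  Suites: trixie
  Components: no-subscription
  Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg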
Fixes: 9c0ac59e0 ("fix #5244 pveceph: install: add new repository for offline installation")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The new 'offline' repository option will not try to configure the Ceph
repositories during installation.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
Link: https://lore.proxmox.com/20250714083838.68483-2-a.lauterer@proxmox.com
by adding a 4th repository option called 'offline'. If set, the Ceph
installation step will not touch the repository configuration.
We add a simple version check to make sure that the latest available
version (the one to be installed) matches the selected major Ceph
version.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Tested-by: Christoph Heiss <c.heiss@proxmox.com>
Link: https://lore.proxmox.com/20250714083838.68483-1-a.lauterer@proxmox.com
The return props now include the programmatically added properties
guest, jobnum, and digest (the latter only being returned by the read
endpoint) in addition to the create schema.
Signed-off-by: Nicolas Frey <n.frey@proxmox.com>
Link: https://lore.proxmox.com/20251002124728.103425-3-n.frey@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The Arch property now includes all officially supported Debian
architectures [0]. These could be extended to include unofficial ones
as well, though I currently don't see a reason to do this.
The CurrentState property now includes all variants according to the
documentation of the AptPkg::Cache package.
Also implemented the suggestion to clone $apt_package_return_props
instead of modifying it in place, which could have resulted in
unwanted behaviour.
[0] https://wiki.debian.org/SupportedArchitectures
Signed-off-by: Nicolas Frey <n.frey@proxmox.com>
Link: https://lore.proxmox.com/20251002124728.103425-2-n.frey@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add the missing 'Reset' action to the VM command menu. All other
actions for managing the VM run state were already present.
This patch does not add a reset option for containers, as they do not
support it and a reboot action is already present for containers.
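For illustration, a minimal sketch of what such a command-menu entry
looks like; the handler wiring, icon and the nodename/vmid variables
are hypothetical, only the status/reset API path is the real endpoint:

  {
      text: gettext('Reset'),
      iconCls: 'fa fa-fw fa-bolt',
      handler: function() {
          Proxmox.Utils.API2Request({
              url: `/nodes/${nodename}/qemu/${vmid}/status/reset`,
              method: 'POST',
              failure: response => Ext.Msg.alert(gettext('Error'), response.htmlStatus),
          });
      },
  },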
Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=4248
Signed-off-by: Amin Vakil <info@aminvakil.com>
Link: https://lore.proxmox.com/mailman.631.1754383164.367.pve-devel@lists.proxmox.com
[FE: small improvements to the commit message]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
After looking at the code, I don't see any issues with using the
createSchema method. Additionally tested by comparing the output of:
- pvesh usage /cluster/acme/plugins --returns
- pvesh get /cluster/acme/plugins
and confirmed that the contents match.
Signed-off-by: Nicolas Frey <n.frey@proxmox.com>
Link: https://lore.proxmox.com/20250924115904.122696-2-n.frey@proxmox.com
[TL: rename variable used for schema for more clarity]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
When using the vncshell or xtermjs based shells we use an SSH tunnel
if the requested node is different from the one the request was made
to. Further, we special case sessions of the root@pam user, as they
are already verified and so do not need to re-login on the target
shell.
With Debian Trixie based releases we had to make a change to get this
shell working again [0], but it seems that there is still a race in
the whole interaction, at least if there is a nested login shell. See
[1] for more details.
The Debian Trixie version of the `login` manual page explicitly
mentions a bug related to this:
> A recursive login, as used to be possible in the good old days, no
> longer works; for most purposes su(1) is a satisfactory substitute.
> Indeed, for security reasons, login does a vhangup(2) system call
> to remove any possible listening processes on the tty. This is to
> avoid password sniffing. If one uses the command login, then the
> surrounding shell gets killed by vhangup(2) because it’s no longer
> the true owner of the tty. This can be avoided by using exec login
> in a top-level shell or xterm.
-- man 1 login
IIRC this was checked back when implementing [0], but as that was
during the quite eventful bootstrapping times of the Debian Trixie
release I'm not 100% certain about that.
If the issues our users sometimes (?) see indeed stem from a race
related to the above bug, we should be able to avoid that by dropping
the explicit login call for root@pam when tunneling. Other @pam users
would still be affected, but as the partial fix is so simple and
correct in any case, it's still worth rolling it out sooner.
To improve this more broadly the following two options seem most
promising:
1. Replace the manual SSH tunnel with our proxy-to-node infrastructure
and tunnel the websocket of the target node's termproxy command
through that.
2. Use a wrapper tool to handle the login command such that it does
not interfere with the outer login; this could potentially even be an
existing one like dtach, or alternatively something we write
ourselves.
I did not yet evaluate impact and work required for either option to
happen, but from an experienced gut feeling the first option would be
the better one, especially as we want to drop SSH for tunneling
completely in the mid to long term.
As this is not a complete and certain fix for #6789 [1] and especially
as we cannot really reproduce this ourselves here in any useful way,
I refrained from adding a fix # to the commit message, but it should
be a partial fix.
[0]: https://git.proxmox.com/?p=pve-xtermjs.git;a=commitdiff;h=7b47cca8368e63c30f6227442570f9f35dd7ccf0
[1]: https://bugzilla.proxmox.com/show_bug.cgi?id=6789
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Rename the not so telling $remcmd to $tunnel_cmd and make the helper
prefix it to the actual command.
This is in preparation for adapting login for proxied (tunneled)
shells, i.e. where the user requests a shell on a node other than the
one they opened the web UI on.
No semantic change intended.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the recent commit
d2660fc7 (ui: resource tree: improve performance on initial update)
changed how we construct the resource tree, namely outside the
treestore. While it mostly worked fine, the standard
`Ext.data.TreeModel` was used. This led to problems with detecting
some things, since we expected all properties that are defined on the
custom `PVETree` model to be present.
To fix this, create an instance of `PVETree` instead.
Note that this might also fix other things that depend on the
PVETree specific properties on the datacenter root node.
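A minimal sketch of the difference, assuming the custom model is
registered under the name 'PVETree' and with illustrative field
values:

  // before: a node built outside the store defaults to Ext.data.TreeModel
  // let root = Ext.create('Ext.data.TreeModel', { id: 'root', expanded: true });

  // after: explicitly instantiate the custom model so its fields and defaults exist
  let root = Ext.create('PVETree', { id: 'root', expanded: true });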
Fixes: d2660fc7 (ui: resource tree: improve performance on initial update)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Michael Köppl <m.koeppl@proxmox.com>
Link: https://lore.proxmox.com/20250912113242.3139402-1-d.csapak@proxmox.com
As this is for the frontend only, and the API would fail due to
getting an unknown property.
Reported-by: Alexander Zeidler <a.zeidler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
SDN entities return their name in the sdn property. Add this property
to the schema so it is shown in the documentation, as well as for
generating proper types in proxmox-api-types.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Link: https://lore.proxmox.com/20250909155423.526917-2-s.hanreich@proxmox.com
Quite a few were unused, while a few others were missing and only
worked because other modules had already loaded them.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This unifies it with other recent changes: 75% often really is not a
problematic usage, and while 80% isn't always problematic either, it
is in practice closer to a load that might need to be checked out.
Use sites can now override this to whatever makes the most sense for
their semantics.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Use the same thresholds for warning and critical as we do now for the
node status overview, see commit 6df3a71bd ("ui: node status: increase
warning/critical threshold percentage for memory usage") for details.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
With a similar rationale as commit f70ea8f7c ("ui: node status:
increase warning/critical threshold percentage for memory usage"),
i.e. memory is there to be used and leveraging most of what's
configured is an OK thing to do, there is no need to show a warning
status already at 75% usage.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Memory is there to be used, and unlike with block/file storage it can
be fine, even desired, to use all memory besides a bit of headroom.
This is because memory is easy to defragment (it's all virtual
addresses anyway) and some usage is also easy to evict (e.g. to swap
if not really used at the moment, or because it is only a cache like
the ZFS ARC).
Without the override the thresholds were 75% for warning and 90% for
showing a critical status. For a host with 396 GiB of installed memory
that meant we already warned at 297 GiB used (99 GiB still available!)
and showed a critical status with ~356 GiB in use (~40 GiB free); both
can still be very OK and warranted usages though.
So increase the thresholds to 90% for warning and 97.5% for critical
usage displays, which provides a less "scare-mongering" display.
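For reference, the thresholds for the 396 GiB example above work out
as follows (plain arithmetic, just for illustration):

  const totalGiB = 396;
  const oldWarn = totalGiB * 0.75;  // 297 GiB  -> old warning threshold
  const newWarn = totalGiB * 0.9;   // ~356 GiB -> old critical / new warning threshold
  const newCrit = totalGiB * 0.975; // ~386 GiB -> new critical threshold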
Ideally we'd support a callback to make a better decision, as clamping
on totals might be even better, but this simple change already is a
big improvement. Add a comment that we might want to split out the ARC
into its own custom bar (I have a prototype around, but that needs
polishing and in-depth review).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This avoids a "compile" check error (i.e. perl -wc) if no other module
that pulls in that dependency is loaded already.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This allows one to query the metrics stats from a specific set of
nodes. This can, e.g., help with batch querying the stats in bigger
clusters; for example, in a 15-node cluster one could do 3 concurrent
requests covering 5 nodes each.
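A rough sketch of what such client-side batching could look like; the
queryMetrics() helper and its nodes parameter are hypothetical, only
meant to illustrate the idea:

  // split a 15-node cluster into batches of 5 and query them concurrently
  const nodes = ['node01', 'node02', /* ... */ 'node15'];
  const batches = [];
  for (let i = 0; i < nodes.length; i += 5) {
      batches.push(nodes.slice(i, i + 5));
  }
  const results = await Promise.all(
      batches.map(batch => queryMetrics({ nodes: batch })),
  );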
Doing so can especially help on clusters with many virtual guests, as
there the time required to gather all stats quickly adds up, even more
so if the time window is long. We currently have a 30s overall timeout
here due to being proxied to the privileged API daemon, as we cannot
reuse the API ticket and need to generate a new one.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
5s is rather short, especially if one has many guests on a node and
queries a (relatively) long period; e.g., for a node with 4000 guests,
querying the last 10 minutes of data needs a bit over 8s on a test
cluster of mine.
So increase the timeout from 5s to 20s as a stop-gap. Note that due to
being a protected API call we are limited to 30s total by the proxy
timeout, which is something that might be revisited but is out of the
scope of this change, especially as it would need to be changed in the
pve-http-server git repo anyway.
There are other ideas floating around for making this more reliable,
like making this async here (AnyEvent or a dedicated executable),
allowing the requester to pass a node list to allow batching the
queries, and so on.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Makes it easier to find specific guests when one has many of them.
I decided not to allow filtering on the current pool name, as often
there is none set for the guest the user wants to add, and one could
theoretically sort the current-pool column to group guests by it.
Finally, we can still add that easily if it gets requested with a good
enough rationale.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It was rather small, i.e. always too narrow to see the column values
for somewhat normal name lengths, and for more than a handful of
guests the height was rather short too.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The missing `deleteEmpty` attribute caused the fabric property to
never be removed from the EVPN controller. So when e.g. removing the
fabric to add the peers manually, there would always be an error,
because you can't have peers *and* fabrics configured.
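A minimal sketch of the kind of field configuration involved; the
xtype and surrounding options are hypothetical, deleteEmpty is the
relevant bit:

  {
      xtype: 'pveFabricSelector', // hypothetical selector xtype
      name: 'fabric',
      allowBlank: true,
      deleteEmpty: true, // ask the API to delete the property when the field is cleared
  },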
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Link: https://lore.proxmox.com/20250821085015.29401-1-g.goller@proxmox.com
This will make pveceph pool ls report the 'Used' column specifying its
units:
$ pveceph pool ls --noborder
... Used
... 2.07 MiB
... 108.61 KiB
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Tested-by: Shannon Sterz <s.sterz@proxmox.com>
Reviewed-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250829094401.223667-2-m.sandoval@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This change ensures that the storage units are displayed consistently
between the graph and the usage label right above it.
When setting the unit to `bytes` the graph will now, for example, show
"130 GB" instead of "130 G", which matches the usage displayed above
and removes any ambiguity about whether "G" refers to GiB or GB.
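A minimal sketch of the relevant chart configuration; the surrounding
options like title and fields are illustrative:

  {
      xtype: 'proxmoxRRDChart',
      title: gettext('Usage'),
      fields: ['total', 'used'],
      unit: 'bytes', // render e.g. "130 GB" instead of the bare "130 G"
  },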
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Tested-by: Shannon Sterz <s.sterz@proxmox.com>
Reviewed-by: Shannon Sterz <s.sterz@proxmox.com>
Link: https://lore.proxmox.com/20250829094401.223667-1-m.sandoval@proxmox.com
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This was displayed wrongly in the web UI and when calling
pvesh get /nodes/localhost/apt/changelog
One example of such a package was bind9-dnsutils, where the character
`ř` was rendered as `Å` (the UTF-8 encoding of `ř` is the byte pair
0xC5 0x99; interpreted as ISO-8859-1, the first byte shows up as `Å`
and the second is an unprintable control character).
Reported-by: Lukas Wagner <l.wagner@proxmox.com>
Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
Link: https://lore.proxmox.com/20250902145345.500823-1-m.sandoval@proxmox.com
Some backslashes were not indented correctly and some used spaces
instead of tabs. Most were introduced in the recent fabrics series
(29ebe4e8d4 ("ui: fabrics: add model definitions for fabrics") and
following).
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
Link: https://lore.proxmox.com/20250905125435.231976-1-g.goller@proxmox.com
There are quite a few kebab-cased properties in the QEMU CPU config,
such as phys-bits and guest-phys-bits. These are currently not exposed
through the web interface, but only via the command line.
If such a QEMU CPU config is parsed, parsing returns undefined with an
error and breaks the ProcessorEdit component, so that changes cannot
be submitted anymore.
Fix that by allowing kebab-cased properties as well.
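A minimal sketch of the kind of change involved (illustrative regex,
not the exact patch):

  // before: only word characters were accepted in property keys
  // const match = prop.match(/^([a-zA-Z_][a-zA-Z0-9_]*)=(.+)$/);

  // after: also accept kebab-cased keys like 'phys-bits' or 'guest-phys-bits'
  const match = prop.match(/^([a-zA-Z_][a-zA-Z0-9_-]*)=(.+)$/);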
Signed-off-by: Daniel Kral <d.kral@proxmox.com>
Link: https://lore.proxmox.com/20250905142249.219371-1-d.kral@proxmox.com
[TL: drop unnecessary escape]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
when trying to detect changes of resources, we compare a list of
properties of the existing nodes in the tree with the ones we got from
the api call, so that we can update only those that changed.
One of these properties is the 'text' one, which is calculated from e.g.
the vmid and name (or the name and host, depending on the type).
Sadly, when inserting/updating the node, we modified the text property
in every case, at least adding a '<span></span>' around the existing
text. This meant that every resource was updated every time instead of
only when something changed.
To fix this, remove the 'text' property from the checked ones, and
instead add all the properties that are used to compose the text one.
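A minimal sketch of the resulting comparison; the property and
variable names here are illustrative:

  // compare only the raw properties the rendered text is derived from,
  // not the (always re-wrapped) 'text' itself
  const comparedProps = ['vmid', 'name', 'node', 'type', 'status'];
  const changed = comparedProps.some(prop => treeNode.data[prop] !== apiItem[prop]);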
This reduces the time of updateTree in my test-setup (~10000 guests)
when nothing changed from ~100ms to ~15ms and reduces scroll stutter
during such an update.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250905120627.2585826-5-d.csapak@proxmox.com
When we insert nodes into the tree, we use 'insertBefore' of ExtJS'
NodeInterface. When the node is inside a TreeStore, it calls
'registerNode' to handle some events and accounting. Sadly it does so
not only for the inserted node, but also for the node into which it is
inserted, and that call runs 'registerNode' again for all of its
children.
So inserting a large number of guests under a node this way results in
(at least) O(n^2) calls to registerNode.
To work around this, create the initial tree node structure outside
the TreeStore and add it at the end. Further insertions are more
likely to only come in small numbers. (Still have to look into whether
we can avoid that behavior there too.)
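A minimal sketch of the pattern; the model name, variables and the
setRoot() call are illustrative, not the actual patch:

  // build the whole subtree while it is still detached from the TreeStore ...
  let rootNode = Ext.create('PVETree', { id: 'root', expanded: true });
  guests.forEach(info => rootNode.appendChild(Ext.create('PVETree', info)));

  // ... and only attach it once finished, so registerNode is not re-run
  // for all existing children on every single insert
  store.setRoot(rootNode);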
This improves the time spent in 'registerNode' (in my ~10000 guests
test setup) from 4,081.6 ms to about 2.7 ms.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Link: https://lore.proxmox.com/20250905120627.2585826-4-d.csapak@proxmox.com