Compare commits

10 commits

Thomas Lamprecht
98188b1ae8 migrations: lvm autoactivation: setup CLI environment
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-11 13:52:49 +02:00
Mira Limbeck
dab5bbb371 pve8to9: fix log code ref
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Link: https://lore.proxmox.com/20250711101607.106298-1-m.limbeck@proxmox.com
2025-07-11 13:47:08 +02:00
Lukas Wagner
0640648bcd ui: backup job details: show notification-mode instead of legacy keys
The backup job details view was never updated after the overhaul of the
notification system. In this commit we remove the left-over
notification-policy/target handling and change the view so that we
display the current configuration based on notification-mode, mailto and
mailnotification.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Link: https://lore.proxmox.com/20250709081432.91868-3-l.wagner@proxmox.com
2025-07-11 11:31:50 +02:00
Lukas Wagner
df147ed0ff ui: remove handling of obsolete notification-policy/target settings
These were only used in the 'old', revamped notification stack that was
briefly available on pvetest. With PVE 9 we can finally get rid of
these completely.

Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Michael Köppl <m.koeppl@proxmox.com>
Tested-by: Michael Köppl <m.koeppl@proxmox.com>
Link: https://lore.proxmox.com/20250709081432.91868-2-l.wagner@proxmox.com
2025-07-11 11:31:37 +02:00
Thomas Lamprecht
68cd0f25db 8to9 checks: fix minimum required version for proxmox-ve meta package
As we use native versioning for that package, we need to use 0 as the
minimum version, i.e. the first bump for 8.4 used the version 8.4.0,
whereas for 7.4 we used 7.4-1, as Debian revisions (the part after the
minus) must start at 1.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-09 18:00:01 +02:00
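To illustrate the check above: pve8to9 keeps the version floor as a (major, minor, pkgrel) tuple, so with native versioning the first 8.4 build, 8.4.0, must already pass. A minimal sketch of such a tuple comparison (helper name hypothetical, not the actual pve8to9 code):

    my ($min_major, $min_minor, $min_pkgrel) = (8, 4, 0);

    # <=> yields -1/0/1, so the chained comparison checks the tuple in order
    sub version_is_new_enough {
        my ($major, $minor, $pkgrel) = @_;
        return ($major <=> $min_major
            || $minor <=> $min_minor
            || $pkgrel <=> $min_pkgrel) >= 0;
    }

    print version_is_new_enough(8, 4, 0) ? "ok\n" : "too old\n"; # ok with floor 8.4.0
    print version_is_new_enough(8, 4, 1) ? "ok\n" : "too old\n"; # ok
    # with the old floor of (8, 4, 1), the native 8.4.0 would have wrongly failed
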
Thomas Lamprecht
916741f926 8to9 checks: only warn if a shared storage has LVs with autoactivation
Otherwise just output a notice, as it might still be nice for the
user to clean this up, but it isn't really important in the local
case.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-09 17:55:45 +02:00
Friedrich Weber
ebed71c822 pve8to9: check for LVM autoactivation and provide migration script
Starting with PVE 9, the LVM and LVM-thin plugins create new LVs with
the `--setautoactivation n` flag to fix #4997 [1]. However, this does
not affect already existing LVs of setups upgrading from PVE 8.

Hence, add a new script under /usr/share/pve-manager/migrations that
finds guest volume LVs with autoactivation enabled on enabled and
active LVM and LVM-thin storages, and disables autoactivation for each
of them. Before doing so, ask the user for confirmation for each
storage. For non-interactive use cases, the user can pass a flag to
assume confirmation. Disabling autoactivation via lvchange updates the
LVM metadata, hence, the storage lock is acquired before proceeding.
To avoid holding the lock for too long and blocking other consumers,
the lvchange calls are batched and the lock is released between two
batches. Afterwards, check for remaining guest volume LVs with
autoactivation enabled (these may have been created while the lock was
not held). If there are still such LVs left, or if other errors
occurred, suggest that the user run the command again.

Also, check for guest volume LVs with autoactivation enabled in
pve8to9, and suggest running the migration script if necessary. The
check is done without a lock, as taking a lock does not have
advantages here.

While some users may have disabled autoactivation on the VG level to
work around #4997, consciously do not take this into account:
Disabling autoactivation for LVs too does not hurt, and avoids issues
in case the user decides to re-enable autoactivation on the VG level
in the future.

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Link: https://lore.proxmox.com/20250709141034.169726-4-f.weber@proxmox.com
2025-07-09 17:01:49 +02:00
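For context, auto-activation can be disabled at two levels. The per-LV form is what the migration script applies (as seen in the patch below); the per-VG form is the user-side workaround mentioned in the message and is shown here only for illustration:

    # per logical volume, as done by the migration script:
    lvchange --setautoactivation n <vgname>/<lvname>

    # per volume group, the manual workaround some users applied for #4997:
    vgchange --setautoactivation n <vgname>
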
Thomas Lamprecht
405fcbc1e1 pveceph: switch repo sources to modern deb822 format
The single-line format is deprecated with Debian Trixie.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-08 16:41:05 +02:00
Thomas Lamprecht
84b22751f2 ui: ceph: make squid the default release
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-08 16:01:16 +02:00
Thomas Lamprecht
194b22e27a ui: ceph: add version mapping for future tentacle 20 release
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2025-07-08 16:01:16 +02:00
7 changed files with 325 additions and 39 deletions

View file

@@ -23,7 +23,7 @@ use PVE::NodeConfig;
use PVE::RPCEnvironment;
use PVE::Storage;
use PVE::Storage::Plugin;
use PVE::Tools qw(run_command split_list file_get_contents);
use PVE::Tools qw(run_command split_list file_get_contents trim);
use PVE::QemuConfig;
use PVE::QemuServer;
use PVE::VZDump::Common;
@@ -54,7 +54,7 @@ my $older_suites = {
jessie => 1,
};
my ($min_pve_major, $min_pve_minor, $min_pve_pkgrel) = (8, 4, 1);
my ($min_pve_major, $min_pve_minor, $min_pve_pkgrel) = (8, 4, 0);
my $ceph_release2code = {
'12' => 'Luminous',
@@ -1485,6 +1485,102 @@ sub check_legacy_backup_job_options {
}
}
sub query_autoactivated_lvm_guest_volumes {
my ($cfg, $storeid, $vgname) = @_;
my $cmd = [
'/sbin/lvs',
'--separator',
':',
'--noheadings',
'--unbuffered',
'--options',
"lv_name,autoactivation",
$vgname,
];
my $autoactivated_lvs;
eval {
run_command(
$cmd,
outfunc => sub {
my $line = shift;
$line = trim($line);
my ($name, $autoactivation_flag) = split(':', $line);
return if !$name;
$autoactivated_lvs->{$name} = $autoactivation_flag eq 'enabled';
},
);
};
die "could not list LVM logical volumes: $@\n" if $@;
my $vollist = PVE::Storage::volume_list($cfg, $storeid);
my $autoactivated_guest_lvs = [];
for my $volinfo (@$vollist) {
my $volname = (PVE::Storage::parse_volume_id($volinfo->{volid}))[1];
push @$autoactivated_guest_lvs, $volname if $autoactivated_lvs->{$volname};
}
return $autoactivated_guest_lvs;
}
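# Illustrative assumption, not part of the patch: with the options above,
# `lvs` emits one line per LV such as "vm-100-disk-0:enabled" (flag set) or
# "vm-100-disk-0:" (flag unset), so after trim() and split(':') only LVs
# whose flag reads 'enabled' are recorded in $autoactivated_lvs.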
sub check_lvm_autoactivation {
my $cfg = PVE::Storage::config();
my $storage_info = PVE::Storage::storage_info($cfg);
log_info("Check for LVM autoactivation settings on LVM and LVM-thin storages...");
my ($needs_fix, $shared_affected) = (0, 0);
for my $storeid (sort keys %$storage_info) {
my $scfg = PVE::Storage::storage_config($cfg, $storeid);
my $type = $scfg->{type};
next if $type ne 'lvm' && $type ne 'lvmthin';
my $vgname = $scfg->{vgname};
die "unexpected empty VG name (storage '$storeid')\n" if !$vgname;
my $info = $storage_info->{$storeid};
if (!$info->{enabled} || !$info->{active}) {
log_skip("storage '$storeid' ($type) is disabled or inactive");
next;
}
my $autoactivated_guest_lvs =
query_autoactivated_lvm_guest_volumes($cfg, $storeid, $vgname);
if (scalar(@$autoactivated_guest_lvs) > 0) {
log_notice("storage '$storeid' has guest volumes with auto-activation enabled");
$needs_fix = 1;
$shared_affected = 1 if $info->{shared};
} else {
log_pass("all guest volumes on storage '$storeid' have auto-activation disabled");
}
}
if ($needs_fix) {
# only warn if shared storages are affected, for local ones this is mostly cosmetic.
my $_log = $shared_affected ? \&log_warn : \&log_notice;
my $extra =
$shared_affected
? "Some affected volumes are on shared LVM storages, where auto-activation has known"
. " issues (Bugzilla #4997). Disabling auto-activation for those is strongly recommended!"
: "All volumes with auto-activation reside on local storage, where this normally does"
. " not cause any issues.";
$_log->(
"Starting with PVE 9, auto-activation will be disabled for new LVM/LVM-thin guest"
. " volumes. This system has some volumes that still have auto-activation enabled. "
. "$extra\nYou can run the following command to disable auto-activation for existing"
. "LVM/LVM-thin "
. "guest volumes:" . "\n\n"
. "\t/usr/share/pve-manager/migrations/pve-lvm-disable-autoactivation"
. "\n");
}
return undef;
}
sub check_misc {
print_header("MISCELLANEOUS CHECKS");
my $ssh_config = eval { PVE::Tools::file_get_contents('/root/.ssh/config') };
@@ -1596,6 +1692,7 @@ sub check_misc {
check_dkms_modules();
check_legacy_notification_sections();
check_legacy_backup_job_options();
check_lvm_autoactivation();
}
my sub colored_if {

View file

@@ -180,7 +180,13 @@ __PACKAGE__->register_method({
die "unsupported ceph version: $cephver"
if !exists($available_ceph_releases->{$cephver});
my $repolist = "deb ${cdn}/debian/ceph-${cephver} bookworm $repo\n";
my $repo_source = <<"EOF";
Types: deb
URIs: ${cdn}/debian/ceph-${cephver}
Suites: trixie
Components: ${repo}
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
EOF
my $rendered_release =
$available_ceph_releases->{$cephver}->{release} . ' ' . ucfirst($cephver);
@@ -193,8 +199,7 @@ __PACKAGE__->register_method({
die "Aborting installation as requested\n" if !$continue;
}
# FIXME: change to deb822 sources format, use proxmox-apt via perlmod here?
PVE::Tools::file_set_contents("/etc/apt/sources.list.d/ceph.list", $repolist);
PVE::Tools::file_set_contents("/etc/apt/sources.list.d/ceph.sources", $repo_source);
if ($available_ceph_releases->{$cephver}->{unsupported}) {
if ($param->{'allow-experimental'}) {
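
For reference, a sketch of what the heredoc above writes to /etc/apt/sources.list.d/ceph.sources; the concrete URI and component are illustrative assumptions for a squid/no-subscription selection:

    Types: deb
    URIs: http://download.proxmox.com/debian/ceph-squid
    Suites: trixie
    Components: no-subscription
    Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg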

View file

@@ -30,6 +30,9 @@ HELPERS = \
pve-startall-delay \
pve-init-ceph-crash
MIGRATIONS = \
pve-lvm-disable-autoactivation
SERVICE_MANS = $(addsuffix .8, $(SERVICES))
CLI_MANS = \
@@ -83,7 +86,7 @@ pvereport.1.pod: pvereport
.PHONY: tidy
tidy:
echo $(SCRIPTS) $(HELPERS) | xargs -n4 -P0 proxmox-perltidy
echo $(SCRIPTS) $(HELPERS) $(MIGRATIONS) | xargs -n4 -P0 proxmox-perltidy
.PHONY: check
check: $(addsuffix .service-api-verified, $(SERVICES)) $(addsuffix .api-verified, $(CLITOOLS))
@@ -95,6 +98,8 @@ install: $(SCRIPTS) $(CLI_MANS) $(SERVICE_MANS) $(BASH_COMPLETIONS) $(ZSH_COMPLE
install -m 0755 $(SCRIPTS) $(BINDIR)
install -d $(USRSHARE)/helpers
install -m 0755 $(HELPERS) $(USRSHARE)/helpers
install -d $(USRSHARE)/migrations
install -m 0755 $(MIGRATIONS) $(USRSHARE)/migrations
install -d $(MAN1DIR)
install -m 0644 $(CLI_MANS) $(MAN1DIR)
install -d $(MAN8DIR)

View file

@@ -0,0 +1,178 @@
#!/usr/bin/perl
use strict;
use warnings;
use Getopt::Long;
use Term::ANSIColor;
use PVE::CLI::pve8to9;
use PVE::RPCEnvironment;
use PVE::Tools qw(run_command);
my $is_tty = (-t STDOUT);
my $level2color = {
pass => 'green',
warn => 'yellow',
fail => 'bold red',
};
my $log_line = sub {
my ($level, $line) = @_;
my $color = $level2color->{$level} // '';
print color($color) if $is_tty && $color && $color ne '';
print uc($level), ': ' if defined($level);
print "$line\n";
print color('reset') if $is_tty;
};
sub log_pass { $log_line->('pass', @_); }
sub log_info { $log_line->('info', @_); }
sub log_warn { $log_line->('warn', @_); }
sub log_fail { $log_line->('fail', @_); }
sub main {
my $assume_yes = 0;
if (!GetOptions('assume-yes|y', \$assume_yes)) {
print "USAGE $0 [ --assume-yes | -y ]\n";
exit(-1);
}
PVE::RPCEnvironment->setup_default_cli_env();
my $cfg = PVE::Storage::config();
my $storage_info = PVE::Storage::storage_info($cfg);
my $got_error = 0;
my $skipped_storage = 0;
my $still_found_autoactivated_lvs = 0;
my $did_work = 0;
log_info("Starting with PVE 9, autoactivation will be disabled for new LVM/LVM-thin guest "
. "volumes. This script disables autoactivation for existing LVM/LVM-thin guest volumes."
);
for my $storeid (sort keys %$storage_info) {
eval {
my $scfg = PVE::Storage::storage_config($cfg, $storeid);
my $type = $scfg->{type};
return if $type ne 'lvm' && $type ne 'lvmthin';
my $info = $storage_info->{$storeid};
if (!$info->{enabled} || !$info->{active}) {
log_info("storage '$storeid' ($type) is disabled or inactive");
return;
}
my $vgname = $scfg->{vgname};
die "unexpected empty VG name (storage '$storeid')\n" if !$vgname;
my $autoactivated_lvs =
PVE::CLI::pve8to9::query_autoactivated_lvm_guest_volumes($cfg, $storeid, $vgname);
if (scalar(@$autoactivated_lvs) == 0) {
log_pass("all guest volumes on storage '$storeid' have "
. "autoactivation disabled");
return;
}
print "Disable autoactivation on "
. scalar(@$autoactivated_lvs)
. " guest volumes on storage '$storeid'? (y/N)? ";
my $continue;
if ($assume_yes) {
print "Assuming 'yes' because '--assume-yes' was passed.\n";
$continue = 1;
} elsif ($is_tty) {
my $answer = <STDIN>;
$continue = defined($answer) && $answer =~ m/^\s*y(?:es)?\s*$/i;
} else {
print "Assuming 'no'. Pass '--assume-yes' to always assume 'yes'.\n";
$continue = 0;
}
if (!$continue) {
$skipped_storage = 1;
log_warn("Skipping '$storeid' as requested ...\n");
return;
}
# in order to avoid holding the lock for too long at a time, update LVs in batches of 32
# and release the lock in between
my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
while (@$autoactivated_lvs) {
my @current_lvs = splice @$autoactivated_lvs, 0, 32;
$plugin->cluster_lock_storage(
$storeid,
$scfg->{shared},
undef,
sub {
for my $lvname (@current_lvs) {
log_info("disabling autoactivation for $vgname/$lvname on storage "
. "'$storeid'...");
my $cmd = [
'/sbin/lvchange', '--setautoactivation', 'n', "$vgname/$lvname",
];
eval { run_command($cmd); };
my $err = $@;
die "could not disable autoactivation for $vgname/$lvname: $err\n"
if $err;
$did_work = 1;
}
},
);
}
# new LVs might have been created in the meantime while the lock was not held
my $still_autoactivated_lvs =
PVE::CLI::pve8to9::query_autoactivated_lvm_guest_volumes($cfg, $storeid, $vgname);
if (scalar(@$still_autoactivated_lvs) == 0) {
log_pass("all guest volumes on storage '$storeid' now have "
. "autoactivation disabled");
} else {
$still_found_autoactivated_lvs = 1;
log_warn("some guest volumes on storage '$storeid' still have "
. "autoactivation enabled");
}
};
my $err = $@;
if ($err) {
$got_error = 1;
log_fail("could not disable autoactivation on enabled storage '$storeid': $err");
}
}
my $exit_code;
if ($got_error) {
$exit_code = 1;
log_fail("at least one error was encountered");
} elsif ($skipped_storage) {
$exit_code = 1;
log_warn("at least one enabled and active LVM/LVM-thin storage was skipped. "
. "Please run this script again!");
} elsif ($still_found_autoactivated_lvs) {
$exit_code = 1;
log_warn("some guest volumes still have autoactivation enabled. "
. "Please run this script again!");
} elsif ($did_work) {
$exit_code = 0;
log_pass("successfully disabled autoactivation for existing guest volumes on all "
. "enabled and active LVM/LVM-thin storages.");
} else {
$exit_code = 0;
log_pass("all existing guest volumes on enabled and active LVM/LVM-thin storages "
. "already have autoactivation disabled.");
}
exit($exit_code);
}
main();
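
Taken together with the Makefile hunk above, the script lands in /usr/share/pve-manager/migrations. Expected invocations, derived from the GetOptions call (output will vary per system):

    # interactive: asks for confirmation per storage before changing anything
    /usr/share/pve-manager/migrations/pve-lvm-disable-autoactivation

    # non-interactive: assume 'yes' for every storage prompt
    /usr/share/pve-manager/migrations/pve-lvm-disable-autoactivation --assume-yes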

View file

@@ -105,6 +105,7 @@ Ext.define('PVE.ceph.CephHighestVersionDisplay', {
17: 'quincy',
18: 'reef',
19: 'squid',
20: 'tentacle',
};
let release = major2release[maxversion[0]] || 'unknown';
let newestVersionTxt = `${Ext.String.capitalize(release)} (${maxversiontext})`;
@@ -142,7 +143,7 @@ Ext.define('PVE.ceph.CephInstallWizard', {
viewModel: {
data: {
nodename: '',
cephRelease: 'reef', // default
cephRelease: 'squid', // default
cephRepo: 'enterprise',
configuration: true,
isInstalled: false,

View file

@@ -37,14 +37,6 @@ Ext.define('PVE.dc.BackupEdit', {
delete values.node;
}
// Get rid of new-old parameters for notification settings.
// These should only be set for those selected few who ran
// pve-manager from pvetest.
if (!isCreate) {
Proxmox.Utils.assemble_field_data(values, { delete: 'notification-policy' });
Proxmox.Utils.assemble_field_data(values, { delete: 'notification-target' });
}
let selMode = values.selMode;
delete values.selMode;
@@ -158,14 +150,6 @@
let me = this;
let viewModel = me.getViewModel();
// Migrate 'new'-old notification-policy back to old-old mailnotification.
// Only should affect users who used pve-manager from pvetest. This was a remnant of
// notifications before the overhaul.
let policy = data['notification-policy'];
if (policy === 'always' || policy === 'failure') {
data.mailnotification = policy;
}
if (data.exclude) {
data.vmid = data.exclude;
data.selMode = 'exclude';

View file

@@ -165,6 +165,7 @@ Ext.define('PVE.dc.BackupInfo', {
viewModel: {
data: {
retentionType: 'none',
hideRecipients: true,
},
formulas: {
hasRetention: (get) => get('retentionType') !== 'none',
@@ -206,28 +207,37 @@
     column2: [
         {
             xtype: 'displayfield',
-            name: 'notification-policy',
+            name: 'notification-mode',
             fieldLabel: gettext('Notification'),
             renderer: function (value) {
+                value = value ?? 'auto';
                 let record = this.up('pveBackupInfo')?.record;
+                let mailto = record?.mailto;
+                let mailnotification = record?.mailnotification ?? 'always';
-                // Fall back to old value, in case this option is not migrated yet.
-                let policy = value || record?.mailnotification || 'always';
-                let when = gettext('Always');
-                if (policy === 'failure') {
-                    when = gettext('On failure only');
-                } else if (policy === 'never') {
-                    when = gettext('Never');
+                if ((value === 'auto' && mailto === undefined) || (value === 'notification-system')) {
+                    return gettext('Use global notification settings');
+                } else if (mailnotification === 'always') {
+                    return gettext('Always send email');
+                } else {
+                    return gettext('Send email on failure');
                 }
             },
         },
+        {
+            xtype: 'displayfield',
+            name: 'mailto',
+            fieldLabel: gettext('Recipients'),
+            hidden: true,
+            bind: {
+                hidden: '{hideRecipients}',
+            },
+            renderer: function (value) {
+                if (!value) {
+                    return gettext('No recipients configured');
+                }
-                // Notification-target takes precedence
-                let target =
-                    record?.['notification-target'] ||
-                    record?.mailto ||
-                    gettext('No target configured');
-                return `${when} (${target})`;
+                return value;
+            },
+        },
{
@@ -382,6 +392,12 @@ Ext.define('PVE.dc.BackupInfo', {
vm.set('retentionType', 'none');
}
let notificationMode = values['notification-mode'] ?? 'auto';
let mailto = values.mailto;
let hideRecipients = (notificationMode === 'auto' && mailto === undefined) || (notificationMode === 'notification-system');
vm.set('hideRecipients', hideRecipients);
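// Illustrative truth table for the formula above (assumption, not part of the patch):
//   notification-mode       mailto   -> Recipients row
//   'auto'                  unset    -> hidden (global notification settings are used)
//   'auto'                  set      -> shown
//   'legacy-sendmail'       any      -> shown
//   'notification-system'   any      -> hidden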
// selection Mode depends on the presence/absence of several keys
let selModeField = me.query('[isFormField][name=selMode]')[0];
let selMode = 'none';