Ravada-Mirror
Commit ff2cb6c6 authored Nov 25, 2020 by Francesc Guasch
Merge branch 'develop' of https://github.com/upc/ravada into develop
Parents: 9bf377d3, cb6581c7
Changes: 57 files
CHANGELOG.md
# Change Log

**Implemented enhancements:**

- Improve set date #1279
- Slow response on admin machines page #1276
- Launch virtual machine in same node if possible #1274
- Slow start when nodes down #1265
- Cope with base file missing on node #1264
- Polished action buttons in main screen [\#1408]
- Polished shutdown clones buttons [\#1406]
- Add debug option [\#1402]
- Loading icon [\#1401]
- Feature migrate machine [\#1393]

**Bugfixes**

- LDAP access settings broken #1277
- Wrong display ip on nodes #1262
- Do not use forced display ip on nodes #1261
- Ubuntu 20.04 MD5SUM file gone [\#1420]
- Fix docker files config and dependencies [\#1411]
- Fix docker files for tzdata package compatibility [\#1376]
- Fix sync clones back [\#1403]
- Fix nodes nat [\#1400]
README.md
# ravada
Badges (images not rendered): [releases](https://github.com/UPC/ravada/releases) · [license](https://github.com/UPC/ravada/blob/master/LICENSE) · [documentation](http://ravada.readthedocs.io/en/latest/?badge=latest) · [Twitter](https://twitter.com/ravada_vdi) · [Telegram](https://t.me/ravadavdi) · [project status](http://www.repostatus.org/#active) · [translations on Weblate](https://hosted.weblate.org/engage/ravada/) · [Conventional Commits](https://conventionalcommits.org)

<sup>**Frontend:**</sup>
<!-- [](https://hub.docker.com/r/ravada/front/) -->
...
...
SECURITY.md (new file, 0 → 100644)
# RavadaVDI Security
We take security very seriously. We welcome any peer review of our 100% open source code to ensure nobody's Ravada is ever compromised or hacked.
## Reporting a Vulnerability
So, you think you found a vulnerability? Well, please let us know!
Please open up an [issue][1] and try to provide as much information as possible.

[1]: https://github.com/UPC/ravada/issues/new?assignees=&labels=&template=bug_report.md&title=
etc/rvd_front.conf.example
...
...
@@ -37,4 +37,7 @@
     ,file => '/var/log/ravada/rvd_front.log'
     ,level => 'debug'
 }
+# Insert widget in /js/custom/insert_here_widget.js
+# this widget embed js in templates/bootstrap/scripts.html.ep
+,widget => ''
 };
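As a reading aid, here is a minimal sketch of how the tail of a working rvd_front.conf could look once the new option is filled in. It is illustrative only: the enclosing log key mirrors the example file, and the widget path is simply the file named in the comment above, not a value shipped in this commit.

    {
        # frontend logging, as in the example file
        log => {
            file  => '/var/log/ravada/rvd_front.log'
            ,level => 'debug'
        }
        # assumed value: point the new widget option at the custom JS file the
        # comment above refers to, so it gets embedded by scripts.html.ep
        ,widget => '/js/custom/insert_here_widget.js'
    };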
lib/Ravada.pm
...
...
@@ -3,9 +3,9 @@ package Ravada;
 use warnings;
 use strict;
-our $VERSION = '0.10.0';
+our $VERSION = '0.11.0';
-use Carp qw(carp croak);
+use Carp qw(carp croak cluck);
 use Data::Dumper;
 use DBIx::Connector;
 use File::Copy;
...
...
@@ -83,6 +83,7 @@ $FILE_CONFIG = undef if ! -e $FILE_CONFIG;
 our $CONNECTOR;
 our $CONFIG = {};
+our $FORCE_DEBUG = 0;
 our $DEBUG;
 our $VERBOSE;
 our $CAN_FORK = 1;
...
...
@@ -215,8 +216,8 @@ sub _update_isos {
             ,arch => 'amd64'
             ,xml => 'focal_fossa-amd64.xml'
             ,xml_volume => 'focal_fossa64-volume.xml'
-            ,url => 'http://cdimage.ubuntu.com/ubuntu-mate/releases/20.04/release/ubuntu-mate-20.04-desktop-amd64.iso'
-            ,md5_url => '$url/MD5SUMS'
+            ,url => 'http://cdimage.ubuntu.com/ubuntu-mate/releases/20.04.*/release/ubuntu-mate-20.04.*-desktop-amd64.iso'
+            ,sha256_url => '$url/SHA256SUMS'
         }
         ,mate_bionic => {
             name => 'Ubuntu Mate Bionic 64 bits'
...
...
@@ -225,7 +226,7 @@ sub _update_isos {
             ,xml => 'bionic-amd64.xml'
             ,xml_volume => 'bionic64-volume.xml'
             ,url => 'http://cdimage.ubuntu.com/ubuntu-mate/releases/18.04.*/release/ubuntu-mate-18.04.*-desktop-amd64.iso'
-            ,md5_url => '$url/MD5SUMS'
+            ,sha256_url => '$url/SHA256SUMS'
         }
         ,mate_bionic_i386 => {
             name => 'Ubuntu Mate Bionic 32 bits'
...
...
@@ -234,7 +235,7 @@ sub _update_isos {
             ,xml => 'bionic-i386.xml'
             ,xml_volume => 'bionic32-volume.xml'
             ,url => 'http://cdimage.ubuntu.com/ubuntu-mate/releases/18.04.*/release/ubuntu-mate-18.04.*-desktop-i386.iso'
-            ,md5_url => '$url/MD5SUMS'
+            ,sha256_url => '$url/SHA256SUMS'
         }
         ,mate_xenial => {
             name => 'Ubuntu Mate Xenial'
...
...
@@ -243,7 +244,7 @@ sub _update_isos {
             ,xml => 'yakkety64-amd64.xml'
             ,xml_volume => 'yakkety64-volume.xml'
             ,url => 'http://cdimage.ubuntu.com/ubuntu-mate/releases/16.04.*/release/ubuntu-mate-16.04.*-desktop-amd64.iso'
-            ,md5_url => '$url/MD5SUMS'
+            ,sha256_url => '$url/SHA256SUMS'
             ,min_disk_size => '10'
         }
         ,focal_fossa => {
...
...
@@ -252,9 +253,9 @@ sub _update_isos {
             ,arch => 'amd64'
             ,xml => 'focal_fossa-amd64.xml'
             ,xml_volume => 'focal_fossa64-volume.xml'
-            ,url => 'http://releases.ubuntu.com/20.04/'
-            ,file_re => '^ubuntu-20.04.*desktop-amd64.iso'
-            ,md5_url => '$url/MD5SUMS'
+            ,url => 'http://releases.ubuntu.com/20.04'
+            ,file_re => '^ubuntu-20.04.1-desktop-amd64.iso'
+            ,sha256_url => '$url/SHA256SUMS'
             ,min_disk_size => '9'
         }
...
...
@@ -266,7 +267,7 @@ sub _update_isos {
             ,xml_volume => 'bionic64-volume.xml'
             ,url => 'http://releases.ubuntu.com/18.04/'
             ,file_re => '^ubuntu-18.04.*desktop-amd64.iso'
-            ,md5_url => '$url/MD5SUMS'
+            ,sha256_url => '$url/SHA256SUMS'
             ,min_disk_size => '9'
         }
...
...
@@ -332,9 +333,9 @@ sub _update_isos {
             ,arch => 'amd64'
             ,xml => 'focal_fossa-amd64.xml'
             ,xml_volume => 'focal_fossa64-volume.xml'
-            ,md5_url => '$url/MD5SUMS'
-            ,url => 'http://cdimage.ubuntu.com/kubuntu/releases/20.04/release/'
-            ,file_re => 'kubuntu-20.04-desktop-amd64.iso'
+            ,sha256_url => '$url/SHA256SUMS'
+            ,url => 'http://cdimage.ubuntu.com/kubuntu/releases/20.04.*/release/'
+            ,file_re => 'kubuntu-20.04.*-desktop-amd64.iso'
             ,rename_file => 'kubuntu_focal_fossa_64.iso'
         }
         ,kubuntu_64 => {
...
...
@@ -343,9 +344,9 @@ sub _update_isos {
             ,arch => 'amd64'
             ,xml => 'bionic-amd64.xml'
             ,xml_volume => 'bionic64-volume.xml'
-            ,md5_url => '$url/MD5SUMS'
+            ,sha256_url => '$url/SHA256SUMS'
             ,url => 'http://cdimage.ubuntu.com/kubuntu/releases/18.04/release/'
-            ,file_re => 'kubuntu-18.04-desktop-amd64.iso'
+            ,file_re => 'kubuntu-18.04.\d+-desktop-amd64.iso'
             ,rename_file => 'kubuntu_bionic_64.iso'
         }
         ,kubuntu_32 => {
...
...
@@ -354,9 +355,9 @@ sub _update_isos {
             ,arch => 'i386'
             ,xml => 'bionic-i386.xml'
             ,xml_volume => 'bionic32-volume.xml'
-            ,md5_url => '$url/MD5SUMS'
+            ,sha256_url => '$url/SHA256SUMS'
             ,url => 'http://cdimage.ubuntu.com/kubuntu/releases/18.04/release/'
-            ,file_re => 'kubuntu-18.04-desktop-i386.iso'
+            ,file_re => 'kubuntu-18.04.\d+-desktop-i386.iso'
             ,rename_file => 'kubuntu_bionic_32.iso'
         }
         ,suse_15 => {
...
...
@@ -376,7 +377,7 @@ sub _update_isos {
             ,arch => 'amd64'
             ,xml => 'bionic-amd64.xml'
             ,xml_volume => 'bionic64-volume.xml'
-            ,md5_url => '$url/../MD5SUMS'
+            ,sha256_url => '$url/../SHA256SUMS'
             ,url => 'http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/current/images/netboot/'
             ,file_re => 'mini.iso'
             ,rename_file => 'xubuntu_bionic_64.iso'
...
...
@@ -406,7 +407,7 @@ sub _update_isos {
             name => 'Lubuntu Bionic Beaver 64 bits'
             ,description => 'Lubuntu 18.04 Bionic Beaver 64 bits'
             ,url => 'http://cdimage.ubuntu.com/lubuntu/releases/18.04.*/release/lubuntu-18.04.*-desktop-amd64.iso'
-            ,md5_url => '$url/MD5SUMS'
+            ,sha256_url => '$url/SHA256SUMS'
             ,xml => 'bionic-amd64.xml'
             ,xml_volume => 'bionic64-volume.xml'
         }
...
...
@@ -415,7 +416,7 @@ sub _update_isos {
             ,description => 'Lubuntu 18.04 Bionic Beaver 32 bits'
             ,arch => 'i386'
             ,url => 'http://cdimage.ubuntu.com/lubuntu/releases/18.04.*/release/lubuntu-18.04.*-desktop-i386.iso'
-            ,md5_url => '$url/MD5SUMS'
+            ,sha256_url => '$url/SHA256SUMS'
             ,xml => 'bionic-i386.xml'
             ,xml_volume => 'bionic32-volume.xml'
         }
...
...
@@ -572,6 +573,10 @@ sub _update_isos {
 }
 sub _scheduled_fedora_releases($self,$data) {
+    return if !exists $VALID_VM{KVM} || !$VALID_VM{KVM};
+    my $vm = $self->search_vm('KVM') or return;
+    # TODO move ISO downloads off KVM
     my @now = localtime(time);
     my $year = $now[5]+1900;
     my $month = $now[4]+1;
...
...
@@ -583,6 +588,7 @@ sub _scheduled_fedora_releases($self,$data) {
         = 'http://ftp.halifax.rwth-aachen.de/fedora/linux/releases/';
     my $release = 27;
     for my $y ( 2018 .. $year ) {
         for my $m ( 5, 11 ) {
+            return if $y == $year && $m > $month;
...
...
@@ -593,13 +599,25 @@ sub _scheduled_fedora_releases($self,$data) {
             my $url = $url_archive;
             $url = $url_current if $y >= $year - 1;
+            my $url_file = $url.$release.'/Workstation/x86_64/iso/Fedora-Workstation-.*-x86_64-'.$release.'-.*\.iso';
+            my @found = $vm->_search_url_file($url_file);
+            if ( !@found ) {
+                next if $url =~ m{//archives};
+                $url_file = $url_archive.$release.'/Workstation/x86_64/iso/Fedora-Workstation-.*-x86_64-'.$release.'-.*\.iso';
+                @found = $vm->_search_url_file($url_file);
+                next if !scalar(@found);
+            }
             $data->{$name} = {
                 name => 'Fedora '.$release
                 ,description => "RedHat Fedora $release Workstation 64 bits"
-                ,url => $url.$release.'/Workstation/x86_64/iso/Fedora-Workstation-.*-x86_64-'.$release.'-.*\.iso'
                 ,arch => 'amd64'
+                ,url => $url_file
                 ,xml => 'xenial64-amd64.xml'
                 ,xml_volume => 'xenial64-volume.xml'
                 ,sha256_url => '$url/Fedora-Workstation-'.$release.'-.*-x86_64-CHECKSUM'
...
...
@@ -884,7 +902,9 @@ sub _remove_old_isos {
         ,"DELETE FROM iso_images "
             ."WHERE name like 'Debian Buster 32%'"
             ." AND file_re like '%xfce-CD-1.iso'"
+        ,"DELETE FROM iso_images "
+            ."WHERE (name LIKE 'Ubuntu Focal%' OR name LIKE 'Ubuntu Bionic%' )"
+            ." AND ( md5 IS NOT NULL OR md5_url IS NOT NULL)"
     ) {
         my $sth = $CONNECTOR->dbh->prepare($sql);
         $sth->execute();
...
...
@@ -1550,6 +1570,9 @@ sub _upgrade_tables {
     }
     $self->_upgrade_table('domains','shared_storage','varchar(254)');
     $self->_upgrade_table('domains','post_shutdown','int not null default 0');
+    $self->_upgrade_table('domains','post_hibernated','int not null default 0');
+    $self->_upgrade_table('domains','is_compacted','int not null default 0');
+    $self->_upgrade_table('domains','has_backups','int not null default 0');
     $self->_upgrade_table('domains_network','allowed','int not null default 1');
...
...
@@ -2208,6 +2231,7 @@ sub list_domains_data($self, %args ) {
     my $sth = $CONNECTOR->dbh->prepare($query);
     $sth->execute(@values);
     while ( my $row = $sth->fetchrow_hashref ) {
+        $row->{date_changed} = 0 if !defined $row->{date_changed};
         lock_hash(%$row);
         push @domains,($row);
     }
...
...
@@ -3102,8 +3126,9 @@ sub _cmd_open_iptables {
 sub _cmd_clone($self, $request) {
-    return _req_clone_many($self, $request)
-        if $request->defined_arg('number') && $request->defined_arg('number') > 1;
+    return _req_clone_many($self, $request)
+        if ($request->defined_arg('number') && $request->defined_arg('number') > 1)
+            || (!$request->defined_arg('name') && $request->defined_arg('add_to_pool'));
     my $domain = Ravada::Domain->open($request->args('id_domain'))
         or confess "Error: Domain ".$request->args('id_domain')." not found";
...
...
@@ -3744,7 +3769,7 @@ sub _cmd_list_network_interfaces($self, $request) {
 sub _cmd_list_isos($self, $request){
     my $vm_type = $request->args('vm_type');
     my $vm = Ravada::VM->open( type => $vm_type );
     $vm->refresh_storage();
     my @isos = sort { "\L$a" cmp "\L$b" } $vm->search_volume_path_re(qr(.*\.iso$));
...
...
@@ -3765,6 +3790,42 @@ sub _cmd_set_time($self, $request) {
     die "$@ , retry.\n" if $@;
 }
+
+sub _cmd_compact($self, $request) {
+    my $id_domain = $request->args('id_domain');
+    my $domain = Ravada::Domain->open($id_domain)
+        or do {
+            $request->retry(0);
+            Ravada::Request->refresh_vms();
+            die "Error: domain $id_domain not found\n";
+        };
+    my $uid = $request->args('uid');
+    my $user = Ravada::Auth::SQL->search_by_id($uid);
+    die "Error: user ".$user->name." not allowed to compact ".$domain->name
+        unless $user->is_operator || $uid == $domain->_data('id_owner');
+
+    $domain->compact($request);
+}
+
+sub _cmd_purge($self, $request) {
+    my $id_domain = $request->args('id_domain');
+    my $domain = Ravada::Domain->open($id_domain)
+        or do {
+            $request->retry(0);
+            Ravada::Request->refresh_vms();
+            die "Error: domain $id_domain not found\n";
+        };
+    my $uid = $request->args('uid');
+    my $user = Ravada::Auth::SQL->search_by_id($uid);
+    die "Error: user ".$user->name." not allowed to compact ".$domain->name
+        unless $user->is_operator || $uid == $domain->_data('id_owner');
+
+    $domain->purge($request);
+}
+
 sub _migrate_base($self, $domain, $node, $uid, $request) {
     my $base = Ravada::Domain->open($domain->id_base);
     return if $base->base_in_vm($node->id);
...
...
@@ -3883,14 +3944,18 @@ sub _refresh_active_domains($self, $request=undef) {
         $self->_refresh_active_domain($domain, \%active_domain) if $domain;
     } else {
         my @domains;
-        eval { @domains = $self->list_domains };
+        eval { @domains = $self->list_domains_data };
         warn $@ if $@;
-        for my $domain (@domains) {
-            next if $active_domain{$domain->id};
-            next if $domain->is_hibernated;
+        for my $domain_data ( sort { $b->{date_changed} cmp $a->{date_changed} } @domains ) {
+            $request->error("checking $domain_data->{name}") if $request;
+            next if $active_domain{$domain_data->{id}};
+            my $domain = Ravada::Domain->open($domain_data->{id});
+            next if !$domain;
             $self->_refresh_active_domain($domain, \%active_domain);
+            $self->_remove_unnecessary_downs($domain) if !$domain->is_active;
         }
+        $request->error("checked ".scalar(@domains)) if $request;
     }
     return \%active_domain;
 }
...
...
@@ -3931,7 +3996,8 @@ sub _refresh_disabled_nodes($self, $request = undef ) {
 sub _refresh_active_domain($self, $domain, $active_domain) {
     $domain->check_status();
-    return if $domain->is_hibernated();
+    return $self->_refresh_hibernated($domain) if $domain->is_hibernated();
     my $is_active = $domain->is_active();
...
...
@@ -3947,6 +4013,12 @@ sub _refresh_active_domain($self, $domain, $active_domain) {
         if $domain->_data('status') eq 'shutdown' && !$domain->_data('post_shutdown');
 }
+
+sub _refresh_hibernated($self, $domain) {
+    return unless $domain->is_hibernated();
+    $domain->_post_hibernate() if !$domain->_data('post_hibernated');
+}
+
 sub _refresh_down_domains($self, $active_domain, $active_vm) {
     my $sth = $CONNECTOR->dbh->prepare(
         "SELECT id, name, id_vm FROM domains WHERE status='active'"
...
...
@@ -3997,12 +4069,11 @@ sub _refresh_volatile_domains($self) {
             $domain->_post_shutdown(user => $USER_DAEMON);
             $domain->remove($USER_DAEMON);
         } else {
-            confess;
-            my $sth = $CONNECTOR->dbh->prepare("DELETE FROM users where id=?"." AND is_temporary=1");
-            $sth->execute($id_owner);
-            $sth->finish;
+            cluck "Warning: temporary user id=$id_owner should already be removed";
+            my $user;
+            eval { $user = Ravada::Auth::SQL->search_by_id($id_owner) };
+            warn $@ if $@;
+            $user->remove() if $user;
         }
         my $sth_del = $CONNECTOR->dbh->prepare("DELETE FROM domains WHERE id=?");
         $sth_del->execute($id_domain);
...
...
@@ -4129,6 +4200,8 @@ sub _req_method {
         ,remove_hardware => \&_cmd_remove_hardware
         ,change_hardware => \&_cmd_change_hardware
         ,set_time => \&_cmd_set_time
+        ,compact => \&_cmd_compact
+        ,purge => \&_cmd_purge

 # Domain ports
         ,expose => \&_cmd_expose
...
...
@@ -4335,15 +4408,14 @@ sub _clean_temporary_users($self) {
         ."WHERE u.is_temporary = 1 AND u.date_created < ?"
     );
-    my $sth_del = $CONNECTOR->dbh->prepare(
-        "DELETE FROM users"
-        ."WHERE is_temporary = 1 AND id=?"
-    );
     my $one_day = _date_now(-24 * 60 * 60);
     $sth_users->execute($one_day);
     while ( my ($id_user, $id_domain, $date_created) = $sth_users->fetchrow ) {
         next if $id_domain;
-        $sth_del->execute($id_user);
+        my $user;
+        eval { $user = Ravada::Auth::SQL->search_by_id($id_user) };
+        warn $@ if $@;
+        $user->remove() if $user;
     }
 }
...
...
@@ -4366,10 +4438,10 @@ sub _clean_volatile_machines($self, %args) {
             eval { $domain_real->remove($USER_DAEMON) };
             warn $@ if $@;
         } elsif ($domain->{id_owner}) {
             my $sth = $CONNECTOR->dbh->prepare("DELETE FROM users where id=?"." AND is_temporary=1");
             $sth->execute($domain->{id_owner});
             my $user;
             eval { $user = Ravada::Auth::SQL->search_by_id($domain->{id_owner}) };
             warn $@ if $@;
             $user->remove() if $user;
         }
         $sth_remove->execute($domain->{id});
...
...
@@ -4423,10 +4495,9 @@ Sets debug global variable from setting
 =cut

 sub set_debug_value($self) {
-    $DEBUG = $self->setting('backend/debug');
+    $DEBUG = $FORCE_DEBUG || $self->setting('backend/debug');
 }

 =head2 setting

 Returns the value of a configuration setting
...
...
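To make the new $FORCE_DEBUG override above easier to follow, here is a short hedged sketch of how it interacts with the setting() accessor documented in this file. The constructor call and the settings path are assumptions for illustration, not part of the diff.

    use Ravada;

    my $ravada = Ravada->new();                       # assumed constructor call
    my $stored = $ravada->setting('backend/debug');   # value of the backend/debug configuration setting
    $Ravada::FORCE_DEBUG = 1;                         # package variable added in this commit
    $ravada->set_debug_value();                       # $Ravada::DEBUG is now true regardless of $stored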
@@ -4478,3 +4549,4 @@ Sys::Virt
 =cut

 1;
lib/Ravada/Auth.pm
...
...
@@ -21,10 +21,12 @@ Initializes the submodules
 sub init {
     my ($config, $db_con) = @_;

-    if ($config->{ldap}) {
+    if ( $config->{ldap} && (!defined $LDAP_OK || $LDAP_OK) ) {
         eval {
+            $LDAP_OK = 0;
             require Ravada::Auth::LDAP;
             Ravada::Auth::LDAP::init($config);
             Ravada::Auth::LDAP::_connect_ldap();
+            $LDAP_OK = 1;
         };
         warn $@ if $@;
...
...
lib/Ravada/Auth/SQL.pm
...
...
@@ -584,8 +584,10 @@ Removes the user
 =cut

 sub remove($self) {
     confess if $self->name eq 'f';
-    my $sth = $$CON->dbh->prepare("DELETE FROM users where id=?");
+    my $sth = $$CON->dbh->prepare("DELETE FROM grants_user where id_user=?");
     $sth->execute($self->id);
+    $sth = $$CON->dbh->prepare("DELETE FROM users where id=?");
+    $sth->execute($self->id);
     $sth->finish;
 }
...
...
lib/Ravada/Domain.pm
...
...
@@ -241,6 +241,10 @@ sub _check_clean_shutdown($self) {
             || $self->_active_iptables(id_domain => $self->id)) {
         $self->_post_shutdown();
     }
+    if ($self->_data('status') eq 'hibernated' && !$self->_data('post_hibernated')) {
+        $self->_post_hibernate();
+    }
 }

 sub _set_last_vm($self,$force=0) {
...
...
@@ -299,7 +303,10 @@ sub _vm_disconnect {
 sub _around_start($orig, $self, @arg) {
+    $self->_post_hibernate() if $self->is_hibernated && !$self->_data('post_hibernated');
+    $self->_data('post_shutdown' => 0);
+    $self->_data('post_hibernated' => 0);
     $self->_start_preconditions(@arg);
     my %arg;
...
...
@@ -820,7 +827,7 @@ sub _pre_prepare_base($self, $user, $request = undef ) {
 # TODO: if disk is not base and disks have not been modified, do not generate them
 #  again, just re-attach them
 #    $self->_check_disk_modified(
     confess "ERROR: domain ".$self->name." is already a base" if $self->is_base();
     $self->_check_has_clones();
...
...
@@ -987,7 +994,7 @@ sub _check_cpu_usage($self, $request=undef){
     chomp(my $cpu_count = `grep -c -P '^processor\\s+:' /proc/cpuinfo`);
     die "Error: Too many active domains." if (scalar $self->_vm->vm->list_domains() >= $self->_vm->active_limit);
     }
     my @cpu;
     my $msg;
     for ( 1 .. 10 ) {
...
...
@@ -1213,7 +1220,7 @@ sub _data($self, $field, $value=undef, $table='domains') {
     $self->{$data} = $self->_select_domain_db(_table => $table, @field_select);
     confess "No DB info for domain @field_select in $table ".$self->name
         if !exists $self->{$data};
     confess "No field $field in $data ".Dumper(\@field_select)."\n".Dumper($self->{$data})
         if !exists $self->{$data}->{$field};
...
...
@@ -1579,7 +1586,7 @@ sub info($self, $user) {
         ,volatile_clones => $self->volatile_clones
         ,id_vm => $self->_data('id_vm')
     };
-    for (qw(comment screenshot id_owner shutdown_disconnected)) {
+    for (qw(comment screenshot id_owner shutdown_disconnected is_compacted has_backups)) {
         $info->{$_} = $self->_data($_);
     }
     if ($is_active) {
...
...
@@ -1623,6 +1630,8 @@ sub info($self, $user) {
     $info->{cdrom} = \@cdrom;
     $info->{requests} = $self->list_requests();
+    Ravada::Front::init_available_actions($user, $info);

     return $info;
 }
...
...
@@ -2221,7 +2230,7 @@ sub _pre_remove_base {
     my ($domain) = @_;
     _allow_manage(@_);
     _check_has_clones(@_);
     if ( !$domain->is_local ) {
         my $vm_local = $domain->_vm->new( host => 'localhost' );
         confess "Error: I can't find local virtual manager ".$domain->type
...
...
@@ -2389,6 +2398,14 @@ sub _copy_clone($self, %args) {
         ,from_pool => 0
         ,@copy_arg
     );
+    _copy_volumes($self, $copy);
+    _copy_ports($self, $copy);
     $copy->is_pool(1) if $add_to_pool;
     return $copy;
 }
+
+sub _copy_volumes($self, $copy) {
+    my @volumes = $self->list_volumes_info(device => 'disk');
+    my @copy_volumes = $copy->list_volumes_info(device => 'disk');
...
...
@@ -2398,8 +2415,21 @@ sub _copy_clone($self, %args) {
         copy($volumes{$target}, $copy_volumes{$target})
             or die "$! $volumes{$target}, $copy_volumes{$target}"
     }
-    $copy->is_pool(1) if $add_to_pool;
-    return $copy;
 }
+
+sub _copy_ports($base, $copy) {
+    my %port_already;
+    for my $port ( $copy->list_ports ) {
+        $port_already{$port->{internal_port}}++;
+    }
+    for my $port ( $base->list_ports ) {
+        my %port = %$port;
+        next if $port_already{$port->{internal_port}};
+        delete @port{'id','id_domain','public_port'};
+        $copy->expose(%port);
+    }
+}
+
 sub _post_pause {
...
...
@@ -2410,8 +2440,9 @@ sub _post_pause {
     $self->_remove_iptables();
 }

-sub _post_hibernate($self, $user) {
+sub _post_hibernate($self, $user=undef) {
     $self->_data(status => 'hibernated');
+    $self->_data(post_hibernated => 1);
     $self->_remove_iptables();
     $self->_close_exposed_port();
 }
...
...
@@ -2495,7 +2526,7 @@ sub _post_shutdown {
                 id_domain => $self->id
                 ,id_vm => $self->_vm->id
                 ,uid => $arg{user}->id
                 ,at => time + $timeout
             );
     }
     if ($self->is_volatile) {
...
...
@@ -2787,6 +2818,13 @@ sub _set_public_port($self, $id_port, $internal_port, $name, $restricted) {
     }
 }

 sub _used_ports_iptables($self, $port) {