Servers/Proxmox
Revision as of 20:37, 12 October 2017
Proxmox WebUI User Management
Creating Users
https://pve.proxmox.com/wiki/User_Management
Creating a user in the pam realm is easiest, but creating one in the pve realm is probably safer, since it keeps that password separate from your sudo password.
Make sure to enable two-factor authentication afterward.
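Users and permissions can also be managed from the Proxmox shell with pveum. A sketch of creating a pve-realm user; the user name "alice" and the group name "admins" are placeholder examples:

```shell
# Create a group and a pve-realm user (placeholder names).
pveum group add admins -comment "Admin group"
pveum user add alice@pve -comment "Alice"

# Set the user's password interactively.
pveum passwd alice@pve

# Grant the group the Administrator role on the whole tree,
# then put the user in the group.
pveum acl modify / -group admins -role Administrator
pveum user modify alice@pve -group admins
```

These commands only run on a Proxmox VE host; check `pveum help` for the exact subcommands available in your version.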
IPMI
Make sure to install ipmitool on your chosen distro. Replace <username> below with your chosen username.
# modprobe ipmi_devintf
# ipmitool user set name 2 <username>
# ipmitool user set password 2
# ipmitool user enable 2
You can reset the BMC to factory settings as described here:
https://siliconmechanics.zendesk.com/hc/en-us/articles/201143819-Resetting-the-BMC
IPMI Sideband
Use this to run IPMI over the same Ethernet interface as your normal network. IPMI requires its own IP address, so choose one out of your allotment and assign it, at least temporarily.
https://www.ibm.com/support/knowledgecenter/linuxonibm/liabw/liabwenablenetwork.htm
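The sideband address can also be set from the host with ipmitool. A sketch; the addresses here are examples, and the LAN channel number varies by board (often 1, sometimes 8):

```shell
# Find the right LAN channel first if unsure:
#   ipmitool channel info <n>

# Assign a static IP to the BMC (example addresses; use your own).
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 192.168.1.211
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 192.168.1.1

# Verify the settings took effect.
ipmitool lan print 1
```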
LXC vs KVM
You'll have to make a decision on whether to use LXC containers that share the host kernel, or KVM virtual machines that use their own kernel.
LXC containers are most similar to FreeBSD jails: they share the host's kernel and thus have the least overhead. However, this has security implications, since the host must be responsible for Mandatory Access Control, and you may want to vet the admin users of each container just in case. Because Mandatory Access Control is handled by the host, SELinux and AppArmor cannot be enabled inside the containers, so guests cannot customize their own policies with finer granularity. If an application requires a custom kernel (such as grsecurity), you're out of luck as well.
KVM virtual machines allow each VM to use a different kernel from the host and to run its own Mandatory Access Control. KVM also isolates host system resources from guest machines, providing greater security than OpenVZ's shared resources. The tradeoff is that multiple redundant kernels are active, consuming more resources than normal.
Unfortunately, we have many guests where the overhead caused by KVM would make them unusable.
Some of our MySQL guests simply stopped working under KVM (slowed down to a crawl) under high load, while the same workload under (containers) gives us 50% higher transactions per second, and much more graceful slowdown under extreme loads.
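In Proxmox, the two choices correspond to the pct and qm tools. A sketch of creating one of each; the VMIDs, template, and ISO names are examples (list real ones with `pveam available` and by browsing your storage):

```shell
# LXC container: shares the host kernel, minimal overhead.
pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
    --hostname web1 --memory 1024 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp

# KVM virtual machine: boots its own kernel from the attached ISO.
qm create 201 --name db1 --memory 2048 --cores 2 \
    --net0 virtio,bridge=vmbr0 \
    --scsi0 local-lvm:32 \
    --cdrom local:iso/debian-12.7.0-amd64-netinst.iso
```

Both commands only run on a Proxmox VE host; options differ slightly between versions, so check `man pct` and `man qm`.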
Apparmor
For OpenVZ/LXC containers, Mandatory Access Control is handled at the host level. This is why AppArmor and SELinux are disabled within the containers themselves.
NFS
To enable NFS, you must allow it through apparmor.
https://forum.proxmox.com/threads/advice-for-file-sharing-between-containers.25704/#post-129006
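The approach in the thread above amounts to giving the container an AppArmor profile that permits NFS mounts. A sketch, assuming a Debian-based Proxmox host; the profile name is our own choice:

```
# /etc/apparmor.d/lxc/lxc-default-with-nfs
profile lxc-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>
  mount fstype=nfs,
  mount fstype=nfs4,
  mount fstype=rpc_pipefs,
}
```

Reload AppArmor (`systemctl reload apparmor`), then point the container at the profile by adding `lxc.apparmor.profile: lxc-default-with-nfs` to its config under /etc/pve/lxc/. Newer Proxmox releases can instead enable this per container with `pct set <vmid> --features mount=nfs`.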