Servers/Proxmox

From Bibliotheca Anonoma

Proxmox WebUI User Management

Creating Users

https://pve.proxmox.com/wiki/User_Management

Creating users in the PAM realm is easiest, but creating them in the PVE realm is probably safer, since it keeps the web UI password separate from your sudo password.
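
As a minimal sketch, a PVE-realm user can also be created from the host shell with pveum, following the examples in the User Management page linked above. The user name, group name, and Administrator role below are placeholders; adjust them to your setup.

# pveum groupadd admins -comment "Administrators"
# pveum useradd alice@pve -comment "Alice"
# pveum passwd alice@pve
# pveum usermod alice@pve -group admins
# pveum aclmod / -group admins -role Administrator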

Make sure to set up two-factor authentication afterwards.

IPMI

Make sure to install ipmitool on your chosen distro.

Replace <username> with your chosen username.

# modprobe ipmi_devintf
# ipmitool user set name 2 <username>
# ipmitool user set password 2
# ipmitool user enable 2
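
Depending on the board, the new user may also need an administrator privilege level and LAN access before it can log in remotely. A hedged sketch, assuming user ID 2 and LAN channel 1 (check yours with ipmitool channel info):

# ipmitool user priv 2 4 1
# ipmitool channel setaccess 1 2 callin=on ipmi=on link=on privilege=4
# ipmitool user list 1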

You can reset the BMC to factory settings as below:

https://siliconmechanics.zendesk.com/hc/en-us/articles/201143819-Resetting-the-BMC
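
The factory-reset procedure in that article is vendor-specific. If you only need to reboot a hung BMC without wiping its settings, a plain cold reset is usually enough:

# ipmitool mc reset cold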

IPMI Sideband

Tip: IPMI sideband is useful if your server is colocated at a datacenter and you just want to test IPMI without asking a technician to cable up the dedicated port. Once IPMI is working on its native port, though, it is a good idea to disable sideband again so that the shared host port can be disconnected when not needed.

Use this to run IPMI over the same Ethernet interface as the host network. IPMI still requires its own IP address, so choose one from your allotment and assign it to the BMC, at least temporarily.

https://www.ibm.com/support/knowledgecenter/linuxonibm/liabw/liabwenablenetwork.htm
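
A sketch of assigning that address with ipmitool from the host, assuming LAN channel 1 and example RFC 5737 addresses (substitute your own):

# ipmitool lan set 1 ipsrc static
# ipmitool lan set 1 ipaddr 192.0.2.10
# ipmitool lan set 1 netmask 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 192.0.2.1
# ipmitool lan print 1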

LXC vs KVM

You'll have to decide whether to use LXC containers, which share the host kernel, or KVM virtual machines, which run their own kernels.

LXC containers are most similar to FreeBSD jails: they use the same kernel as the host and thus have the least overhead. The security implication is that the host must be responsible for Mandatory Access Control, and despite all the protections, container admins are still running code on your host kernel, so you should only grant admin access inside a container to users you trust. In addition, since Mandatory Access Control is handled by the host, SELinux and AppArmor cannot be enabled at the container level, and containers cannot customize their own policies with more granularity. If an application requires a custom kernel (such as grsecurity), you're out of luck as well.

KVM virtual machines run their own kernels and their own Mandatory Access Control, independent of the host. KVM also isolates host system resources from the guests, which is more secure than the shared resources of OpenVZ-style containers. The tradeoff is that multiple redundant kernels are running at once and consume considerably more resources, which may be unacceptable for workloads with heavy I/O or RAM usage, since everything passes through two kernels.

From a Proxmox forum user (https://forum.proxmox.com/threads/moving-to-lxc-is-a-mistake.25603/#post-128412):

"Unfortunately, we have many guests where the overhead caused by KVM would make them unusable. Some of our MySQL guests simply stopped working under KVM (slowed down to a crawl) under high load, while the same workload under (containers) gives us 50% higher transactions per second, and much more graceful slowdown under extreme loads."
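
On a Proxmox host the two options correspond to two management CLIs: pct for LXC containers and qm for KVM virtual machines. A rough sketch with placeholder VMIDs, names, and images; check pveam and your ISO storage for real template and installer names:

# pct create 100 local:vztmpl/<template>.tar.gz -hostname ct1 -memory 1024 -storage local-lvm -net0 name=eth0,bridge=vmbr0,ip=dhcp
# qm create 101 -name vm1 -memory 2048 -net0 virtio,bridge=vmbr0 -scsihw virtio-scsi-pci -scsi0 local-lvm:32 -cdrom local:iso/<installer>.iso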

AppArmor

For OpenVZ and LXC containers alike, Mandatory Access Control is handled at the host level. This is why AppArmor and SELinux are disabled inside the containers themselves.

NFS

To enable NFS mounts inside a container, you must allow them through AppArmor on the host.

https://forum.proxmox.com/threads/advice-for-file-sharing-between-containers.25704/#post-129006

Unfortunately, Proxmox's documentation for allowing NFS mounts is outdated, and since they ship their own LXC fork, the official LXC docs are of little help either.
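
As a heavily hedged sketch based on the forum thread above: the usual approach is to define an extra AppArmor profile on the Proxmox host that permits NFS mounts, then point the container at it. The profile name, the paths, and the config key (lxc.aa_profile on the LXC 2.x that ships with Proxmox 4/5; lxc.apparmor.profile on LXC 3 and later) are assumptions, so verify them against your version before relying on this.

# cat > /etc/apparmor.d/lxc/lxc-default-with-nfs <<'EOF'
profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>
  mount fstype=nfs,
  mount fstype=nfs4,
  mount fstype=rpc_pipefs,
}
EOF
# apparmor_parser -r /etc/apparmor.d/lxc-containers
# echo 'lxc.aa_profile: lxc-container-default-with-nfs' >> /etc/pve/lxc/<vmid>.conf

Restart the container afterwards so it picks up the new profile.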