Servers/Proxmox

From Bibliotheca Anonoma
Revision as of 20:03, 12 October 2017

Proxmox WebUI User Management

Creating Users

https://pve.proxmox.com/wiki/User_Management

Creating users in the PAM realm is easiest, but creating them in the PVE realm is probably safer, since it keeps the Proxmox password separate from your sudo password.

Make sure to set up two-factor authentication afterward.
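User creation can also be done from the shell with pveum; a minimal sketch, assuming a hypothetical user alice in the pve realm and the built-in Administrator role:

```shell
# Create the user in the pve realm (stored in Proxmox's own user database)
pveum useradd alice@pve -comment "Proxmox admin"

# Set the PVE password (prompts interactively; independent of the Unix/sudo password)
pveum passwd alice@pve

# Grant the Administrator role on the whole permission tree
pveum aclmod / -user alice@pve -role Administrator
```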

IPMI

Make sure to install ipmitool on your chosen distro.

Replace <username> with your chosen username.

# modprobe ipmi_devintf                 # load the IPMI device interface kernel module
# ipmitool user set name 2 <username>   # assign the username to user ID 2
# ipmitool user set password 2          # prompts for the new password
# ipmitool user enable 2                # activate the account
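For network logins the account also needs channel-level privileges; a minimal sketch, assuming the LAN channel is 1 (it varies by board, so check with ipmitool channel info first):

```shell
# Grant user ID 2 ADMINISTRATOR (privilege level 4) access on LAN channel 1
ipmitool channel setaccess 1 2 callin=on ipmi=on link=on privilege=4

# Verify the user table on channel 1
ipmitool user list 1
```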

You can reset the BMC to factory settings as below:

https://siliconmechanics.zendesk.com/hc/en-us/articles/201143819-Resetting-the-BMC

IPMI Sideband

Tip: IPMI Sideband is useful if you are at a colocated datacenter and just want to test IPMI without having to ask a tech to attach a cable. But it is a good idea to disable sideband once IPMI is working on its native port, so that the host port can be disconnected when not needed.

Use this to run IPMI over the same network interface as the host. IPMI requires its own IP address, so choose one out of your allotment and assign it, at least temporarily.

https://www.ibm.com/support/knowledgecenter/linuxonibm/liabw/liabwenablenetwork.htm
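Assigning that IP can be done from the host via ipmitool; a minimal sketch, assuming LAN channel 1 and placeholder addresses (substitute one from your allotment):

```shell
ipmitool lan set 1 ipsrc static            # use a static address rather than DHCP
ipmitool lan set 1 ipaddr 192.0.2.10       # placeholder BMC address
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 192.0.2.1  # placeholder gateway
ipmitool lan print 1                       # confirm the settings took effect
```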

OpenVZ vs KVM

You'll have to make a decision on whether to use OpenVZ (LXC) containers that share the host kernel, or KVM virtual machines that use their own kernel.

OpenVZ/LXC containers are most similar to FreeBSD jails: they share the host's kernel and thus have the least overhead. However, this has security implications. The host is responsible for Mandatory Access Control, so you may want to vet the admin users of each container just in case. In addition, since Mandatory Access Control is handled by the host, SELinux and AppArmor cannot be enabled inside the containers, and the containers cannot customize their own policies with finer granularity. If an application requires a custom kernel (such as one with grsecurity patches), you're out of luck as well.

KVM virtual machines allow each VM to run a different kernel from the host and to enforce its own Mandatory Access Control. KVM also isolates host system resources from the guest machines, for greater security than OpenVZ's shared resources provide. The tradeoff is that multiple redundant kernels are running at once.
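On Proxmox the two guest types are managed by different CLIs, pct for containers and qm for KVM; a minimal sketch, with illustrative VM IDs, template, and ISO names:

```shell
# LXC container: shares the host kernel
pct create 100 local:vztmpl/debian-9.0-standard_9.0-2_amd64.tar.gz \
    --hostname ct1 --memory 1024 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp

# KVM virtual machine: boots its own kernel from the installer ISO
qm create 101 --name vm1 --memory 2048 \
    --net0 virtio,bridge=vmbr0 \
    --cdrom local:iso/debian-9.0-amd64-netinst.iso
```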

Apparmor

For OpenVZ containers, Mandatory Access Control is handled at the host level. This is why AppArmor and SELinux are disabled inside the containers themselves.
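You can see this from the host, where the containers' AppArmor profiles are actually enforced; a quick check, assuming the stock LXC profile names:

```shell
# On the Proxmox host: list enforced profiles and look for the LXC ones
aa-status | grep -i lxc
```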

NFS

To enable NFS, you must allow it through apparmor.

https://forum.proxmox.com/threads/advice-for-file-sharing-between-containers.25704/#post-129006
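The approach in the thread above amounts to giving the container an AppArmor profile that allows NFS mounts. A minimal sketch, assuming the profile name and include layout below and the older lxc.aa_profile config key (adapt to your LXC version); container 100 is illustrative:

```shell
# Create a profile that extends the LXC defaults with NFS mount permissions
cat > /etc/apparmor.d/lxc/lxc-default-with-nfs <<'EOF'
profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>
  mount fstype=nfs,
  mount fstype=nfs4,
  mount fstype=rpc_pipefs,
}
EOF

# Reload the LXC profile set so the kernel knows the new profile
apparmor_parser -r /etc/apparmor.d/lxc-containers

# Point the container at the new profile, then restart it
echo "lxc.aa_profile: lxc-container-default-with-nfs" >> /etc/pve/lxc/100.conf
pct stop 100 && pct start 100
```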