Ooh nice, I tried to get Ceph up on my Orange Pi Plus 2E, but it failed in the end as 32-bit ARM packages aren't available for recent releases and I couldn't get it to compile either. 64-bit ARM does have packages, so I'll pick up something A53-based eventually and give it another try.
Management for my home deployment has been mostly hands-off, but I was previously operations staff at a datacentre where we ran an 80-node, ~500-spindle cluster, so I know it ties into Graphite nicely, among other things. There are Puppet and Ansible modules if config management is your thing. For home I've got a private GitHub repo with the keys and config stored within.
I don't know erasure-coded pools well; they're comparatively new and, as far as I know, less flexible. I run replicated at home, so adding a disk is easy enough (ceph-deploy!) and then I generally don't have to do anything, unless my placement groups per disk are a bit low (non-optimal placement, poor disk space utilisation), in which case I bump up the PGs per pool. Maybe in the future I'll need to shrink; you can't reduce a pool's PG count, but you can make a new pool with a low PG count and then copy your data across to it.
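For anyone curious what that looks like in practice, here's a rough sketch of both operations. The pool names (mypool, mypool-new) and PG counts are made up for illustration; check your own cluster's sizing before changing anything, since PG increases can't be undone on older releases.

```shell
# Bump placement groups on an existing replicated pool.
# pgp_num should follow pg_num so data actually rebalances.
ceph osd pool set mypool pg_num 256
ceph osd pool set mypool pgp_num 256

# "Shrinking": create a new pool with fewer PGs and copy the data across.
ceph osd pool create mypool-new 64
rados cppool mypool mypool-new
```

After verifying the copy, you'd delete the old pool and rename the new one into place; rados cppool is a blunt instrument, so stop clients writing to the pool first.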
Oh yeah, and you can make this a bit easier by using CephFS on top. Support for it has been in the Linux kernel for a while now, so on my other VMs (Plex, torrents etc.) there's one line in fstab and a key file. That's it, connected to storage.
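For reference, that one fstab line looks something like the below. The monitor address, mount point, and secret file path are placeholders for whatever your cluster uses; the kernel client reads the key from the secretfile so it doesn't end up in fstab itself.

```shell
# /etc/fstab entry for a kernel CephFS mount (addresses/paths are examples)
192.168.1.10:6789:/  /mnt/ceph  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0 0
```

The _netdev option matters on VMs: it stops the mount being attempted before the network is up at boot.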