Even for a homelab these are useful features to have; I don't want to go hooking up my monitor and keyboard every time I need to troubleshoot a boot issue or install a new OS. I can also get used DDR3 ECC memory for a hell of a lot cheaper than DDR4 right now.
Unfortunately the person who wrote this article is in the EU, where the second-hand market has much slimmer pickings. I can buy a Dell R620 for $200-300, two 10-core Xeons for another $300 if it doesn't come with enough cores, and 128GB of 16GB DDR3 RDIMMs for $200 - total price under $1000 USD, and that's excluding any components that come with the server that I may opt NOT to replace.
If you buy Supermicro motherboards, which come in regular form factors too - for example the X9SCM-F or X11SSL-F - they come with IPMI (which is what the -F designates), which is the same kind of thing as iLO, iDRAC, etc., in a normal form factor. You don't have to buy fancy jet-engine "server" boxes when you can get the same stuff in a mATX form factor. No license needed.
I have been using the X9SCM-F for the last 9 years, and enjoy being able to modify the BIOS remotely, boot remote ISO images, or troubleshoot any boot problems without physical presence.
The original parent comment was talking about installing a Ryzen consumer CPU, which generally rules out any board (from SMC, Dell, HP, or otherwise) that provides any sort of baseboard management controller.
Some of the SMC boards listed support i3 processors, but generally, if you want a BMC, you're stuck with enterprise-grade processors.
I listed two motherboards I personally researched in the past and knew about already. There are plenty of consumer Supermicro boards that have IPMI. Here's one I found just by browsing the first page of their workstation motherboard section. 
I'm not disputing, just noting: this is on the page from your previous comment:
Intel® 8th/9th Generation Core i9/Core i7/Core i5/Core i3/Pentium®/Celeron® series Processor, Intel® Xeon® E-2100 Processor, Intel® Xeon® E-2200 Processor.
Single Socket LGA-1151 (Socket H4) supported, CPU TDP support up to 95W.
> they come with IPMI... which is exactly what iLO, DRAC, etc., is
iLO / iDRAC implement IPMI, but they are not the same thing. iLO and iDRAC provide web interfaces, remote ISO mounting, granular boot control, remote VGA, etc. There are a lot of useful features not available in a "standard" IPMI implementation.
> Systems compliant with IPMI version 2.0 can also communicate via serial over LAN, whereby serial console output can be remotely viewed over the LAN. Systems implementing IPMI 2.0 typically also include KVM over IP, remote virtual media and out-of-band embedded web-server interface functionality, although strictly speaking, these lie outside of the scope of the IPMI interface standard. 
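For what it's worth, the serial-over-LAN piece is easy to try from any Linux box with ipmitool (a minimal sketch; the host, user, and password are placeholders):

    # attach to the server's serial console over the LAN (IPMI 2.0 SOL)
    ipmitool -I lanplus -H bmc.example.lan -U admin -P secret sol activate
    # detach again with the escape sequence ~. (tilde, then period)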
They are both different levels of bad. At least Supermicro's is free. The whole low-level BIOS and BMC space is stuck in some 90s parallel universe. coreboot+OpenBMC can't come to the masses soon enough. It would be like what Linux felt like after Windows.
Sure, but the licenses to unlock them are easy to acquire.
Dell ties them to the service tag, which is quite easy to set via racadm and you can buy a license off eBay. Older Dell systems required a $30 part you can find on eBay. HP just uses a key they send you on paper.
Hell, many times these decoms still have the license applied to them. I have 2x R210 IIs, an R320, an R520, and formerly had an HP ML10 and a Lenovo TD340. The Lenovo was the biggest pain to get vKVM working on because of the stupid, hard-to-find dongle.
Don't buy Lenovo servers for a homelab, BTW. Dell lets you override the ramp-up in fan speed when a non-Dell-branded PCIe card is installed; newer Lenovo servers ramp the fans to 100% and you can't turn it off (I had to leave my TD340 on the initial BIOS revision, security issues and all, because of this).
Actually, remote display is possible with the base Dell iDRAC Express license by enabling SSH console COM2 redirection. This is not enabled by default, and it's not at all obvious that it's even possible (probably because Dell wants to sell those iDRAC licenses). For me, the COM2 redirection is all I need for a server in a datacenter without physical access. For example, if the server OS is not booting, I can still ssh to the iDRAC and change BIOS options or select alternative boot options in GRUB. For OS installs I use chroot. So really no need for an iDRAC license in my case.
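Roughly, the flow looks like this (a sketch from memory; the racadm attribute names vary between iDRAC generations, so treat them as an example):

    # enable serial console redirection (legacy racadm syntax, iDRAC6/7-era)
    racadm config -g cfgSerial -o cfgSerialConsoleEnable 1
    # then ssh into the iDRAC itself and attach to the host's COM2
    ssh root@idrac.example.lan
    /admin1-> console com2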
I have been running home servers for ~10 years, and I never felt the need for iLO, even when I moved the stuff into the basement. I have iLO on the servers at work, so I am not dismissing the utility, just the need at home.
It can happen once in a while. I've been not-at-home for the past few months and my mail server fell over one day. I went to my apartment, turned it on, made sure mail was flowing, then went home. It had turned itself off again in the meantime. I drove out again and turned it on, babysitting it for a few hours to see what might be happening. It's been up ever since, so: mystery. I would have liked a telephone/internet rebooter during all of this!
I'd really love to see a hypervisor that emulates IPMI/iLO/AMT/whichever flavor management hardware on machines that lack the capability. It should run a single VM only, and pass through all other hardware, but also cordon off eg. a single TCP port, and allow the user to power cycle the VM, along with remote control (could be done through VNC). Bonus points for exposing temperature and other sensors.
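Nothing off the shelf does all of that as far as I know, but the power-control half is buildable today with libvirt. A minimal sketch, assuming libvirt-python and a guest I've arbitrarily named "baremetal" (no auth, so for illustration only):

    #!/usr/bin/env python3
    # Tiny TCP "BMC": send "on", "off" or "cycle" to port 9001 to control one guest.
    import socketserver
    import libvirt

    class PowerHandler(socketserver.StreamRequestHandler):
        def handle(self):
            cmd = self.rfile.readline().decode().strip()
            conn = libvirt.open("qemu:///system")
            dom = conn.lookupByName("baremetal")   # hypothetical guest name
            if cmd == "off" and dom.isActive():
                dom.destroy()                      # hard power-off, like pulling the plug
            elif cmd == "on" and not dom.isActive():
                dom.create()                       # power on
            elif cmd == "cycle":
                if dom.isActive():
                    dom.destroy()
                dom.create()
            self.wfile.write(b"ok\n")
            conn.close()

    socketserver.TCPServer(("0.0.0.0", 9001), PowerHandler).serve_forever()

Remote console is the easier part: point the guest's VNC listener at the same cordoned-off interface.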
But when that hypervisor fails to boot you are SOL. I have vFlash cards in all my Dell servers with the OS install media and a recovery image so I can fix broken installs quickly. Oh, and that stupid PCIe training error from a NIC that went bad would have required I plug in my monitor as well.
It would be useful, but why not just run VMWare or Proxmox at that point?
At a potential performance penalty and significant extra attack surface (you now have to worry about your hypervisor's attack surface and your actual OS' attack surface, whereas on bare metal you only have a single OS to worry about). Not to downplay the advantages of virtualisation, but if all you needed was a bare metal server, then running a hypervisor adds one extra thing that can go wrong that you didn't ask for.
You can get a Tyan Tomcat EX or one of the ASRock Rack AM4 boards that have an Aspeed AST2500 chip to provide remote management. However, it might be difficult (and expensive) to get 4 32GB UDIMMs to work properly.
I almost went down that sort of road and went 3900X instead. I think the one non-replicated benefit of the older server route is cheap and large amounts of ECC RAM, but you're also getting slower DDR3 RAM, which is why the pricing works out the way it does. I was in the same pricing ballpark with a new-build Ryzen with a 2070 Super; I think you could make it pretty much equivalent, maybe trading 64GB of fast DDR4 for the 128GB of DDR3. As long as you can make do with 24 CPU threads, I think you'll be in a much better position at same-ish cost but with new, full-warranty components - but that's just my math :-).
128GB of ECC RAM by itself is going to be ~$800-1000, as far as I can find, for a Ryzen build. So it won't be trivial to reach that price point for a full build, although Ryzen is clearly great value anyway.
Yes, but build it with 128GB of RAM and try to get a lights-out KVM on it.
Much better value can be had than OP's buy - I just picked up a Dell R820 (quad 8-core 3.2GHz v2 Xeons; hyperthreading is overrated, yo) with 768GB of RAM and dual 10G NICs for $1500. Idles around 170W (about $5/mo where I live). The real magic: 8 full (x16 PCIe 3.0) slots.
The 3900x is a beast for the workstation though, and I have it running there with nothing but praise.
Good point. If you're looking for server features on AM4, I highly recommend the ASRock Rack X470D4U motherboard for IPMI. Buying used HPE G8/G9 servers is still a great deal when you need redundant power and large amounts of RAM. Their 10G and 40G FlexibleLOM cards can be found on the cheap too. TechMikeNY is a great source of refurbished servers in the NYC area.
Not after you price in the power. I have a Xeon system that is plenty powerful for my (C++) dev work, but I'm thinking of replacing it with an AMD build as it'll probably be cheaper over time. Then again, this is at Dutch energy costs; I'd imagine at (say) Texas rates it'd be different.
It seems like the author wasn't targeting benchmarks exclusively, and comparing the price of a CPU to the price of a whole system isn't fair. I spent about $2200 on a system that was targeting compute value: I have 4x E7-4890 v2 for a total of 60 cores and can get >80000 on multi-core Geekbench. I also have 512GB of RAM, which I think is more than Ryzen can handle.
I paid $85 per CPU, so while a new Ryzen 9 does make sense for a lot of people with its fast single threaded speed and low power consumption, the old server gear I bought still wins in highly parallel tasks for less money.
I haven't measured power consumption with a meter, but each CPU has a TDP of 155 W, so potentially 620 W plus whatever is required to keep the motherboard, RAM, and disks alive. My ballpark guess is 750 W at full load and under 200 W at idle. It runs off a 1600 W power supply, but I suspect that is overkill. It's definitely less power-efficient than a new Ryzen CPU.
I have an almost identical setup hardware-wise (I opted for dual 2687 v2s [better single-core performance]) and it only cost me $1000. It also supports 20 physical disks with no extra hardware or configuration needed (aside from just plugging in the disks and adding 'em to ZFS).
RAM (DDR3) and storage capacity (cheap SAS drives) in these systems are really where the savings are... I don't think I'll ever own a "regular" desktop again.
Well, that's used stuff, so Moore's Law's ghost is still in effect.
The biggest problem I have with a used server is the inability to incorporate a GPU into the architecture. But I guess that really depends on what you want to do with it; serving a website will absolutely not require a GPU.
Neat machine, but 96W idle is a lot for a home server, IMO. Maybe you're somewhere where power's super cheap (and hopefully clean), but a lot of folks aren't.
I run my old desktop (a i7-6700K) in a rack in my basement, now, with 64GB of RAM, a Mellanox Connect-X for 10G networking, and half a dozen disks, and it idles under 15W. The entire rack, UniFi stuff/POE wifi APs and all, sits around 50W. 96W just for a single machine is A Lot.
In addition to wattage, my other concern for a home server was fan noise. If it's anywhere that I am, I really don't want to hear it. So I got these amazing Noctua fans and simply can't hear the server. (Updating language based on replies:) a datacenter-like server probably isn't designed for that, so it wouldn't necessarily work out.
I have an 8-core i5. I bought Intel (18 months ago) because I didn't want to deal with any AMD incompatibilities, especially since I wanted to also run it as a gaming machine using VT-d. If I were doing it again, I'd definitely go with Ryzen.
I'm not sure what you're getting at with the home server fans not working out. I've been using a Noctua on my Xeon home server without any issues, having reconfigured the fan sensors and control using IPMI. There's no problem making a quiet Xeon home server.
Replacing the fans in server systems is tricky: many "quiet" fans are quiet because they have a lower max RPM, which will freak the hell out of your management controller. Also, there's only so much optimization you can do to 40mm fans in a 1U chassis.
The fans in my 1U dell servers are fairly quiet when idle around 3600RPM. They make noise, you wouldn’t want to be sleeping or watching TV in the same room - but with the door to my office closed they can’t be heard in the hallway.
Ah, I understand what they were getting at. If you're using a server motherboard in a regular 1-4U case, this becomes a different issue entirely. I'm using a Supermicro X9 with dual Xeons in an upright Enthoo Pro case with its own fans. You're right about the management controller freaking out, because the IPMI threshold settings expect the Supermicro 3-fan assembly instead of different case fans. Fortunately, with some tweaking and ipmitool you can use fans with a much larger or smaller range of acceptable RPMs without your management controller thinking the fans have dropped into Lower Non-Recoverable territory.
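For anyone hitting the same thing, the threshold change is a one-liner per sensor (the sensor name and RPM values here are examples; list yours first with `ipmitool sensor`):

    # order of values is: lower non-recoverable, lower critical, lower non-critical
    ipmitool sensor thresh "FAN1" lower 100 200 300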
Right now my Noctuas are running at about 1000 RPM and keeping the Xeons around 40C (under load this will increase, with a minimal dB increase).
My solution was to take some spare Noctua low-noise adapters (basically inline resistors with fan connectors attached) and just drop the max fan speed. The fan controller can freak out all it wants; it can never go above about half RPMs, and it still generates a noticeable breeze through the hot-swap bays.
My CPU is a 7100 (ECC supported!) with a Noctua L9i, so I never have problems there either. Power draw is a little high at about 70W with 8 3.5" drives spinning, but most of that is the HDDs (rule of thumb is 5W per drive), and the alternative would be spinning them down, which isn't ideal.
The Intel ARK site says the TDP for that Xeon is 115W, and there are two of them. When you're running at full power (which could happen), you're looking at the CPUs alone drawing over 200W. Another pointer to high power consumption is the 750W PSU. All in all we're not looking at a particularly efficient machine (even if it's unfair to compare it to 2019 hardware).
Running a PSU at ~50% of its rated max is generally recommended. At 115W x 2, plus other draws (disks, mobo, blinkenlights), that's around 300W.
Again: peak draw, when on.
I'm not arguing that this system is particularly efficient, only that you don't want to add 750W for the PSU to the draw.
I've not specced out low-power systems myself. I doubt you could get 40 hyperthreads running way below this, though there are definitely some low-power systems which might have a total budget below 50W. Reddit's HomeLabPorn may have some more useful guidance: https://old.reddit.com/r/HomeLabPorn/
(Not my area of expertise, I've never really stayed current in HW. Though I'm aware power/thermal budget has been a major focus, both mobile and server, for the past decade or so.)
The 24-core/48-thread EPYCs that come with 128MB of L3 cache have a TDP between 155W and 180W. If you want a lower core count (because you have a more modern architecture), you could even go with the 16c/32t EPYC 7282 (64MB L3) that sits at 120W. All in all, you're looking at almost half the power draw (AMD measures it differently than Intel, so aboutish the same) for the same performance.
I specifically mention the L3 cache size because, while not substantial, a large cache can get you 10 to 20% improved performance due to less CPU stalling from cache misses. For comparison, the Xeon in question has 25MB L3, so we'd be looking at 50MB of cache split across two dies (so it doesn't quite work as a single block of 50MB cache).
You know, that raises a question - how expensive does power get, anyway? Last time I posted my home lab setup (couple of old R610s, Cisco small business switch, ASA 5510), I got a lot of pushback on how much power that consumes and how expensive it is to run. I did the math, and I was looking at maybe $20-$30/month depending on load. Which is not really a lot of money considering how much use I get out of it.
I'm definitely getting cheaper power than a lot of people at $.08/kWh, but it looks like the US average is only about $.12 - are there places where a couple hundred watts is going to be a significant financial burden on the average IT worker?
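For reference, the arithmetic behind that estimate (assuming a steady 300W draw):

    300 W x 720 h/month = 216 kWh/month
    216 kWh x $0.08/kWh = ~$17/month  (my rate)
    216 kWh x $0.12/kWh = ~$26/month  (US average)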
Some people like me will justify it due to being a huge hobby for them. It takes the place of what other people would spend going out often, traveling, streaming, etc...
I've got a rack that pulls ~600W 24/7, but that's across 4 machines, 3 UPSes, and >30 HDDs yielding >20TB across multiple zpools, plus >256GB of RAM. And I'm actually using most of that capacity, not just idling away.
I do hope to upgrade soon to power-sipping platforms like you mention, but currently I'm still on R710/R810 stuff (Westmere Xeons).
I wish I had something written up to share. But it's honestly a mix of things I love to play with. Several full blockchain nodes (especially uploading to light clients), bittorrent client (seeding tons), plex, home assistant, boinc, tor bridge, grafana, web and email servers, zoneminder CCTV, mandelbulber (clustered rendering of 3d fractals), four different Minecraft servers (some modded), ADS-B plane tracking, weather station and radiation monitoring, and a bunch more.
It's really just one big toy for me. I'm using Proxmox for high availability across over 30 VMs (I still want to play with containers soon).
EDIT: OPNSense firewall says I uploaded over 15TB in the last month. Fortunately Google fiber doesn't care!
It's believable (if only achievable with a lot of BIOS tweaking); I run an older Dell 1U server as my router. It's got an E3-1220 v2 (2012 era, 69-watt TDP), one 2.5" SSD, and 8GB of RAM, and after a bit of tweaking it sits at around 30 watts.
Same here. Those things get toasty. I accidentally placed a RPi3 (passively cooled but with a metal case) on top of my USG once and it got hot enough not just to throttle it, but also to cause HDMI artifacting.
For a full desktop-class chip, I think that's about as low as it can go. You could go lower with something like a NUC, but 15 watts is really good. 15 watts idle would cost something like $1.50 to $2 a month to run 24/7.
Ugh... that's crazy. Instead, look at the ASRock Rack X470D4U AM4 server motherboard (with IPMI and ECC) paired with something like a Ryzen 2700, which should give you near-complete silence, 25W idle power usage, and 8 modern cores boosting up to 4GHz, for well under $1K.
I got bitten by homelab fever a few years back and got myself a small server. I had such grand dreams for it that never materialized. Now it sits unplugged in the corner, depreciating itself :(
Stuff that I was planning to do:
* Managed VM platform (~"EC2")
* Centralized auth (FreeIPA)
* ZFS NAS (also possibly ceph) + backuping
* Container platform
* Your typical web/email stuff
* Monitoring/alerting/log management
* VPN endpoint (and other more advanced networking stuff)
* Probably something more I have already forgotten
I realized that building a private cloud actually takes serious effort, not just putting some Lego pieces together. There's also a bit of circularity that makes bootstrapping more difficult, especially on a single box.
I provisioned a cloud on my home PC using the free, open-source equivalents of the Red Hat Cloud Suite. It's not trivial, but I did it in about two weeks.
Red Hat Cloud Forms -> ManageIQ (to manage your virtual private clouds, hypervisors, etc)
Red Hat OpenStack -> DevStack (the cloud itself)
Red Hat OpenShift -> OKD (container orchestration)
Red Hat Virtualization -> oVirt (for VMs)
Red Hat Ansible Tower -> AWX (to automate everything, including deploying all the previous software listed)
If you plan on doing all of this from one machine, understand that you will need to enable nested virtualization, which requires some BIOS/OS configuration to make it work.
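For KVM on an Intel box, the OS side boils down to something like this (a sketch; you also need VT-x enabled in the BIOS, and the module reload only works with no VMs running):

    # check whether nested virtualization is currently enabled ("Y" or "1")
    cat /sys/module/kvm_intel/parameters/nested
    # enable it persistently, then reload the module
    echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
    sudo modprobe -r kvm_intel && sudo modprobe kvm_intel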
Try out YunoHost. It makes this kind of thing trivial: they've written all the config files and gotten single sign-on working with everything, so all you have to do is push a button to install services.
This part, though, is really easy nowadays. FreeNAS is idiot-friendly.
At some point I need to migrate my current FreeBSD/ZFS setup to something newer, and I'll probably use FreeNAS next round simply because it's so much easier to manage. (Yes, I can do it from the command line - but I do it so rarely that I always have to reload the whole ZFS command set into my working memory.)
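For reference, the handful of commands I always end up re-looking-up (the pool name "tank" is hypothetical):

    zpool status                        # health of all pools
    zpool scrub tank                    # verify checksums on the pool
    zfs list -t snapshot                # what snapshots exist
    zfs snapshot tank/data@pre-upgrade  # cheap point-in-time copy
    zfs send tank/data@pre-upgrade | ssh backuphost zfs recv backup/data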
I've been pretty successful in meeting my private cloud dreams. I use FreeNAS as the bare-metal OS and run Ubuntu VMs which host Docker containers (using NFS to keep the actual persistent data on the underlying FreeNAS box).
Docker/Docker Compose isn't quite Lego... but it's awfully close.
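The NFS part is the only non-obvious bit; in docker-compose it looks something like this (the address and export path are placeholders):

    volumes:
      appdata:
        driver: local
        driver_opts:
          type: nfs
          o: addr=192.168.1.10,rw,nfsvers=4
          device: ":/mnt/tank/appdata"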
$1700 seems like an awful lot to spend on this setup, even with 1yr warranty. I recently bought the following from eBay on a £500 budget:
ASRock Rack C602 mobo
2x Xeon E5-2650L v2 (20c/40t total)
8x8GB Samsung DDR3 ECC
generic EATX case with fans
2x CPU heatsinks
3x case fans
XFX fanless modular PSU
if the price and specs alone aren't compelling enough:
it runs idle at ~50W
it has a similar PassMark score of >15000
it has 4x GbE ports
it has 4x PCIe 3.0 slots (!)
I'd never heard of iLO, which other commenters mention as a selling point, but a quick search leads me to believe this is HP's take on IPMI, which this mobo has.
originally built as an HTPC server, I had 2 main criteria for my build: cool and quiet. hence opting for low-power processor versions, a 0dB PSU, and PWM case fans. if you don't require these criteria you can knock ~20% off the budget.
there was so much power going unused that I binned a few other devices (namely the crappy ISP-provided router and TV box) and made this build the heart of my home network. it is now my family's router, firewall, ad blocker, movie and TV server, game server, free cloud storage manager (synced to every household device), OS update cache, music streamer, torrent client/server, VM server, web server, database server, VPN client, proxy server, etc. - the list is virtually endless. these are all run simultaneously with ample resources left over for frequent workstation usage.
I should admit that I thoroughly researched every component's specifications and price, and as such it took me around 3 months of waiting to source them.
I also admit this use case and learning curve is not for everyone, but it was ultimately a rewarding experience for both my brain and wallet.
yes, I only really notice the single-core performance on OpenVPN streams which, despite the CPUs' AES-NI and offloading to the mobo's Intel-branded Ethernet chips, still cap out at ~80Mbps. it's fine for most internet needs and a few simultaneous video streams, but it really bottlenecks torrenting and shifting large files around the web on 1Gbps FTTH. I toy with the idea of buying a generic fanless Chinese 4-port 8th-gen i7 15W 'U'-variant box to handle most networking duties, rendering the behemoth purely on-demand, WoL, semi-idle, etc., which would cut electricity costs long term. but with such devices currently priced ~$300, and with Brexit threatening to bloat that, I am in no rush. plus it gives me more time to research / discover / await / implement a multi-core VPN solution
For lab setups, "last year's" enterprise hardware can be a good deal. A couple of notes on the article:
1) Setting up a RAID array isn't that difficult and it makes for more reliable storage (see the sketch after this list).
2) Using dual supplies actually lowers the fan noise, because a supply running at half power generates less heat than one running at full power. You can plug them both into the same outlet strip :-)
3) These things have a "lifetime", which is the point up to which the things that support them are easily found on the web. After that they become "anchors" without all that support. Very carefully and diligently download and archive all of the necessary software, drivers, manuals, and extra cables for the system, so that in another 5 years when it breaks you can reconstruct it successfully.
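On point 1, if you'd rather skip the proprietary controller entirely, Linux software RAID really is just a few commands. A minimal mirror sketch (device names are examples; double-check them before running anything destructive):

    # create a two-disk mirror and put a filesystem on it
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    mkfs.ext4 /dev/md0
    # persist the array config and watch the initial sync
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    cat /proc/mdstat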
1) That is true. My storage is all SSD and I'm deploying everything through Ansible, so I can always redeploy.
If I really need data backup, I can use one of the other SSDs or even the single spinning disk I put in as a backup target.
2) That's interesting. I don't think the noise comes from the PSU though; it's mainly the six case fans.
3) True point. I'm not too worried. The machine is fully supported by Linux (no drivers required) and the latest SPP is applied. I never expect to do hardware changes/upgrades down the road. And in five years, whoever's around then can deal with it ;-)
It's been a while since I was in a position where I regularly watched servers boot, but around 2015 I was doing a lot of 'boots on the ground' sysadmin and virtualization work. Servers do a lot of additional testing during boot-up: temperature sensors, RAID cache batteries, memory, and RAID arrays are all checked. Some of those checks can be disabled, but you don't really reboot production servers regularly, so you typically wouldn't want to. The extra 3 minutes of boot time is much easier to deal with than a bad host coming online.
On top of all that it's pretty standard (at least in the VMware world) to store the OS on an SD card. So the OS has to be read into memory and ESX is kinda slow to boot even if installed to a disk.
The reason it takes this long is that it does a bunch of self-tests, and then it has to load all the option ROMs for the components (NICs, HBAs, etc.), which often trigger messages like "X loaded, press CTRL+L to configure" that stay on screen for 5-10 seconds each.
Yes and no. There is a longer boot time because the POST process is a bit more intense; ECC RAM is a big part of that. Chances are this particular case is because the boot order needs to be reconfigured and it is hunting for PXE/network stuff before hitting a timeout.
If you yield control to some kind of Broadcom controller it'll do all kinds of shit before giving up and handing you back to boot a disk.
Unfortunately it is quite common for servers to boot more slowly than desktops if they have a lot of CPUs, a lot of memory, or just a very slow BIOS implementation that spends a lot of time probing and initializing everything.
Servers with hundreds of cores are a lot slower to boot than my laptop too.
This is exactly what I will be doing next month. Moving into a house with electric heating, so I'll leave my desktop and home server on to mine cryptocurrency most of the time. Since the electricity will be effectively "free", I can make some profit. And since I can also deduct electricity used from taxed income (in my jurisdiction, "expenses for the production of income") that should in itself reduce heating costs around 25-30%.
They make water heaters now that use ambient heat in a room to transfer that heat to the water — I believe they’re essentially heat pumps. Maybe you could put your water heater in a closet with the server and make use of some of the wasted energy.
Sometimes I wonder whether going the "homelab" route would have been easier/cheaper for me. I built my server a couple of months ago, from scratch.
However, being forced to use a proprietary tool (ssacli) and limited drive compatibility don't sound desirable. This seems like an odd limitation - is this normal with these types of projects/machines?
I didn't buy a single computer currently at home new - a few-year-old hardware is almost new and indistinguishable in performance. However, I would stay away from servers - workstations (e.g. the Z440) can be bought with almost the same hardware for similar cost, yet are quiet.
been eyeing up workstations for a gaming rig. Second-hand, 6-12 months' warranty, a ton of CPUs, a ton of memory, Nvidia Quadro cards aren't directly equivalent to the same generation of GTX but your game will run just fine ...
aaand we just ordered a Dell T3610 (from 2014) with four-core Xeon E5-1620v2 and Quadro K4200 4GB (also from 2014) and 32GB RAM. Just under £600. The loved one is looking forward to her new video production workstation and gaming rig.
I bought a 2013-era workstation (Dell Precision T3600) last year. It had a 6-core Xeon and 32GB of ECC RAM, and was $300 (I had my own hard drives, and got a decent video card).
It works great as a Linux workstation, but it's just so hot. My office is the warmest room in the house; I actually have to run a window air conditioner to make it comfortable (I work from home, so I use the office all day). I imagine it's probably one of the largest power consumers in the house (including the cost to cool the room with the window AC).
I look forward to tax season, I am going to replace it with a new AMD 3700x based system next year.
I picked up a Dell R820 with quad 8-core (E5-4650L) and 96GB (24x4GB) RAM for USD$700 in March this year. Because of the memory mezzanines, it’s only half full in this configuration. And if I find a decent deal on E5-46xx V2s at some point I could get up to 96 threads.
I even managed to get it to boot from a PCIe NVME drive with an internal USB stick running the Clover bootloader (yes, the Hackintosh bootloader) to bootstrap into Ubuntu. It makes for a great VM server.
Also helps that iDRAC 7 is aeons ahead of the horrible iDRAC 6 servers I was using before.
When I was looking at Monero mining ages ago, I built a rig using the Dell PowerEdge R810. It has 4 CPU sockets, in which I have Xeon E7-4860s (for mining that uses AES) that each have 10 cores/20 threads - 40 cores/80 threads total. I found cheap deals on both by scouring eBay. The entire setup cost me < $500.
However... it was noisy AF and consumed something like a few hundred watts - maybe as much as 500. Needless to say, I have not been running it. Does anyone have tips on where to get cheap power? ;)
> Does anyone have some tips on where to get cheap power?
Do you live in a location that offers real-time pricing? Where I live, you can opt in to such a scheme and then monitor an API from the power company, adjusting/scheduling your power usage to favor times when electricity is very inexpensive. Sometimes, you might even get paid to consume electricity:
> Negative Prices: With real-time hourly market prices, it is possible for the price of electricity to be negative for short periods of time. This typically occurs in the middle of the night and under certain circumstances when electricity supply is far greater than demand. In the market, some types of electricity generators cannot or prefer not to reduce electricity output for short periods of time when demand is insufficient, and as a result some generators may provide electricity to the market at prices below zero. Since Hourly Pricing participants pay the market price of electricity, they are actually being paid to use electricity during negative priced hours. Delivery charges still apply.
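If your utility exposes those prices over HTTP (the program quoted above sounds like ComEd's hourly pricing, which does), scheduling around them is a small script. A sketch, with the threshold and the systemd unit name made up:

    #!/usr/bin/env python3
    # Pause the miner whenever the current hourly price is above a threshold.
    import json, subprocess, time, urllib.request

    THRESHOLD_CENTS = 3.0
    URL = "https://hourlypricing.comed.com/api?type=currenthouraverage"

    while True:
        with urllib.request.urlopen(URL) as r:
            price = float(json.load(r)[0]["price"])         # cents per kWh
        action = "stop" if price > THRESHOLD_CENTS else "start"
        subprocess.run(["systemctl", action, "miner.service"])  # hypothetical unit
        time.sleep(300)                                     # re-check every 5 minutes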
Depends on where you live, but round here, I use both the GPU mining rig and the home server (Dell R630) to heat my basement. For >70% of the year I would be running electric heaters down there anyway, so it's "free" electricity.
I'm always wondering what people are doing in these homelabs that they need this type of hardware for. I worked for an MSP, and sometimes people would take home an old DL380 or something, but it always seemed like a waste.
With a 16GB NUC I can easily provision 6-7 small VMs without any issues, which is enough for general self-hosting and exam prep. With Docker you can run a simple instance of just about anything and leave it up all the time.
There are many examples on reddit.com/r/homelab - including even hungrier systems (or several of them in racks).
The short answer is yes: servers built to be servers are designed to get the heat out and keep internal temps down - so, noisy fans, and as much heat as you generate pushed outside of the box.
Of course, nothing stops you from replacing the fans with quieter ones (at the cost of price or air movement) or putting consumer hardware (which has different design goals that you might prefer in the home) in a rack-mount chassis.
I live in a dorm room and keep an R610 about three meters from my bed. You can use some magical IPMI commands to disable the internal fan feedback loop and replace it with a custom one (the defaults usually cool the server to around 30C, which is not really necessary). This makes the server quieter than the small fridge I also have in the room.
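On those Dells the "magical" commands are raw IPMI writes to the BMC; the usual incantation looks like this (widely shared for the iDRAC6 era, but use at your own risk and keep an eye on temps):

    # take fan control away from the automatic loop
    ipmitool raw 0x30 0x30 0x01 0x00
    # pin all fans to a fixed 20% duty cycle (0x14 = 20)
    ipmitool raw 0x30 0x30 0x02 0xff 0x14
    # hand control back to the automatic loop
    ipmitool raw 0x30 0x30 0x01 0x01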
The 100W power consumption definitely makes the room warmer. It is approximately like having another person present.
> My very subjective opinion is that at 50 dB the sound level is reasonable for a server like this, but it's definitely not quiet. I would not be able to work, relax or sleep with this server in the same room.
> Although this server is fairly quiet at idle, it does need its own dedicated room. When the door is closed, you won't hear it at idle, but under sustained load you will hear high-pitched fan noise even with the door closed.
I think for $1700 with the memory/storage capacity that's a great deal, but those benchmarks can be topped by the higher-tier consumer CPUs today.
The point about people being happy with slower CPU cores is kind of weird to bring up with a server. Most games don't push CPUs that hard; you usually need a really expensive GPU before you see noticeable gaming benefits from a faster processor.
Having done some CPU-critical work over the last few years (media processing/systems programming), my recent upgrade from a 4th-gen i7 to a Zen 2 CPU is paying off in spades. If I were building a server to do some of the batch processing stuff I'd like, I would definitely invest in a faster, cooler, more power-efficient machine. But that's just me. I don't think I could beat that price point, though.
Dunno... I have an R720 with similar specs (half the RAM, though) and it was only $400. I also idle around ~100W because I ripped all the 10k SAS drives out and put SSDs in.
It sits powered-off most of the time though because I haven't been able to put it to good use, yet.
For a while it was running my UniFi controller + Pi-hole... but you don't need the UniFi controller unless you are actively performing maintenance, and Pi-hole happily hums along on an RPi 3 that uses far less power.