Building a Dell Server For Home Use
Building a Dell R720XD with ZFS, RAID, Docker, and Ubuntu
I’m a software engineer by trade, but professionally I’ve been a jack of all trades in adjacent realms. That has included time as a system administrator, network architect, VoIP and general digital communications engineer, and other jobs along the way.
For years, I’ve been using an old desktop computer running 24/7 at home that I updated over time to accomplish many goals. It’s done well running as:
- a fileserver to store backups of our photographs and videos
- a web server to host web sites as I build them at home
- a Plex media streamer to be able to access our digital movies and TV shows from anywhere
- a Mythbuntu TV host and DVR to manage our over-the-air TV
- a project box for compiling and testing code and deployments for work
- and many other things over the years …
This desktop server had a 4-disk RAID 5 volume, ran the OS off an SSD (backed up to the RAID weekly), and had 16GB of memory. Although this system has served its purpose, it has slowly become a mess of configurations. So for me, it was time for a rebuild and upgrade.
With the help of some friends, Reddit’s /r/homelab, a very helpful guru over at Art of Server, and a lot of Google, I delved into the world of used enterprise servers repurposed for home use. The computers themselves aren’t that expensive with the largest part of the cost being the hard drives, and those I would have had to buy to expand my storage anyway.
As with many things, I went into this with some basic needs and a desire to learn. In this article, I’ll walk you through what I did right and what I did wrong. Please keep in mind that this is more of a journey of discovery than an expert in homelab conversions giving advice. I’ve put together something that is relatively novel outside the world of select IT professionals (though quite common in /r/homelab), and I hope it will continue to be a great platform to learn and expand my knowledge.
Photo by Author … Where I started
How many boxes? How big?
I debated between two main setups … either get two servers to separate out the Network Attached Storage (NAS is a service that lets you host and share files among many computers on your network) component of my setup, or get a single server to do everything. The upside to separation is that upgrades and configurations in each server don’t affect the other, and it’s easier to keep your data safer if data is on one machine and other services are on another machine. The downsides are higher power usage, noise, and cost.
My plan this time around was to separate all the different projects into their own Docker containers. That would let me keep the configuration of each project very isolated and give me a way to manage projects very simply. It would also make it possible for me to connect a lot of storage for personal media and backups.
Sidebar on backups:
A home server isn’t a backup “solution” by itself, but it can be a big step in the right direction. The storage media on the server will have redundancy built into it by using the right ZFS or RAID setup, which helps protect against failures of any single disk. If you also have the media you’re backing up on other machines, you’ve spread your risk fairly well. If you’re trying to preserve very important data, however, you should always also have an offline and/or off-site backup of that data.
For snapshot backups of other computers in our home, of media files that we can recreate if needed (copy back to the filesystem from the original DVDs, for example), or even of photos that we also store in the cloud, it’s a great part of the total backup solution.
I started investigating servers and costs and the best bang for the buck for me was the Dell “20” generation (420, 520, 620, 720, etc). These were new enough that I could run a slightly more modern CPU and RAM, but not so new that they cost thousands of dollars. I could buy most bare-bones servers used for a few hundred dollars. These are servers that come with the capability of over a dozen computer cores, hundreds of gigabytes of memory, tens of terabytes of storage, and high speed networking across multiple network cards. Overkill for me? Sure, but should also last me a very long time and not cost much more than my existing solution.
I started looking at how I wanted to build the storage into the system. I was introduced to ZFS a while ago but didn’t have a way to convert at the time, so I never tried it out. This was an opportunity, and the more I learned about ZFS, the more I wanted to use it. However, just like with a software RAID setup, Ubuntu (and other OSs) can sometimes have problems being installed directly onto, and booting from, a software-managed disk volume. They’re getting better, but I wanted a solution I wasn’t going to have to “figure out again” if a major update came out that changed the rules. I really wanted a hardware RAID setup for the OS and ZFS for the storage drives, and that’s tricky because you can’t have a single RAID controller card set up to do both at the same time. You need two cards and two places to connect disks. Unfortunately, on most of these boxes, you also can’t separate the drives out … they’re all connected to a single controller.
That left me with only a few options. Either I don’t use RAID on my OS and boot drive, I use RAID rather than ZFS for all my drives, or I have multiple RAID controller cards and independent drive “planes” … something many servers don’t offer.
Dell R720XD
Enter the Dell R720XD. That “XD”? Extra Disks. It has a rear panel where you can put two more 2.5” drives and while it’s not an “officially supported” configuration, you can disconnect that rear panel from the front drive plane and have them controlled by a second disk controller.
This server can have twelve 3.5” drives up front (or 24 2.5” drives) and two 2.5” drives in the rear, which is a huge amount of storage for a home server. A bit ridiculous for a home server to be honest, but this is as much about room to grow as it is about learning new things. It also had all the power of the Dell R720, which was one of the workhorse servers I was considering.
With the storage capacity and power of the R720XD, I’d have both a large number of drive bays and a great computational resource. It was more than powerful enough to do everything I could want, and it felt a bit wasteful to use it just as a NAS, so I went with it as a single-server solution instead of trying to build out two servers.
I specced it out with one eye towards price and the other towards power consumption. My hope was to make it as thermally efficient as possible so it would be OK sitting in a basement stairwell (or my office) without the fans loudly spinning at full speed all the time.
For me that was this configuration:
- Large Form Factor front drives (12 3.5” drives)
- 2 Intel Xeon E5-2650v2: 8 cores each, 2.6GHz, 95W
- 8x16GB PC3L-12800 (128GB)
- PERC H310 Mono Mini flashed to IT mode for ZFS
- H310 PCIe for OS RAID 1
- Rear flex bay with two 600GB 10k 2.5” SAS drives
- Intel X540-T2, dual 10Gb, dual 1Gb NICs
- iDRAC 7 Express
- Dual 1100W Platinum redundant PSU
- And the bezel because it’s pretty
The H310s were picked after a lot of googling, some videos from Art of Server, and speaking with Mr. Art of Server directly when ordering parts from his eBay store. In order to use ZFS, I needed a PERC card in JBOD (Just a Bunch Of Disks) mode, which generally means flashing the card to IT mode … a mildly risky procedure to do yourself, but cheap to buy pre-flashed. For the RAID setup, I could use almost any quality disk controller. For my disk speeds and goals, the H310 was a good fit for both use cases, saving time and energy without sacrificing performance.
The choices of RAM and CPU were also for a balance of power usage, performance, and cost. I opted for 3.5” drives up front for the ease of being able to get them at any big box store. I got 7200 rpm drives for this, though I suspect 5400 rpm with ZFS would have been fine. If you want the speed, by all means go for the 2.5” SAS drives. I didn’t believe I needed that. All in all, it’s still a very strong system. More than adequate for most home use and my hope was that it wouldn’t run so hot during normal use that the fans would be loud all the time.
I also have to give a major shout-out to Orange Computers. I ordered most of my parts and some used hard drives from them. They did a great job finding components, discussing what might and might not be covered, and they covered returns and replacements on some of the used disks when I later had a problem (the replacements have been going strong for months now). I felt much more comfortable ordering from a company with a good reputation and a decent warranty, even if it cost a bit more than piecing things together from eBay.
Initial Setup: Physical Configuration and BIOS
Photo by Author … the pieces
The plan was in place. Time to assemble! With enterprise servers, this step is intentionally very easy. I definitely encourage you to read the instructions (note the instructions are also visible inside the case cover), but almost everything in the computer attaches or detaches with clips or levers. For what I was doing, I don’t even recall needing a screwdriver.
The instructions say to first power it on for about 15 minutes so the power supplies can configure themselves. I set it up with no hard drives and no extra RAID card, so I just turned it on to watch what it would do. Sure enough, it took about 15 minutes, cycled through several tests, and screamed like an airplane taking off for most of that time. I actually left the room after a few minutes and closed the door.
It was abusively loud. Definitely made me nervous that I’d made a poor decision. But by then I had no choice but to continue and see where it led.
“If you are going through hell, keep going.”
― Winston S. Churchill
Next up was installing the PCIe card and the SAS drives.
Photo by Author … PCIe RAID card in upper left over the rear backplane
This was the best slot I found for the card. All it took was a short cable (in this case, an SFF-8087 to SFF-8087 Mini SAS cable) connecting it to the rear backplane. There was some online discussion about also disconnecting a backplane sensor cable, but either that had already been done for me (perhaps from my discussions with Orange Computers) or I didn’t need it in order to get all my blinky lights the right colors.
With the rear SAS drives in place and the PCIe RAID controller connected to the rear backplane, I booted up again and configured the rear bay as RAID 1. In the device settings, the card showed up as “Slot6 DELL PERC H310”. Very simple to set up.
Next, I powered down and installed the IT-mode Mini Mono H310 and the twelve 3.5” drives. When I booted up again, though, I didn’t see any of the drives, even though the system saw the H310 as an Integrated LSI SAS2 MPT Controller. I tried tweaking a few things, but what finally worked was disabling the integrated RAID controller (in the BIOS Integrated Devices settings), rebooting, re-enabling it, and rebooting again. After that, all 12 disks showed up in the boot sequence and under the card’s details.
Quiet Please! Making the R720XD Audibly Tolerable
The R720XD is loud by default. I knew it would take some modification to make it quieter. If nothing else worked, I was prepared to modify the fans or replace them with lower-RPM “silent” models. That’s a bit hacky, even for me, since it would effectively disable several thermal features on the server, and while you don’t usually need those features, when you do, you really need them.
A great writeup I found was on the Unraid forums.
I first went through the process of updating the firmware on the box. It’s easy to do via the BIOS, and I figured I could always try downgrading to a specific version later if that turned out to be the right answer. In general, though, up-to-date firmware is a good thing for many reasons.
The next steps I took were:
- Go to the iDRAC thermal settings and set the profile to Performance Per Watt with a 50 degrees C target (there’s no lower option). Note: plug an Ethernet cable into the back-right port; that’s the iDRAC port.
- Under Power, set the power supplies to redundant, and enable Hot Spare and Power Factor Correction so only one PSU is actively used at a time. This did quiet it down a little.
- I disabled the Lifecycle Controller gathering information on every reboot. Changing a single disk could make that sit and spin for twenty minutes or longer, which I didn’t like (or need). It sped up reboot times, and I’m not expecting any new firmware updates from Dell for this model, so I’m going to leave it this way for now.
At this point, it still seemed disappointingly loud. Looking in iDRAC, the fans were spinning at about 50%. But when I booted from the USB drive into Ubuntu 20.04 (I’d been power-cycling the system for hours and hadn’t yet tested the OS at all), the fans quieted noticeably! Maybe only twice as loud as my previous desktop server. Success!
In this mode, it’s running at 266 watts (idle), PSU2 is drawing 0 amps, CPU1 and CPU2 are sitting at 61C (142F) and 49C (120F) respectively, and the exhaust temp is 37C (98F). And that’s not in a super cold room: inlet temps were 25C (77F) on a summer day, with the server blasting hot air into the room all morning.
There are a lot of posts out there with “solutions” to quiet down one of these machines. They range from fairly simple (go to a firmware version that allows lower fan speeds), to slightly more extreme (replace all the fans with quieter, slower-running fans), to very extreme (hack the temperature sensors themselves). All of them have potential drawbacks, and I’d rather keep things as up to date and standard as possible. As it stands, I rarely see the fans go past 30% with my workloads, and they usually stay around 22%. The cheap decibel meter on my phone reads 35 dB from ten feet away and 45 dB from one foot away. Not bad at all.
Installing Ubuntu Server and Decreasing Reboot Times
Why Ubuntu Server, you ask? Why not? It’s one of the most supported and widely used Linux distros, it works well with ZFS, gets regular updates, and has a relatively minimal footprint.
Installing Ubuntu Server on the rear RAID drives was easy. Make a bootable USB drive and plug it in (I used the front panel USB just to try it and it worked great). Go to the boot menu during boot (F11), use the UEFI menu, select the removable drive, and select Install Ubuntu. It’s going to check a lot of hardware and this seems to take a good bit longer than it takes on a regular desktop. Make sure to install to the RAID volume instead of one of the front panel drives. During the install, my system exhaust temperature remained about 40 degrees C, well below the 50C that the fans are trying to keep it under. Fans hovered between 22% and 27% speeds the entire time.
One note: the install appeared to get stuck on “Install Curtin Hook”, which will keep the install spinner going forever. If you press Tab and go to “View Full Log”, you will see “All Upgrades Installed” when it’s actually finished, and you can then reboot by hitting Enter on “Reboot Now”. Make sure you’re set to boot from UEFI instead of BIOS in the BIOS boot settings and you’ll be off and running!
I also like to have a graphical environment available (mostly for gparted emergencies, though also sometimes to lazily mount an external drive), so I added Xubuntu while I set up my vimrc and bashrc the way I like them. Going through a few reboots pointed me to my next curiosity “problem” … it takes about 12 minutes to reboot. A lot of that is Dell’s hardware checks, but several minutes are Ubuntu sitting at “wait for network to be configured” or “waiting for network configuration”.
The solution here seemed to be to set the three(!) ethernet devices I wasn’t using to be optional. Go into /etc/netplan/00-installer-config.yaml and add “optional: yes” to everything but the first ethernet port.
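For reference, the relevant part of that file ended up looking roughly like this. The interface names below (eno1 through eno4) are placeholders; yours will depend on your hardware:

network:
  version: 2
  ethernets:
    eno1:
      dhcp4: true
    eno2:
      dhcp4: true
      optional: yes
    eno3:
      dhcp4: true
      optional: yes
    eno4:
      dhcp4: true
      optional: yes

Running “sudo netplan apply” afterwards picks up the change without a reboot.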
This final fix got my reboot time (time from issuance of reboot command to when I could use the OS again) down to about 6 minutes. Still not fast by desktop standards, but way faster than the 12 to 15 minute boot times I was seeing earlier.
ZFS: Setup, Failed Drive Replacement, and Reporting
On to installing ZFS. The software comes as a package, which is great because it should mean it stays up to date along with changes to the OS.
sudo apt install zfsutils-linux
After this, find out what disks I have:
fdisk -l
smartctl -a /dev/sdX
I wrote down a mapping of serial numbers, bays, and disk identifiers from smartctl and fdisk. This will let me build vdevs based on the ID of the drives rather than the device names, which can change on boot when other items are added. For this box, you can add one drive at a time without rebooting (hot swappable!), check fdisk -l for the disk ID and the device name (it should be the next letter in the alphabet each time), then use smartctl to get the serial number. Or you can just write down the serial numbers for each drive and check fdisk for each drive you add to match them up.
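If you want the whole mapping at a glance, listing the by-id symlinks shows which /dev/sdX device each drive ID currently points to:

ls -l /dev/disk/by-id/ | grep -v part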
I made a table of bays so when I got an error, I could easily go find the drive that had a problem and pull it without much extra inspection. For example:
0 1 2 3
4 5 6 7
8 9 10 11
I added one drive at a time and then filled out information for each of them (for example):
Bay 6: 4TB
/dev/disk/by-id/scsi-SATA_ST4000NM0033-9ZM_S1Z2BN8J
Disk identifier: FACF252C-D595-C745-A990-730AE390BD6A
Model Family: Seagate Constellation ES.3
Serial Number: S1Z2BN8J
You get most of that information from:
sudo smartctl -a /dev/sd<a-z>
To get the disk identifier you can run:
ls /dev/disk/by-id/ | grep <serialnumber>
These drives needed a partition table, which is easy enough to add from the command line as well.
parted /dev/sdd
(parted) mklabel gpt
(parted) quit
Clear any existing partitions so ZFS gets a clean disk:
sfdisk --delete /dev/sdd
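Doing that for a dozen disks by hand gets tedious. Both tools can run non-interactively, so a small loop does the same job. This is just a sketch using the ID pattern of my example drive above … double-check that the glob only matches disks you actually intend to wipe:

for d in /dev/disk/by-id/scsi-SATA_ST4000NM0033-*; do
    [[ "$d" == *-part* ]] && continue    # skip the partition symlinks
    sudo sfdisk --delete "$d"            # clear any existing partitions
    sudo parted -s "$d" mklabel gpt      # write a fresh GPT label
done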
Once GPT is added to each disk and it’s otherwise empty of partitions, it’s time to create the ZFS pool. I stuck with the common “tank” name for fun while experimenting. I had a few leftover 3TB drives plus my twelve 4TB server drives, bought in three different batches from different vendors … a “security” measure so all your drives don’t reach end of life at the same time because they’re identical. I wanted to test the ability to grow a vdev, so I initially loaded up with vdevs of 3,3,4,4,4,4 and 4,4,4,4,4,4 TB drives.
zpool create tank raidz2 /dev/disk/by-id/[id0] /dev/disk/by-id/[id1] <etc> -f
zpool add tank raidz2 /dev/disk/by-id/[id6] /dev/disk/by-id/[id7] <etc> -f
Next, I blindly tuned it a little based on several articles and writeups I read:
zfs set mountpoint=/data tank
zfs set xattr=sa tank
zfs set acltype=posixacl tank
zfs set atime=off tank
zfs set relatime=off tank
zfs set compression=lz4 tank
zpool set autoexpand=on tank
First I tested that ZFS was correctly remounted on reboot, and it automagically was for me. This is a common configuration problem, though, so if you’re having trouble, it’s worth checking how your distro imports and mounts ZFS pools at boot.
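On Ubuntu, importing and mounting pools at boot is handled by systemd units that ship with zfsutils-linux, so a quick sanity check (assuming the standard packaging) looks like:

systemctl status zfs-import-cache.service zfs-mount.service zfs.target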
Checking the sizes:
zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank  21.9G  24.6T  21.9G  /data

zpool list
NAME   SIZE  ALLOC  FREE
tank  38.2T  32.9G  38.2T
The command “zfs list” shows the amount of usable space. This covers my two oddly sized vdevs: 4*3.65TiB + 4*2.75TiB, minus some overhead. The odd numbers instead of 4TB and 3TB come from hard disk manufacturers advertising capacity in terabytes (multiples of 1000) while the operating system reports tebibytes (multiples of 1024).
The command “zpool list” shows the total raw space: 6*2.75TiB + 6*3.65TiB, minus a little overhead.
Most of the time, we’ll care about “zfs list”.
Testing Read/Write Speed
First, as a quick “real world” test, I copied a 7.7GB file from the ZFS pool to the RAID array, which took 10.2 seconds (roughly 750MB/s). Copying it back from the RAID array to the ZFS pool took 8.9 seconds (roughly 865MB/s). That’s pretty quick and exercises a read from one device and a write to the other at the same time. Neato, but time to isolate and test!
I performed tests with dd and with fio to get read/write speeds for single access as well as multiple access.
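The exact invocations varied by block size and thread count, but the general shape was something like this. Paths and sizes here are illustrative, and note that with compression=lz4 a stream of zeros from /dev/zero compresses away to almost nothing, so fio’s default data pattern gives more honest write numbers:

# single-stream sequential write, 1M blocks
dd if=/dev/zero of=/data/ddtest bs=1M count=8192 conv=fdatasync status=progress

# 16 parallel sequential writers, 1M blocks
fio --name=writetest --directory=/data --rw=write --bs=1M --size=2G --numjobs=16 --group_reporting --end_fsync=1

# 16 parallel sequential readers, 4k blocks
fio --name=readtest --directory=/data --rw=read --bs=4k --size=2G --numjobs=16 --group_reporting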
My ZFS array reported write speeds up to 1.4GB/s for single-stream writes with medium to large block sizes (64k to 1M), fairly slow speeds of about 50MB/s for very small block sizes (1k to 4k), and solid speeds of 550MB/s to 670MB/s when up to 16 threads were writing at the same time, regardless of block size (1k to 1M).
For reads, the slowest I saw was single-stream reads of small blocks (1k to 4k) at 280MB/s. With larger blocks or multi-threaded reads, I got between 2.1GB/s and 5GB/s!
Those numbers worked for me.
During all of this, the fans remained at about 25%, only occasionally hitting a threshold that brought them up to 30%. Still fairly quiet overall.
Next, I tested from a Linux laptop with an SSD, using FTP and SFTP. I got 100.4MB/s and 98.8MB/s transfer speeds (in both directions). On my 1Gbps switch, that’s approximately 800Mbps each way (each byte being 8 bits), and given the overhead of TCP and FTP/SFTP, I was clearly getting close to the maximum transfer speed. Excellent!
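If you want to take the disks and transfer protocols out of the picture entirely, a raw throughput check with something like iperf3 (installed on both machines) is a quick cross-check:

# on the server
iperf3 -s
# on the laptop
iperf3 -c <server-ip>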
Degrading a VDEV
Now it’s time to test both degrading my vdev and growing my vdev to its full capacity. I want to try these things before I get all the data on there to make sure I’ve got everything set up correctly. What a disaster if I find out five years from now that I made a simple mistake and I can’t easily grow my vdevs or that somehow I’ve managed to make it difficult to replace a broken drive!
First, I check that everything is healthy with:
zpool status
It tells me everything is online and there are no reported errors.
I’m going to go ahead and start with raidz2-0 drive Z1F2P2TB. I know from my notes from when I installed everything that it’s the drive in bay 0. It was labeled /dev/sdb, but again, I’m using the drive ID to uniquely identify it.
zpool offline tank scsi-SATA_ST3000DM001-1CH1_Z1F2P2TB
Then I check status again and find the state for both tank and raidz2-0 is DEGRADED (with raidz2-1 listed as ONLINE). It happily tells me that the pool is continuing to function in a degraded state. Perfect.
Growing a VDEV
Now I need to find the new disk. I can read the serial number as Z1Z78C3E, so I’ll check both /dev/sdb (once I swap it in) and /dev/disk/by-id/ with fdisk and smartctl. Once found, I’ll update my notes on which drive is in bay 0 and in raidz2-0. I’ll also use parted once again to delete partitions and/or make a GPT partition table so I have a clean disk with a disk identifier to add to the vdev.
Checking my zpool status after physically swapping the drive, I still see the old disk listed as offline and no mention of the new disk. Time to replace it!
zpool replace tank scsi-SATA_ST3000DM001-1CH1_Z1F2P2TB /dev/disk/by-id/scsi-SATA_ST4000NM0033-9ZM_Z1Z78C3E
Now when checking the status, it’s still degraded, but says it is waiting to resilver the drive. In my case, within seconds, I saw the status get updated to ONLINE.
In this case it only had to resilver 2.21GB. That’s about 1/10th of the total data currently stored in the zpool, since I had almost no data on the drives yet. Compare that to rebuilding a RAID array (where you generally have to rebuild the entire disk regardless of how much data is on it) and you start to see one of the benefits of ZFS.
I followed the same process to replace the other 3TB drive so that all the drives were 4TB. Once again, the disk resilvered within a few seconds.
Now let’s see if autoexpand worked its magic:
zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank  21.9G  28.1T  21.9G  /data

zpool list
NAME   SIZE  ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
tank  43.7T  32.9G  43.6T  -        -         0%    0%   1.00x  ONLINE  -
Yes! I’m up to 43.7TiB of total space (approximately 3.65*12) and 28.1TiB of usable space (approximately 3.65*8, minus some overhead). Autoexpand worked!
I got to replace multiple disks a second time … after loading up 12TB of data … because three of the four disks I ordered from Amazon failed and one of the eight disks I ordered from Orange Computers failed. Thankfully, only one disk completely failed in the first vdev, so it was fine. The second vdev had two disks fail and a third go into a degraded state. That was scary, and it prompted me to set up email warnings so that any time a disk has issues in the future I’ll be notified. Nothing brought the failed disks back. Luckily, with one disk degraded and two failed, raidz2 kept my data intact, so I didn’t have to start over from my backups.
A note on used drives: I bought these knowing it was a risk. The Amazon seller offered no warranty, but given my failure tolerance, I was hoping to save some money over the first months to year of operation. That bit me. The disk that failed from Orange Computers was under warranty, and they replaced it for me.
I replaced just one disk on the second vdev and let it resilver, which took about 3 hours (for 12.5TB of data). Then I replaced the others one at a time (a replace command kicks off the resilver, and rather than resilver multiple disks at once, I let each one finish before starting the next).
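If you want to keep an eye on a resilver without babysitting the console, something like this will refresh the status for you every minute:

watch -n 60 zpool status tank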
Hot swapping drives is awesome.
Setting up ZED (ZFS Event Daemon) to only email important warnings
With that scare, I knew I needed to get scans and email warnings set up. This turned out to be relatively easy with only a couple gotchas to learn from.
The JBOD disks in my ZFS array needed a single line of configuration in /etc/smartd.conf:
DEVICESCAN -H -l error -l selftest -f -s (S/../../6/03) -m <myemail> -M exec /usr/share/smartmontools/smartd-runner
This monitors the disks for SMART issues and runs a short self-test once a week (the (S/../../6/03) expression schedules it for Saturdays at 3 a.m.). On any warning, it executes my smartd-runner script (I’ll get to that in a second).
The RAID disks took me longer to figure out. It turns out that on Ubuntu you can address the disks behind the controller with the megaraid device type and set up SMART tests like this:
/dev/sda -d megaraid,0 -H -l error -l selftest -f -s (S/../../6/03) -m <myemail> -M exec /usr/share/smartmontools/smartd-runner
/dev/sda -d megaraid,1 -H -l error -l selftest -f -s (S/../../6/03) -m <myemail> -M exec /usr/share/smartmontools/smartd-runner
The DEVICESCAN line covers all the ZFS drives and the megaraid lines cover the RAID drives. Now smartd is scanning and emailing me, and ZFS is scrubbing and emailing me … I can lose either of the rear SAS RAID drives, or up to two drives in either of the six-disk vdevs, and I’ll get an email and keep running.
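For the ZFS side of those emails, the knobs live in /etc/zfs/zed.d/zed.rc (at least with Ubuntu’s zfsutils-linux packaging). The relevant settings look something like this, with a placeholder address:

ZED_EMAIL_ADDR="me@example.com"    # where ZED sends pool event notifications
ZED_NOTIFY_INTERVAL_SECS=3600      # minimum seconds between repeat notifications
ZED_NOTIFY_VERBOSE=1               # also mail on healthy scrub completion, not just errors

Ubuntu’s zfsutils-linux also ships a monthly scrub cron job, so regular scrubs happen without any extra setup.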
This has been tested over the past months both with pulling drives and with drives having issues ranging from failing to simple SMART errors.
In fact, some of the errors I got meant a drive was slowly failing but hadn’t failed yet. Given my fault tolerance (and the non-critical nature of my data), I want to let those drives go all the way to failure before replacing them, as long as only one drive is failing at a time. But I don’t need daily emails about the same drive. After two weeks of messages about a single drive with a SMART issue, I added the smartd-runner script that only sends a message if it differs from the last message sent for that drive. I’m including that script here:
https://gist.github.com/amclauth/f5749d024eb4145ea13e3cce436743ad
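If you’d rather not follow the link, the core idea fits in a few lines of bash. This is only a sketch of the approach (hash the message, compare it to the last one sent for that device), not the exact script in the gist; it relies on the SMARTD_DEVICE, SMARTD_ADDRESS, and SMARTD_FULLMESSAGE environment variables that smartd exports to whatever it runs via -M exec:

#!/bin/bash
set -eu
STATE_DIR=/var/lib/smartd-lastmsg                  # arbitrary spot to remember the last message per device
mkdir -p "$STATE_DIR"
LAST="$STATE_DIR/$(basename "$SMARTD_DEVICE").last"
NEW_HASH=$(printf '%s' "$SMARTD_FULLMESSAGE" | md5sum | cut -d' ' -f1)
# only send mail if this message differs from the previous one for this device
if [ ! -f "$LAST" ] || [ "$NEW_HASH" != "$(cat "$LAST")" ]; then
    printf '%s\n' "$SMARTD_FULLMESSAGE" | mail -s "SMART warning on $SMARTD_DEVICE" "$SMARTD_ADDRESS"
    printf '%s\n' "$NEW_HASH" > "$LAST"
fi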
The rest is history
After that, I set up Docker, NGINX, and started building containers of all the services I wanted to run, all set up behind NGINX port forwarding. It’s been brilliant. Each service and project is isolated and upgrading any one project or project’s dependencies has no effect on the others.
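The pattern for each service is roughly the same, so here’s a minimal sketch with hypothetical names and ports rather than one of my actual services: a container that only listens on localhost, and an NGINX server block that proxies a hostname to it.

docker-compose.yml:

services:
  photos:
    image: nginx:alpine                 # stand-in image for any containerized project
    volumes:
      - /data/photos:/usr/share/nginx/html:ro
    ports:
      - "127.0.0.1:8081:80"             # only reachable from the host; NGINX fronts it
    restart: unless-stopped

And the matching NGINX site config:

server {
    listen 80;
    server_name photos.example.com;     # hypothetical hostname
    location / {
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}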
I have tons of storage that’s failure tolerant, and I continue to get practice with ZFS, containers, and web services. The box has plenty of power for serious work when I’m programming, processing data, or compiling.
The only real thing left to do is finish my basement so I can move the box out of my office and into a dedicated “networking” closet (under the stairs)!