Posted in Computers, Hardware, Server

New Server – Lenovo TS140

It’s been 5 years since I purchased the HP N40L, so it was time to upgrade to a newer, faster system. The N40L is a nice unit in a small package – although a bit slow for my needs.

Enter the Lenovo TS140 70A4003AUX. I found a deal on Newegg that I couldn’t refuse, so I immediately added it to the cart and checked out before I could change my mind.

There are a few models in the TS140 series. The one I chose has the following specs:

  • Intel Xeon E3-1226 v3 3.3GHz
  • 4GB 1600MHz RAM
  • Intel C226 chipset
  • No HDD
  • No OS

For the full spec list, search Google for Lenovo TS140 70A4003AUX.

Purpose

I purchased this server to replace my N40L, which was serving as my ESXi host. While the N40L did a great job, its 8GB RAM ceiling and slow CPU restricted it to running a few VMs with light loads.

I installed ESXi 6.5 on the TS140, and plan to use it as my main (and only) ESXi host for my various testing VMs, my firewall/router, and as a testing/development environment.

Inside the Case

Immediately upon receiving it, I opened up the case to take a look inside. The cables were tidy, and it looks very clean.

Storage

I was mostly interested in an internal USB port for ESXi, and was disappointed to learn that this server does not have one. No problem, I’ll just use an external one.

There are 2 trays for 3.5″ hard drives, which can also be used for 2.5″ drives like SSDs with the addition of an SSD bracket. This is what I did, and I installed a spare 64GB SSD that I had. I will probably upgrade this to a larger SSD if necessary, but I think this should suffice for now.

The server also has an optical drive bay, which can be removed and replaced with a 2.5″ SSD or HDD bay. I may go this route if I need the extra storage, but there’s no need for now, since all my storage is on a LUN on my Synology DS1512+.
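For reference, here’s roughly how a LUN like that gets attached from the ESXi shell. This is a minimal sketch – the adapter name vmhba64 and the address 192.168.1.50 are placeholders, so substitute whatever esxcli reports on your system:

# esxcli iscsi software set --enabled=true
# esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.1.50:3260
# esxcli storage core adapter rescan --adapter=vmhba64

After the rescan, the LUN should show up as a storage device and can be formatted as a VMFS datastore.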

Networking

There is only one onboard network port, and it is shared with Intel AMT for remote access and management of the server. I installed a 4-port Intel i350-T4 NIC that I had originally purchased for the N40L, so that takes care of the networking.
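To confirm that ESXi sees all four i350 ports plus the onboard port, a quick check from the ESXi shell:

# esxcli network nic list

Each port should appear as its own vmnic entry.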

RAM

There’s not much that can be done with only 4GB of RAM these days, especially if the server is going to be used as a virtualization host. This server takes DDR3 ECC RAM. Since I was replacing the N40L, I pulled its 8GB of RAM and installed it in the TS140, which now runs 12GB of RAM across 3 slots.

This is not the best arrangement, however. The server maxes out at 32GB of RAM across its 4 slots, so I’ll be replacing the RAM with 8GB modules in the near future.
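As a quick sanity check after swapping modules, ESXi reports the physical memory it detects from the shell:

# esxcli hardware memory get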

Power Supply

The power supply uses a proprietary connector instead of the standard ATX connectors we commonly see on PSUs and motherboards. This worries me a bit, but I knew it in advance, so I’m going to live with it. The PSU is a fixed 285-watt 80 Plus Bronze unit – enough for my needs.

Cooling

The server came with plenty of low-noise fans – all covered with grills. The system is pretty quiet and stays cool.

Power Consumption

This is another big reason I wanted to upgrade the older server rather than press my old PC into service. The TS140 sips power, idling at around 25 watts after ESXi has loaded. I never saw my power meter go above 50 watts. Keep in mind that this is with no mechanical drives and only 1 SSD – I’m sure consumption will be higher after adding a few mechanical drives.

Conclusion

I’m very happy with the Lenovo TS140 so far. It is a huge upgrade over my previous server, and it serves my needs very well for now.

Having a server-grade CPU and a motherboard with remote out-of-band management makes this server well worth it.

Posted in Computers, Fedora, Linux

VMware Tools Cannot Find Kernel-Headers on Fedora 18 x64

I recently installed Fedora 18 x64 on VMware Workstation 9 and was initially unable to complete the VMware Tools installation using the same method I’d used many times with prior Fedora installations. The installer kept telling me that it couldn’t find the kernel-headers folder. I had installed the development tools with Fedora, and they were all up to date, so I was a bit puzzled.

Prior to installing VMware Tools, you need to install the Fedora development tools if you don’t have them (if you’re unsure, just check); otherwise the installer will complain that it cannot find something and ask you to provide a path.

The development tools needed are: gcc, make, binutils, kernel-devel, kernel-headers

I also recommend updating the existing kernel to match the versions from kernel-devel and kernel-headers.

  1. Update your kernel and restart the VM after the installation: # yum update kernel
  2. Install the development tools and restart the VM when finished: # yum install gcc make binutils kernel-devel kernel-headers
  3. Run the ./vmware-install.pl script, accepting all the defaults (unless you know what you’re doing and want or need to change something)

If the script complains that it cannot find the location of the kernel headers – even though # rpm -qa shows they are installed – then you must copy a header file from one location to another. Find out which kernel you’re using with # uname -a. The current kernel on my system as of 01-19-2013 is 3.7.2-201.fc18.x86_64
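To verify that the versions line up before copying anything, something like the following works – the kernel-devel and kernel-headers versions should match the running kernel:

# rpm -qa | grep -E '^kernel-(devel|headers)'
# uname -r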

Run the following command to copy version.h into the location where the installer expects to find the header files.

# cp /usr/src/kernels/3.7.2-201.fc18.x86_64/include/generated/uapi/linux/version.h /lib/modules/3.7.2-201.fc18.x86_64/build/include/linux/

If the installer script is still running, you can type that path into the prompt that asks for the header files’ location. If it isn’t, run the script again, and it should now find the path automatically.
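For a version of the copy that doesn’t hard-code the kernel release, $(uname -r) can be substituted in – assuming the running kernel matches the installed kernel-devel package (reboot after a kernel update so they line up):

# cp /usr/src/kernels/$(uname -r)/include/generated/uapi/linux/version.h /lib/modules/$(uname -r)/build/include/linux/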

Thanks to user jgkirk from the VMware forums for this tip. The original post that helped me can be found here.

Posted in Computers, Windows 8

Windows 8 Consumer Preview on SSD

I just installed Windows 8 Consumer Preview Build 8250 on my system after playing with it on a virtual machine for a while. While configuring my system, looking at different settings and such, I noticed that the defrag screen recognizes my SSD as a solid state drive, and tells me that it needs optimization.

I thought defragmenting a solid state drive was a bad idea? Windows 7 disables this by default if it recognizes the drive as SSD.

I’m not sure how different the defrag program in Windows 8 is from previous versions, so I’m a little concerned about letting it optimize my SSD.
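One quick, non-destructive check is whether Windows has TRIM enabled for the drive – if it does, it has at least recognized the disk as an SSD. From an elevated command prompt:

> fsutil behavior query DisableDeleteNotify

A result of 0 means TRIM commands are being issued; 1 means they are disabled.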

The SSD is an OCZ Agility 3 60GB. See image below.

Has anyone else come across this, and if so, what have you done?

Posted in Computers, Hardware

HP P212 Smart Array Controller Performance Test on HP MicroServer N40L

Here are the results of a speed test after installing the HP P212 Smart Array controller in the HP MicroServer N40L. The OS is installed on the 250GB drive, which I moved to the optical (ODD) bay and connected to the motherboard via the internal SATA port. The BIOS was flashed with a modded version to unlock the SATA ports from their IDE configuration.

Network speeds top out at 125MB/s when copying or moving large files (4GB) such as ISOs or MKVs. Smaller files such as pictures, MP3s, and documents copy more slowly from a networked PC running Windows 7 x64 connected to an 8-port TrendNet Gigabit switch: speeds can drop to 30-40MB/s, but are mostly in the 60-70MB/s range, sometimes higher, depending on the number of files and their size.

The server is running Windows Server 2008 R2 Enterprise.

Additional specs:

  • 8GB ECC Kingston RAM
  • HP P212 Smart Array Controller with 256 MB and BBWC
  • 3 Samsung HD204UI disks in RAID 5 connected to the P212 controller, 512KB stripe size, cache set to 25% read / 75% write, with write-back enabled

The system is fully configured and running. It has only the file and printer sharing roles installed, plus indexing for certain folders (paused for the tests) and CrashPlan Desktop+ (put to sleep for the tests). The server was otherwise idle, showing no CPU or disk load during these tests.

The default settings for ATTO were used, only changing the drive to be tested.

Clicking on each image will open it up to its full size.

Test 1

Test 2

Test 3

I’m not sure why test 3 was the slowest, even after retrying a few times. There was no load on the server at all.

If I’m not mistaken – and from what I’ve read online – adding an extra disk to the RAID 5 array should improve speeds, but performance is pretty satisfactory for now, especially when using “green” disks that run at 5900 RPM.

If anyone out there has any recommendations for getting the best performance from this RAID controller, please share them with me. I’m not sure whether the current 512KB stripe size or the cache settings offer the best performance for my use case: a general file and multimedia server for a small home network. If anyone would like me to run additional ATTO tests with different settings, let me know. Also, please feel free to share any other mods or ideas you may have for the MicroServer.
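For anyone experimenting along the same lines: the cache ratio can be inspected and changed from within the OS using HP’s hpacucli utility. This is just a sketch – it assumes the utility is installed and that the P212 sits in slot 1, so check the slot number first; the 50/50 split is only an example value to test against the current 25/75:

> hpacucli ctrl all show
> hpacucli ctrl slot=1 show config detail
> hpacucli ctrl slot=1 modify cacheratio=50/50

The stripe size, on the other hand, is set when the logical drive is created, so changing it means a migration or a rebuild of the array.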

I plan to post additional details on the installation and setup of the card, with some more pictures.

Posted in Computers, Networking, Windows Server

My Comments on an HP MicroServer N40L

I’ve been playing with an HP MicroServer N40L for a few weeks now. Initially, I installed Ubuntu 11.10 and tried it out for a few days, but wiped it and installed Windows Server 2008 R2. I plan to use this machine as a file and media server, so I’m exploring a few options out there to give me the most flexibility and allow me to do what I want.

While reading a few blogs and forum posts, I’ve noticed most people – or at least a lot of them – are using WHS 2011 on their MicroServer. I don’t have a license for WHS 2011, but do have licenses for Server 2008 R2, so that’s what I’m using so far. In addition to using it as a file server, I would like to do some minor virtualization, mainly to separate the main OS from the media apps.

After upgrading the RAM from its initial 2GB to 4GB, I tested ESXi 5, which ran okay (a little slow using local storage), but not being able to use local disks as pass-through disks for the virtual machines was a big turn off, so I discarded that idea.

I’ve been running Hyper-V Server – initially on a full Windows installation, but then I decided to try it on Core, and so far it’s been great. One of the best things about Hyper-V for my needs is the ability to use a local disk as pass-through. This way, I can install the OS on a VHD and use pass-through disks for storage. I’ve been testing this for a few days, and I don’t see much (if any) performance hit when copying files from the network. When copying large files from a networked PC to the virtual machine’s pass-through disk, speeds range from 20MB/s to over 110MB/s, depending on the kind of file. Large files such as ISOs or MKVs are the fastest to copy, mostly at 90MB/s+.
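For anyone setting this up: a disk can only be attached to a VM as pass-through while it is offline on the host, so the first step is taking it offline with diskpart. A minimal sketch – disk 1 is a placeholder, pick the correct disk from the list:

> diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> offline disk

Once offline, the disk appears as a selectable physical hard disk in the VM’s settings.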

Not all is nice and pretty with Hyper-V on Server Core, as it initially requires a bit more work to get properly configured and running, but once it’s up, it’s a “set it and forget it” kind of thing. I may post my installation and configuration notes for Hyper-V on Server Core 2008 R2 from beginning to end in the near future.

Posted in Solaris, Unix

Shut Down Solaris 11 Express

Here’s a quick command to shut down – or power off – a Solaris machine. I ran this on Solaris 11 Express, but have also verified that it works on OpenSolaris and OpenIndiana.

From the terminal:

$ sudo shutdown -y -i5 -g0

This is what it means:

– sudo: Run the command with elevated privileges. Not needed if logged in as root.

– -y: Pre-confirm that you DO want to shut down the system, skipping the prompt.

– -i5: Init level 5 – power off the machine.

– -g0: (that’s g-zero, not “go”) A grace period of 0 seconds – shut down immediately. Increase the number to delay the shutdown by n seconds. I always use 0 seconds on my Solaris server.
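And when a delay is wanted, the grace period and an optional broadcast message can be combined – for example, a five-minute warning before powering off:

$ sudo shutdown -y -i5 -g300 "Powering off for maintenance in 5 minutes"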