4 Setting Up the Capture Computer Workstation

The computer shipped to me came with CentOS 6.5 installed, as I requested. The OS sits on a 32 GB solid-state boot device. Here is the kernel info.

# uname -r

This is the standard release kernel; I did not apply any updates.

Here are some of the steps I performed on this computer.

  1. Power up. It boots normally and I get a login prompt. There is no graphical interface or X server installed, so every configuration has to be done from the command line. I feel excited, as it will test how much of this Linux stuff I still remember; I am more of a FreeBSD person.

  2. Install the SolarFlare card into a PCI-E 3 x8 slot and reboot. After booting up, I can see the card in `dmesg`, and the Linux RPM driver for this card has also been installed, per my request to iXsystems.com.

# lsmod | grep sfc
    sfc_char               30522  1 onload
    sfc_resource          127811  2 onload,sfc_char
    sfc_affinity           11108  1 sfc_resource
    sfc                   390525  3 onload,sfc_resource,sfc_affinity
    i2c_algo_bit            5935  1 sfc
    mdio                    4769  1 sfc
    sfc_tune               23485  0 
    i2c_core               31084  3 sfc,i2c_algo_bit,i2c_i801
    ptp                     9614  2 sfc,e1000e

    You may see fewer lines than this; the listing above was taken after I installed the final driver. The initial RPM driver loads fewer modules.

  3. Install this computer into the existing FEI rack in the TF30 room. Luckily, there are still two slots, which allows me to put both new computers in. I should have ordered the side rack-mounting brackets; this is a rack-mountable 4U case. Fortunately, there are strong supports for them in the rack, and they fit in nicely.

  4. Configure one 1 Gb NIC interface and include it in our existing network of the EM suite. There are 3 Ethernet ports on the motherboard, 2 from the Chelsio card, and another 2 from the SolarFlare card. It took me some time to figure out which one is which. Once that is known, configuring it becomes easy.
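    One way to tell the ports apart is to map each interface name to the kernel driver behind it through sysfs; on this box, Chelsio ports report "cxgb4" and SolarFlare ports "sfc". This is only a sketch, assuming the standard /sys/class/net layout:

```shell
# List each network interface with the kernel driver behind it.
# Interfaces without a device link (e.g. lo) are skipped.
list_net_drivers() {
    for dev in "$1"/*; do
        [ -e "$dev/device/driver" ] || continue
        drv=$(basename "$(readlink -f "$dev/device/driver")")
        printf '%s -> %s\n' "$(basename "$dev")" "$drv"
    done
}

# On a live system:
# list_net_drivers /sys/class/net
```

    Another option is "ethtool -p ethN", which blinks the LED on the matching physical port.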

# cat /etc/sysconfig/network-scripts/ifcfg-eth2
    # ifup eth2
    # ifconfig eth2
    eth2      Link encap:Ethernet  HWaddr 0C:C4:7A:00:BD:2E  
              inet addr:  Bcast:  Mask:
              inet6 addr: fe80::ec4:7aff:fe00:bd2e/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:114424 errors:0 dropped:0 overruns:0 frame:0
              TX packets:26777 errors:0 dropped:0 overruns:0 carrier:0 
              collisions:0 txqueuelen:1000 
              RX bytes:90517336 (86.3 MiB)  TX bytes:3560653 (3.3 MiB)
              Interrupt:18 Memory:fbd00000-fbd20000 
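    The contents of the ifcfg file were not captured above; a minimal static configuration looks like the sketch below. All addresses here are hypothetical placeholders, not the actual values used (only the MAC address comes from the ifconfig output):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth2 -- sketch with placeholder values
DEVICE=eth2
HWADDR=0C:C4:7A:00:BD:2E
ONBOOT=yes
BOOTPROTO=none
# Placeholders below; substitute the real EM-suite subnet values.
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
```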
  5. Install some packages. Since we need to compile the SolarFlare driver later, kernel-devel is the most important one to install.

# yum install kernel-devel 

    I also installed some packages which are needed for a full kernel source installation. Some of them might not be necessary, as compilation of the SolarFlare driver doesn't really need the full kernel source, but here is what I also did:

# yum install rpm-build redhat-rpm-config asciidoc hmaccalc perl-ExtUtils-Embed xmlto
    # yum install audit-libs-devel binutils-devel elfutils-devel elfutils-libelf-devel
    # yum install newt-devel python-devel zlib-devel

    I also installed a couple of packages for my own convenience.

# yum -y install man wget 

    I like tmux a lot, so I want it too. But it is not yet in the CentOS yum repositories, so I have to install the rpm package directly.

# rpm -Uvh http://pkgs.repoforge.org/tmux/tmux-1.6-1.el6.rf.x86_64.rpm
  6. Now, download the stable OpenOnload driver and see if we can compile it.

# wget http://www.openonload.org/download/openonload-201310-u2.tgz
    # tar xvzf openonload-201310-u2.tgz
    # cd openonload-201310-u2/scripts
    # ./onload_install

    If that succeeds without error messages, we are ready to move to the next step. You might have to install some i686 libraries (glibc etc.) for 32-bit compatibility. I don't know if it is needed or not, but I included this anyway.

# yum -y install glibc-devel.i686
  7. Install the rackmount bracket from BlackBox into the rack.

  8. Now install the optical tap on the rackmount bracket. An image showing its mounting and the optical cable connections to the tap is below.

    Now let's connect the optical cables. First, disconnect the LC connector of the purple optical cable from the FEI Falcon Controller box and connect it to port A on the optical tap. Then use one LC optical cable to connect the Falcon Controller box to port B on the tap. Last, using another LC optical cable, one end goes to the "monitor" port on the tap; the other end is split into two single fibrils by taking them out of the side-release clip. These two fibrils go to the receiving port of each of the two ports on the SolarFlare card. The receiving port is the one near "Act" on the card bracket.

    As soon as these two fibrils are inserted into the receiving ports, the LEDs on the card light up immediately.

  9. If things are set up correctly, the Falcon camera should behave just as it always has. So take an image on the microscope to verify it.

  10. Now that everything looks good, we can bring up the 10GbE Chelsio card's Ethernet interface. There are two ports on the card; we only need one. Let's find the card and driver info from `dmesg`.

# dmesg | grep eth0 
            cxgb4 0000:02:00.4: eth0: Chelsio T420-CR rev 2 10GBASE-R SFP+ RNIC PCIe x8 5 GT/s MSI-X
            cxgb4 0000:02:00.4: eth0: S/N: PT38130118, E/C: 01234567890123

    We need to assign this card an IP address on a different subnet from the 1GbE one. After editing the config file, bring it up using "ifup eth0" or reboot.

# cat /etc/sysconfig/network-scripts/ifcfg-eth0 

    This connects directly to the FreeNAS file storage tank, not to the internet, so there is no need to define a route, DNS, etc. You can see it from "ifconfig eth0" as below:

# ifconfig eth0
    eth0      Link encap:Ethernet  HWaddr 00:07:43:15:36:90
              inet addr:  Bcast:  Mask:
              inet6 addr: fe80::207:43ff:fe15:3690/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:7956918 errors:0 dropped:0 overruns:0 frame:0
              TX packets:43729928 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:3940525722 (3.6 GiB)  TX bytes:65301388658 (60.8 GiB)

    You can also use "ethtool" to see its speed.

# ethtool eth0 
    Settings for eth0:
            Supported ports: [ FIBRE ]
            Supported link modes:   Not reported
            Supported pause frame use: No
            Supports auto-negotiation: No
            Advertised link modes:  Not reported
            Advertised pause frame use: No
            Advertised auto-negotiation: No
            Speed: 10000Mb/s
            Duplex: Full
            Port: Direct Attach Copper
            PHYAD: 0
            Transceiver: internal
            Auto-negotiation: off
            Supports Wake-on: bg
            Wake-on: d
            Current message level: 0x000000ff (255) 
                                             drv probe link timer ifdown ifup rx_err tx_err
            Link detected: yes
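    The "Current message level" value is a bitmask over the flag names printed after it, low bit first. A small sketch to decode it (flag order taken from the ethtool output above):

```shell
# Decode an ethtool message-level bitmask into netif_msg flag names.
decode_msglvl() {
    mask=$(( $1 ))
    bit=1
    out=""
    for name in drv probe link timer ifdown ifup rx_err tx_err; do
        [ $(( mask & bit )) -ne 0 ] && out="$out $name"
        bit=$(( bit << 1 ))
    done
    echo ${out# }
}

decode_msglvl 0x000000ff   # -> drv probe link timer ifdown ifup rx_err tx_err
```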
  11. It is time to prepare the disk array. The three SSD drives arrived with software RAID partitions already created using GPT, per my request, as I want to configure them as RAID0; but the RAID array itself has not been configured or mounted. We have to do that ourselves.

    When I talked to iXsystems.com and told them that I wanted the SSD drives configured as RAID0, they told me that RAID0 is not recommended. It is true that RAID0 offers no redundancy, but since images sit on these drives only for a short period before being moved to the larger data tank, I still want RAID0. Weighing performance and speed against that risk, I believe RAID0 is a good choice.

    Since I have to do this from the command line, I have to install the mdadm package first.

# yum install mdadm  

    From dmesg, I know the three devices corresponding to the three SSD drives are sda, sdb, and sdc. So I first created a file /etc/mdadm.conf containing the following lines:

DEVICE /dev/sd[abc]
    ARRAY /dev/md0 devices=/dev/sda,/dev/sdb,/dev/sdc

    Prior to the creation or usage of any RAID devices, the /proc/mdstat file shows no active RAID devices:

  Personalities :
      read_ahead not set
      Event: 0
      unused devices: none

    Now, use the above configuration and the "mdadm" command to create the RAID0 array:

# mdadm -C /dev/md0 --level=raid0 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc 
        Continue creating array? yes
        mdadm: array /dev/md0 started.

    If my SSDs had not already been prepared with RAID partitions, the above command would have done the job. Since I am creating the new RAID device on disks with pre-existing RAID partitions, I found that I have to "assemble" them instead.

# mdadm --assemble /dev/md0 /dev/sda /dev/sdb /dev/sdc

    And I also need to make new filesystem on the new raid0 array:

# mkfs -t ext4 /dev/md0

    And then I can mount it.

# mkdir /mnt/SSD_RAID  
    # mount -t ext4 /dev/md0 /mnt/SSD_RAID 

    /etc/fstab needs to be updated with the following line so that the RAID array is mounted at boot.

 /dev/md0                /mnt/SSD_RAID           ext4    defaults        1 1

    Now we can see the raid status from /proc/mdstat

# cat /proc/mdstat 
       Personalities : [raid0]
       md0 : active raid0 sdc[2] sda[0] sdb[1]
             1465159680 blocks super 1.2 512k chunks
             unused devices: none

    And from command "mdadm --detail /dev/md0":

# mdadm --detail /dev/md0 
            Version : 1.2
      Creation Time : Sun Feb 23 13:07:23 2014
         Raid Level : raid0
         Array Size : 1465159680 (1397.29 GiB 1500.32 GB)
       Raid Devices : 3
      Total Devices : 3
        Persistence : Superblock is persistent
        Update Time : Sun Feb 23 13:07:23 2014
              State : clean 
     Active Devices : 3
    Working Devices : 3
     Failed Devices : 0
      Spare Devices : 0
         Chunk Size : 512K
               Name : localhost.localdomain:0  (local to host localhost.localdomain)
               UUID : 555833b1:d9b30cd7:f7e1d2b0:933f23e4
             Events : 0
        Number   Major   Minor   RaidDevice State
           0       8        0        0      active sync   /dev/sda
           1       8       16        1      active sync   /dev/sdb
           2       8       32        2      active sync   /dev/sdc
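    As a sanity check, the reported block count can be converted by hand; mdadm counts 1 KiB blocks, so 1465159680 blocks should reproduce the "1397.29 GiB 1500.32 GB" figures above:

```shell
# Convert mdadm's 1 KiB block count into binary (GiB) and decimal (GB) units.
blocks=1465159680
awk -v b="$blocks" 'BEGIN {
    printf "%.2f GiB  %.2f GB\n", b / 1048576, b * 1024 / 1e9
}'
# -> 1397.29 GiB  1500.32 GB
```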
  12. Now mount the falcon folder on the Tecnai computer so the capture program can access the Falcon gain reference file and sensor defect file. First, on the Tecnai computer, turn on sharing for the folder C:\tecnai\data\falcon and give it the share name Falcon. Then, install the CIFS packages on the CentOS Linux computer.

# yum -y install cifs-utils

    And we mount it:

# mkdir /mnt/falcon
    # mount -t cifs // -o username=emuser,password="password-for-emuser" /mnt/falcon

    Here, emuser is a dummy userid that exists on the Tecnai computer.

    We also need to update /etc/fstab to include it

//   /mnt/falcon  cifs  username=emuser,password=passwd-for-emuser 0 0  
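    Putting the plaintext password in /etc/fstab means any user who can read that file sees it. mount.cifs also accepts a credentials= option pointing at a root-only file; a sketch (the server path below is a placeholder for the elided address above):

```shell
# /etc/cifs-credentials  (chown root:root, chmod 600), containing:
#   username=emuser
#   password=passwd-for-emuser
#
# fstab entry using it; replace //<tecnai-address>/Falcon with the real path:
//<tecnai-address>/Falcon  /mnt/falcon  cifs  credentials=/etc/cifs-credentials  0 0
```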
  13. We need to prepare the memory allocation for the capture program. Currently, our system has the following three lines in /etc/sysctl.conf.

vm.nr_hugepages = 6144
    kernel.shmmax = 38927335424
    kernel.shmall = 38927335424

    After modifying /etc/sysctl.conf, the computer needs to be rebooted for the changes to take effect.
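    For reference, these values can be converted by hand. On x86_64 a hugepage is 2 MiB by default (an assumption; check Hugepagesize in /proc/meminfo), so 6144 hugepages reserve 12 GiB, and kernel.shmmax is in bytes, about 36.25 GiB here. Note that kernel.shmall is counted in pages rather than bytes, so reusing the shmmax value there is generous but harmless:

```shell
# Convert the sysctl values into human-readable sizes.
awk 'BEGIN {
    printf "hugepage pool: %d MiB\n", 6144 * 2            # 2 MiB per hugepage
    printf "shmmax: %.2f GiB\n", 38927335424 / 1073741824
}'
# -> hugepage pool: 12288 MiB
# -> shmmax: 36.25 GiB
```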