Adding Swift to DevStack

If you find yourself needing to enable Swift on a DevStack that is already deployed and running, and you do not want to redo the whole thing, I can relate: I found myself in this exact spot tonight. By default DevStack does not enable Swift, but enabling it is trivial. The tricky part is making it work afterward without rebuilding everything.

Getting Swift enabled is pretty simple. First, open the local.conf file in your /home/stack/devstack directory using your favorite editor. Next add the following line to it:
enable_service s-proxy s-object s-container s-account
And save the file. If you have not built your DevStack yet, you can now simply proceed with ./stack.sh. If you have already built your DevStack and it is up and running, run ./unstack.sh first, and when that completes run ./stack.sh. During the install it will prompt you for a hash/passphrase; I just reused the password from my initial DevStack install. Once this process finishes you should have Swift added to your DevStack. Happy Stacking!
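If you want to script the whole thing, the change boils down to appending one line (plus, optionally, a SWIFT_HASH value, which if I recall correctly is the localrc variable DevStack reads so stack.sh never has to prompt you). A rough sketch; it writes to a local.conf in the current directory for demo purposes, so point CONF at /home/stack/devstack/local.conf to do it for real:

```shell
# Sketch: script the local.conf change. CONF points at a file in the
# current directory for demo purposes; for real use, set it to
# /home/stack/devstack/local.conf as described above.
CONF="${CONF:-./local.conf}"
touch "$CONF"
{
  echo 'enable_service s-proxy s-object s-container s-account'
  # Optional: pre-seed the hash so stack.sh does not prompt for one.
  echo "SWIFT_HASH=$(od -An -N8 -tx8 /dev/urandom | tr -d ' ')"
} >> "$CONF"
grep -i swift "$CONF"
```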

pyvmomi now available in Fedora 19, 20, and 21

Back in 2005 I got into making RPMs for Fedora, and by 2006 I became a sponsored packager. It was a hobby I really enjoyed, but back in 2009 or 2010 I just got too busy with work and had to pass the baton. About a month and a half ago I was looking for issues on the pyvmomi project that I could help close when I found this one asking for help making an RPM for Fedora. I got excited all over again about making RPMs so I made one and headed down the path to become sponsored again to package for Fedora. I created this bug report and became re-sponsored, and after a lengthy process I am happy to report that pyvmomi is now available in Fedora 19, 20, and 21. It is still in the testing phases for EPEL 6 and 7 but in another week or so should be available there as well. This puts us one step closer to being able to port the current OpenStack driver for vSphere to pyvmomi!

Virtualbox + Vagrant + VCSA + VCSIM To Automate Your vCenter Test Environment

Just The Facts..

Before we even get started I have to say this: what follows in this article is 100% unsupported in any way, shape, or form. You should only consider doing this if you are in a situation like mine, where you need to do some form of automated testing. Some of these things may even violate the EULA of one or more of the technologies used. Last but not least: in no way is this something my employer (current or former) endorses or had any role in. This mess is all mine.

What Problem Are You Trying To Solve Here?

I am a software developer. I work full time developing applications that interface directly with various VMware (and at times OpenStack) technologies. While working on the VMware side I almost always need a vCenter with a complete inventory to test my code against. Thankfully, people like William Lam have written wonderful blog posts that got me most of what I needed without my having to do much more than modify a script. This post takes what I have learned from doing all of that manually and wraps it up in Vagrant, using Virtualbox and a couple of command line tools from VMware.

Press The Any Key?

If you have ever tried running the VCSA on Virtualbox you will have seen this message for sure:

    The profile does not allow you to run the products on this system.
    Proceeding to run this installation will leave you in an unsupported state and might impact your compliance requirements.

You will also know how annoying it is to have to press any key to continue during subsequent boots after the install. These issues make it very difficult to create automated test environments that you can spin up and down as needed. Thankfully the VCSA is a Linux box, so if you are crafty you can do some things to bypass these checks before they ever happen.

What Are All Those Tools in Your Toolbox?

First, to keep the messages above from ever showing in the first place, we have to modify system files on the VMDK of the system disk. The first tool I'll be using is one supplied by VMware: vmware-mount, part of the vSphere Virtual Disk Development Kit (vddk from here out). For some reason it was removed from the 5.5 kit, but in my testing so far the 5.1 kit has worked fine with my 5.5 VCSA setup. I won't be discussing how to set up the vddk or its prerequisites; please use the documentation provided by VMware for that. Next I'll be using Virtualbox, Vagrant, and then some scripts from William Lam to customize the VCSA and configure VCSIM. In my environment I am using Debian Linux Wheezy 7.5, Virtualbox 4.3.12, Vagrant 1.6.3, and VMware-vCenter-Server-Appliance-5.5.0.10000-1624811. You should download these before you get started. The VCSA only needs to be downloaded, but Virtualbox, Vagrant, and the vddk will need to be installed and configured. This process may work on other versions, but YMMV.

Everything Has A Place, And There Is A Place For Everything

Before we start making any changes, I found this whole process works best if you import the VCSA OVF into Virtualbox first. To do that, start Virtualbox, click on the File menu, and select Import appliance. Now navigate to where you downloaded the VCSA files from VMware, select the OVF file, and follow the instructions to complete the process. You do not need to power on (nor should you power on) the newly created VM. If you skip this step you can end up struggling with your mounted VMDK going read-only as soon as you try to change any files on its file system further along in this process. With that out of the way, let's move on to the changes that have to be made.

Change, The Only Constant

With all the prereqs out of the way, let's get this party started. We start by using vmware-mount to mount the VCSA system VMDK so we can make some changes to bypass those pesky messages I discussed at the beginning of this article. I store my Virtualbox files in ~/Virtualbox Vms/vmware/vcsa:

    cd ~/"Virtualbox Vms/vmware/vcsa"
    mkdir /tmp/vcsa
    sudo vmware-mount VMware-vCenter-Server-Appliance-5.5.0.10000-1624811-system.vmdk 3 /tmp/vcsa

Now, with the disk mounted, we can make changes to the files we are interested in. For Vagrant to work happily we need to tweak the sshd_config file, and to stop the pesky messages we get on boot we need to modify a file called boot.compliance. Let's start with sshd_config. In this file we want to enable password auth and turn off DNS lookups:

    vim /tmp/vcsa/etc/ssh/sshd_config

In this file, look for the following settings and edit them so they read:

    PasswordAuthentication yes
    UseDNS no
    #AllowGroups shellaccess wheel

A note on PasswordAuthentication: it is optional and honestly not used in this process (we will be using keys; I only enable it for some other things not discussed in this post). I found this value listed twice: the first occurrence was already set to yes, then near the bottom of the file it was set again to no. I simply removed the second entry, leaving only the first. Leaving it set to no does not actually keep ssh password auth from working; the sshd_config documentation can be confusing here, because you can set it to no and still get prompted for a username and password. That prompt normally comes from the keyboard-interactive authentication method (which also uses a password, which is where people get confused); there is a really boring RFC about it if you need some help getting to sleep some time. UseDNS no tells the sshd server not to do a reverse lookup of connecting clients; since this will be running on my own local host under Virtualbox, there is no need for the DNS lookup. The last line removes the requirement to be in the shellaccess or wheel group. Next, let's silence the warnings about being unsupported and disable having to press the any key. For that we edit the boot.compliance file.
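Since the whole point here is automation, those vim edits can also be done non-interactively with sed. A rough sketch, run against a throwaway sample file; the stock contents of the VCSA's sshd_config are an assumption on my part, so check yours and point CFG at /tmp/vcsa/etc/ssh/sshd_config when doing it for real:

```shell
# Sketch: the three sshd_config edits done non-interactively.
# CFG points at a sample copy here; use /tmp/vcsa/etc/ssh/sshd_config for real.
CFG="${CFG:-./sshd_config.sample}"
cat > "$CFG" <<'EOF'
PasswordAuthentication no
UseDNS yes
AllowGroups shellaccess wheel
EOF
sed -i \
  -e 's/^PasswordAuthentication.*/PasswordAuthentication yes/' \
  -e 's/^UseDNS.*/UseDNS no/' \
  -e 's/^AllowGroups/#AllowGroups/' \
  "$CFG"
cat "$CFG"
```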

    vim /tmp/vcsa/etc/init.d/boot.compliance

In this file look for the following lines and make them look like this:

    MSG=`/usr/bin/isCompliant -q`
    CODE=0

Here I add the -q flag, which tells the isCompliant script (which is GPL code and looks to be part of the base SUSE install) to run in quiet mode and not output any of the messages it finds when something isn't compliant. Next I hard code CODE=0, because I know isCompliant will return a non-zero status, causing yet another message to pop up on screen: the one asking us to "press any key to continue". Setting this value to 0 stops that message from showing and lets the boot process continue without human intervention.
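This edit can be scripted the same way. The sketch below assumes the stock lines look like MSG=`/usr/bin/isCompliant` and CODE=$? (check your appliance's copy first), and runs against a sample file; point CFG at /tmp/vcsa/etc/init.d/boot.compliance for the real thing:

```shell
# Sketch: add -q and hard-code CODE=0 in boot.compliance non-interactively.
# The two original lines below are assumptions about the stock file.
CFG="${CFG:-./boot.compliance.sample}"
cat > "$CFG" <<'EOF'
MSG=`/usr/bin/isCompliant`
CODE=$?
EOF
sed -i \
  -e 's#/usr/bin/isCompliant#/usr/bin/isCompliant -q#' \
  -e 's/^CODE=.*/CODE=0/' \
  "$CFG"
cat "$CFG"
```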

Next I need to add the vagrant user. Normally I might use something like useradd, but here I will just make the changes those tools make, by hand. To do that I'll be editing the passwd and shadow files directly.

    vim /tmp/vcsa/etc/passwd

Here I will add the following line to the bottom of the file:

    vagrant:x:1519:100::/home/vagrant:/bin/bash

For an explanation of what each colon-separated field means, see the passwd(5) Linux man page. Next I edit the shadow file.

    vim /tmp/vcsa/etc/shadow

I add the following line to the file and save it. (FYI: the shadow file is read-only even for root, so I have to force the save.)

    vagrant:$6$gxhfnuHm$FMUpGdh2kca12joVHFFV33bhJSvxE5xQWAn3RzkQmP1X/KeckBW2ODcYhe2a2y5D3ROUTtMDgu0Djzlpz4E7B/:16271:1:60:7:::

Here I am setting the password for the vagrant user to vagrant. For a full explanation of what each colon-separated field means, see the shadow(5) Linux man page. Now I need to add a home directory and copy in the basic files that come with one. Those files are typically located in /etc/skel, and the VCSA is no exception.
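By the way, you do not have to reuse my hash; you can generate your own SHA-512 crypt hash for the shadow entry. A quick sketch using openssl (version 1.1.1 or newer supports the -6 option; the fixed salt is only there to make the output reproducible):

```shell
# Generate a SHA-512 crypt hash for the password "vagrant".
# The fixed salt makes the output reproducible; drop -salt for a random one.
openssl passwd -6 -salt examplesalt vagrant
```

Paste the resulting $6$... string into the second field of the shadow line above.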

    mkdir -p /tmp/vcsa/home/vagrant/.ssh
    cp /tmp/vcsa/etc/skel/.{b*,e*,i*,pro*,vim*} /tmp/vcsa/home/vagrant
    chown -R 1519:100 /tmp/vcsa/home/vagrant
    chmod 700 /tmp/vcsa/home/vagrant/.ssh

Now I need to add the vagrant insecure ssh key to the authorized_keys file:

    wget https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub -O /tmp/vcsa/home/vagrant/.ssh/authorized_keys
    chown 1519:100 /tmp/vcsa/home/vagrant/.ssh/authorized_keys
    chmod 600 /tmp/vcsa/home/vagrant/.ssh/authorized_keys

Finally, I need to add the vagrant user to the sudoers file and give them full access without a password:

    visudo -f /tmp/vcsa/etc/sudoers
    vagrant ALL=(ALL) NOPASSWD: ALL

The Light At The End Of The Tunnel

We are getting close to being finished; only a few steps remain. The next few took me some time, not because they are hard, but because finding the packages needed for them was not easy. Almost everything I am doing here is to make Vagrant and Virtualbox usable. Having done all this, I'm still not 100% sure it was worth the effort, but I digress. Let's move on and finish this.

The last few steps are:

  1. Unmount the system vmdk we mounted earlier.
  2. Start the VCSA (but do not do anything other than power it on).
  3. Install the packages needed to build kernel modules.
  4. Install the VBoxLinuxAdditions into the VCSA.
  5. Power off the VCSA and export it into Vagrant.

First I unmount the vmdk using vmware-mount -x, then I power on my VCSA. Once it is booted I log in as root using its default password, "vmware". Next, using zypper, I install make and gcc:

    zypper --gpg-auto-import-keys ar http://download.opensuse.org/distribution/11.4/repo/oss/ 11.4
    zypper --gpg-auto-import-keys ar http://download.opensuse.org/update/11.4/ 11.4-updates
    zypper install make gcc

When I run this command the system tells me there is a problem and lets me pick an option to get past it; I pick option 1, which downgrades binutils. Once that install finishes it warns about needing to restart, but I ignore that because I'm only going to install a couple more things and then power it off anyway. Here comes the hard part (at least it was for me): locating and installing kernel-source-3.0.101-0.5.1 and kernel-default-devel-3.0.101-0.5.1. Finding those packages was the most difficult thing in this whole process. You can't install them with zypper, at least not from the repos above. I honestly found these packages on an FTP server in Russia; seriously, it was not fun tracking them down. Once located, download them and install them using:

    rpm -Uvh kernel-*

Now you are finally ready to build the Virtualbox Guest Additions. This allows you to use port forwarding in Vagrant, as well as shared folders and many other Vagrant features. To install the guest additions you need to copy VBoxLinuxAdditions.run up to the VCSA. The file lives on the VBoxGuestAdditions.iso, which needs to be mounted somewhere so you can get the file you need off of it. I did this by doing:

    mkdir /tmp/guest_add
    mount -o loop /usr/share/virtualbox/VBoxGuestAdditions.iso /tmp/guest_add/

Next I used scp to place the file on the VCSA. Then simply make the file executable and run it:

    chmod +x VBoxLinuxAdditions.run
    ./VBoxLinuxAdditions.run --nox11

Once this process finishes you are done-ish.

Wrapping it all up

All that is left now is to power off the VCSA, and export the image into vagrant. I do that by running:

    vagrant package --base VMware_vCenter_Server_Appliance

Where VMware_vCenter_Server_Appliance is the name I gave the image in Virtualbox when I imported it. On my laptop, a Dell with an i7, 32G of RAM, and an SSD, this process took 27 minutes. Once it completes you need to import the result into Vagrant. I did that by doing this:

    vagrant box add vcsa-5.5 package.box
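For reference, a minimal Vagrantfile for this box might look something like the sketch below. This is my guess at the shape of such a file, not a copy of my actual Vagrant files; the box name matches the vagrant box add above, and the forwarded ports match the URLs at the end of this post:

```ruby
# Hedged sketch of a minimal Vagrantfile for the vcsa-5.5 box.
# Port numbers are taken from the URLs at the end of this post.
Vagrant.configure("2") do |config|
  config.vm.box = "vcsa-5.5"
  config.ssh.username = "vagrant"
  config.vm.network "forwarded_port", guest: 443,  host: 8443
  config.vm.network "forwarded_port", guest: 9443, host: 9443
  config.vm.network "forwarded_port", guest: 5480, host: 5480
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 8192  # the VCSA wants a lot of RAM; adjust to taste
  end
end
```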

This process never took more than 5 minutes. If you use my Vagrant files you can now almost vagrant up. I say almost because I hit a snag where I couldn't get the vCenter configuration script to run properly without a goofy hack. Once you have my files, you can vagrant up like this and it will work:

     vagrant up && ssh -p 2222 -o "StrictHostKeyChecking=no" -i vagrant root@127.0.0.1 < provision/configure_vcsa.sh

That's it. Now just wait 20-50 minutes for it to completely finish, and you can log in to your VCSA. The vCenter is at https://localhost:8443/ (user administrator@vsphere.local, password: password; this is also where to hit the API), the web client is at https://localhost:9443/ (same credentials), and the config page is at https://localhost:5480/ (user root, password vmware).

The End

Thanks for going through this; I know it was long. It took me a while to get it all working, and I tried to keep this post to only the important bits of the process. Again, I must say you are 100% unsupported when doing this stuff. First, VCSIM itself is not supported, and making all these crazy changes to the VCSA goes well beyond the realm of no support. If you have questions or comments about this process please feel free to reach out; I will be glad to help where I can, but don't read that as an offer to support you if you do this. It is a very advanced topic that should only be attempted if you are comfortable with this kind of work.

How to install Python 2.7 on CentOS 6.x

I needed to install Python 2.7 on a CentOS server. I did some searching and found some very broken scripts on GitHub. I picked one of them and started hacking until I got it working. You can now find it here: centos_python_env_setup. To use it, grab the raw version with wget, make the script executable with chmod, then run it as root. I tested this script about 25 times using my Rackspace Cloud server with the CentOS 6.4 option. Please let me know if you have any issues running it.

Switching Falconstor IPStor to use 1 to 1 mapping for LUN assignment

Switching your Falconstor IPStor server to use 1 to 1 mapping from any of the other available mapping options, like All to All or All to 1, can be an annoying task. But if you have VMware ESX or ESXi with VMotion you can do this with zero downtime, and I'll show you how. These steps assume you are NOT in a failover cluster with Falconstor.

In my example I am using two ESX 4.1 hosts, Monster01 and Monster02. Each of my hosts is sized to run all of our virtual machines on its own, so moving all the VMs to a single host like this will not negatively impact any VM, and users will never know this is going on.

First I need to VMotion all the machines to a single host, so I move all the virtual machines running on Monster01 to Monster02. Once this is done I put Monster01 in maintenance mode, then shut it down. This step is only needed if you are SAN booting your ESX/i host and need to change the LUN the ESX/i OS runs on.

Once it is shut down comes the hard part, at least if no one ever documented which WWPN is used for what. If that is the case for you, like it was for me, you can figure it out by looking at your physical adapters and finding all the adapters in target mode. Write down the WWPN of each one; you might have a bunch. Thankfully I only had two adapters in target mode that were online. Yay, a 50/50 chance of getting it right on the first try! I took LUN 0, which has my ESX 4.1 install on it, and did a 1 to 1 map. On the ESX host this was simple, since we only had one adapter plugged in and configured. Next I had to take a shot in the dark, because our cables are a mess and nobody documented which WWPN went where. I got it wrong the first time: once I assigned the LUN and turned the host back on, the box failed to boot. I switched to my other choice and bam, it booted!

Next, to quickly switch the other 40 LUNs, I went to the Falconstor management console, went into the SAN Clients, and selected Monster01. I right clicked on each LUN and selected properties. From here you can switch from your current mapping to 1 to 1 using a select box. You will get a warning about data transmission stopping when you do this; that is fine, since there are no running virtual machines on Monster01 (remember, it is still in maintenance mode). Next you select the two WWPNs needed for the initiator (the ESX host) and the target (the Falconstor server). Once you have completed this for all LUNs, you can bring the ESX host out of maintenance mode, VMotion the machines from Monster02 back to Monster01, and then repeat this process on Monster02.

Add a new disk to a volume group for use with lvm

Today I built a new server for us to start testing out splunk. When I built it I used LVM, and I accepted the default layout from the Debian 6 install with split partitions, so I ended up with separate partitions for /, /var, /home, and /tmp. I always like splitting partitions up like this, and I ALWAYS use LVM. Once the new server was booted and ready for me to begin the splunk install, I did so using their deb package. That put everything for splunk into /opt, which did not have its own partition, so I needed to add a new disk and create a partition that could be mounted at /opt. Since I used LVM this was a trivial task. Here are the steps to take if you need to do what I did.

First we need to stop splunk and move it out of /opt for a moment.
/opt/splunk/bin/splunk stop
Once you see this info you should be ready to move splunk:

Stopping splunkweb…
Stopping splunkd…
Shutting down. Please wait, as this may take a few minutes.
.
Stopping splunk helpers…

Done.

Next we will move the splunk install to /tmp while we do the disk work. If you do not move it, you will notice once the new /opt is mounted that splunk appears to be gone (it is really just hidden under the mount point), so it is best to just move it for now.
mv /opt/splunk /tmp
This will move the splunk install to /tmp.
Next, I am going to assume your new disk is already visible to the system. If it is not, make it so, because we need to partition it with fdisk:
fdisk /dev/sdb
sdb was the disk I added to my system. Here I create a new primary partition and set its type to 8e, which is Linux LVM. I want to use the whole disk, but you may want to allocate only part of it. Once you have written the changes and fdisk is done, it is time to begin the LVM steps.
The first step is to create a physical volume on the new partition using pvcreate:
pvcreate /dev/sdb1
You should get a message like this:

Physical volume "/dev/sdb1" successfully created

If so, we can move on to the next step. In my setup I want to add this new disk to the existing volume group, which is named splunk. To do that I will use vgextend:
vgextend splunk /dev/sdb1
If the command was successful you should be greeted with the following message:

Volume group "splunk" successfully extended

You can verify this using vgdisplay
vgdisplay splunk
You should see output similar to the following, with the Free PE and Total PE values now larger. In my case the total increased by 10G, and the Free line shows I now have 10G free.

--- Volume group ---
VG Name splunk
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 8
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 6
Open LV 6
Max PV 0
Cur PV 2
Act PV 2
VG Size 29.75 GiB
PE Size 4.00 MiB
Total PE 7617
Alloc PE / Size 5058 / 19.76 GiB
Free PE / Size 2559 / 10.00 GiB
VG UUID DtBgbf-pDPY-JLFo-cjv2-ydUC-nyiH-lzqzys
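As a sanity check, the numbers in that output hang together: with a 4.00 MiB PE size, the 2559 free extents are exactly the 10 GiB that was just added.

```shell
# Check the vgdisplay arithmetic: free extents x extent size, in GiB.
awk 'BEGIN { printf "%.2f GiB\n", 2559 * 4 / 1024 }'   # prints "10.00 GiB"
```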

Now I can create a new logical volume. I am only going to use 8 of the 10G I added, and name it “opt” since that will be its mount point:
lvcreate -L 8G -n opt splunk
If that command was successful you will see a message like mine:

Logical volume "opt" created

You can look at the info about the new volume using lvdisplay
lvdisplay
Here is the opt section of my output

--- Logical volume ---
LV Name /dev/splunk/opt
VG Name splunk
LV UUID 5fAiFp-fpmd-dlUa-mgHl-nNcy-kX9G-RLHDaC
LV Write Access read/write
LV Status available
# open 0
LV Size 8.00 GiB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:6

Now all that's left is adding a filesystem to this volume and mounting it. I am using ext3, so I will use the following command:
mke2fs -j /dev/mapper/splunk-opt
After the normal output from mke2fs I need to amend my fstab to auto mount my new filesystem for me on boot.
$EDITOR /etc/fstab
I will add the following to that file:

/dev/mapper/splunk-opt /opt ext3 defaults 0 2

Save this file, then mount the new filesystem. You should also move anything else that is currently in /opt to /tmp while you do this, or it will also appear to be gone. To mount the new filesystem simply:
mount -a
This will mount the new logical volume at /opt for you. Verify that with df or mount. Once you have verified it is in fact mounted, you can safely move all your data back to /opt:
mv /tmp/splunk /opt
Now you can start splunk back up
/opt/splunk/bin/splunk start
All done.

Slicehost is going away

I'm sure it is not even news anymore that Rackspace has decided to roll all the Slicehost accounts into Rackspace Cloud accounts. It is no surprise. If you followed what had been going on with the RS Cloud and Slicehost, you would have seen, like me, that Slicehost never really changed after Rackspace bought them, while the RS Cloud and apps like their iPhone app were improving constantly; Slicehost had little to no changes, and their Android app even vanished from the market. I asked the developer if he would open source the code like they have done with many of their other projects. It never happened, which made me wonder: did they hate Slicehost, were they unwilling to share that code for some reason, or were they just gearing up to drop support for Slicehost altogether? It seems it was the latter. I don't blame them; I might have done the same thing in their shoes. Once it was announced that Slicehost was going away, TONS of people were upset and complaining up a storm on Twitter and on IRC. Many of the people I talked to about this did not even look into what was changing; they were just having knee-jerk reactions that involved leaving Slicehost for some other VPS. I decided to stick with them for now; in the meantime I am checking out a few other providers, and in a few months I will decide which one I will stick with. I also went over to Xtranormal and made a handy video with my take on the changes. Here it is for your viewing pleasure.

Monitor a directory or file for changes on Linux using inotify

Many times you may need to watch or monitor a file or directory for changes. When you look around for how to do this, you will see MANY people suggesting lsof, and that does work very well. Another solution is incron, which works far better for the cases where I have needed to know more about what is going on with a file or directory.

Installing incron is simple. On Debian and Ubuntu a simple

apt-get install incron

will get it done. Once it is installed, add the users who will be allowed to use incron to the /etc/incron/incron.allow file. That's it; your user should now be able to run incrontab -e. For my examples I will be using root, and all the scripts I make will be for root only.

Next, it is time to talk a little about incron. It works sort of like a regular crontab, except that it is event driven rather than time scheduled, so the syntax of the file is a bit different. Entries are in the format:

<path> <mask> <command>

Where <path> is an absolute path on the file system to monitor, <mask> is an event symbol (for a full list of event symbols see man 5 incrontab), and <command> is the command that runs when the event in the mask section happens. For this example I am going to use the IN_ALL_EVENTS mask and show you a simple shell script that logs some basic info about what is going on in our watched directory.

Let's make our shell script. Using your favorite text editor, create the file /usr/local/sbin/incron_logger.sh and add the following:

#!/bin/bash
# Log which path, file, and event incron passed to us.
logfile=/var/log/inotify_backup_changes_test.log
path="$1"
file="$2"
event="$3"
datetime=$(date --rfc-3339=ns)
echo "${datetime} Change made in path: ${path}" >> "${logfile}"
echo "${datetime} Change made to file: ${file}" >> "${logfile}"
echo "${datetime} Change made due to event: ${event}" >> "${logfile}"
echo "${datetime} End" >> "${logfile}"

Save this file and chmod 700 it. Then, still as root, run:

incrontab -e

The first thing we add is the path; in my case I want to monitor /tmp/watch_me, which is just a directory in /tmp that I created for the purpose of this demo. Next I add the IN_ALL_EVENTS mask, and finally the command /usr/local/sbin/incron_logger.sh plus the arguments to call the script with. My completed incrontab entry looks as follows:

/tmp/watch_me IN_ALL_EVENTS /usr/local/sbin/incron_logger.sh $@ $# $%

The symbols added after the command stand for the path, the event-related file name, and finally the event flag that triggered the command to run. Let's save this entry and go create some files to see what happens.

First I will create a file:
touch /tmp/watch_me/testfile1
Now let's take a look at the log:
cat /var/log/inotify_backup_changes_test.log
You should now see:

2011-02-25 10:05:18.450733000-06:00 Change made in path: /tmp/watch_me
2011-02-25 10:05:18.450733000-06:00 Change made to file: testfile1
2011-02-25 10:05:18.450733000-06:00 Change made due to event: IN_ATTRIB
2011-02-25 10:05:18.450733000-06:00 End
2011-02-25 10:05:18.454378794-06:00 Change made in path: /tmp/watch_me
2011-02-25 10:05:18.454378794-06:00 Change made to file: testfile1
2011-02-25 10:05:18.454378794-06:00 Change made due to event: IN_OPEN
2011-02-25 10:05:18.454378794-06:00 End
2011-02-25 10:05:18.456283635-06:00 Change made in path: /tmp/watch_me
2011-02-25 10:05:18.456283635-06:00 Change made to file: testfile1
2011-02-25 10:05:18.456283635-06:00 Change made due to event: IN_CREATE
2011-02-25 10:05:18.456283635-06:00 End
2011-02-25 10:05:18.457849952-06:00 Change made in path: /tmp/watch_me
2011-02-25 10:05:18.457849952-06:00 Change made to file: testfile1
2011-02-25 10:05:18.457849952-06:00 Change made due to event: IN_CLOSE_WRITE
2011-02-25 10:05:18.457849952-06:00 End

As you can see from this log, we know when the file is opened for writing as it is created, and we know as soon as it has been closed. This can be handy for many things; I'll let your imagination run wild with what you can do with it. If you have any questions or comments, please feel free to ask and I will do my best to answer them.

hylafax+debian or Ubuntu = No font metric information found

We were recently going through a server migration, moving from FreeBSD to Ubuntu Linux. One of the apps that had to move was hylafax. On Debian and Ubuntu there is a hylafax-client package that provides sendfax, which is really all we needed. I installed the package and we tested our app, but all we got was failure. We were greeted with the following:

textfmt: No font metric information found for "Courier-Bold".
Usage: textfmt [-1] [-2] [-B] [-c] [-D] [-f fontname] [-F fontdir(s)] [-m N] [-o #] [-p #] [-r] [-U] [-Ml=#,r=#,t=#,b=#] [-V #] files... >out.ps
Default options: -f Courier -1 -p 11bp -o 0
Error converting document; command was "textfmt -B -f Courier-Bold -Ml=0.4in -p 11 -s default >'/tmp//sndfaxUSNM632' <'/etc/issue.net'"

It took a little bit of poking and prodding to figure this one out, but the fix was to edit the following file:

/etc/hylafax/hyla.conf

In this file there is a setting for the FontMap. I appended /var/lib/defoma/gs.d/dirs/fonts to it. You will also need the gsfonts package, but it is listed as a dependency, so you should already have it installed.
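For reference, the resulting line looks something like the one below. The first path is whatever your hyla.conf already listed (the Ghostscript font directory shown is a guess at a typical value), with the defoma directory appended:

```
FontMap: /usr/share/fonts/type1/gsfonts:/var/lib/defoma/gs.d/dirs/fonts
```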

Bria 3 with Asterisk hanging up after 30 seconds of hold

Do you use Asterisk? Do you also use Bria with it? If so, in newer 3.x versions of Bria you may find that callers are being hung up on after 10-30 seconds of being placed on hold. To fix this problem, click on the Softphone toolbar, go to Preferences, then Advanced, and uncheck Enable Inactivity Timers. The default value was 300 seconds, though I suspect the units are mislabeled and the value is really tenths of a second, because adjusting it scaled the timeout accordingly: a setting of 500 let the caller hold for about 50 seconds.