Saturday, September 23, 2017

Bulletproof Encrypted Cloud Storage

While searching for secure online storage, you'll run across several issues: some providers open your documents, some have a history of mining content to generate targeted ads, and others outright sell data to advertisers or any other third party willing to pay. Add to that difficult or misleading access controls, and services designed to share data rather than keep it private, and it becomes rather difficult to find a trustworthy cloud storage provider.

So to start with, let's come up with some goals...

  1. Data should be encrypted at the provider
  2. The provider shouldn't have access to the encryption keys
  3. Data needs to be protected in transit

The best solution I've found is to run a virtual machine instance on the cloud provider of your choice. Have the virtual machine expose a block device on the network; this will be the storage used for the data, so make sure to allocate enough for your needs. Restrict network access to your client machine only. Attach the block device to your client machine and apply an appropriate disk encryption layer on top of it. This keeps the data encrypted at the provider while keeping the encryption keys away from it.

For this case I'm going to use Amazon Web Services EC2. The EC2 instance can run for $3-4 a month, so most of the cost will go toward the storage. Next we need to choose a network storage protocol to share the storage drive. I'm going to use NBD; it's pretty easy to set up, and the 3.x branch has built-in SSL/TLS support, which we will need later. We could also use iSCSI to be more platform independent on the client side, but I mostly run Linux so NBD works fine. Unfortunately Amazon's Linux distribution doesn't have the latest nbd package, so we have to install that manually. We also need some SSL/TLS keys; I like the OpenSSL Command-Line HOWTO at madboa.com as a guide.
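If you don't already have a certificate authority handy, a minimal sketch for generating a throwaway CA plus a server certificate looks roughly like this (file names here are chosen to match the nbd-server config below; adjust subjects and lifetimes to taste, and repeat the last three commands for the client's key/cert pair):

  • # Create a private CA (cakey.pem / cacert.pem)
  • openssl genrsa -out cakey.pem 4096
  • openssl req -new -x509 -days 3650 -key cakey.pem -out cacert.pem
  • # Create the server key and a certificate signed by the CA
  • openssl genrsa -out key.pem 4096
  • openssl req -new -key key.pem -out server.csr
  • openssl x509 -req -days 3650 -in server.csr -CA cacert.pem -CAkey cakey.pem -CAcreateserial -out cert.pem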

Amazon EC2 Setup
  • # Install dependencies
  • yum install gcc glib2-devel gnutls-devel
  • # Download the source since amazon has an old 2.x version
  • wget https://sourceforge.net/projects/nbd/files/nbd/3.15.3/nbd-3.15.3.tar.xz
  • tar xJf nbd-3.15.3.tar.xz
  • cd nbd-3.15.3
  • # Source dance
  • ./configure
  • make
  • make install
  • # Create the config file (see below for contents)
  • vi /usr/local/etc/nbd-server/config
  • # Create a symlink for the nbd-server so it shows up in $PATH
  • ln -s /usr/local/bin/nbd-server /usr/bin/nbd-server
  • # Create the init script
  • vi /etc/init.d/nbd-server
  • vi /etc/sysconfig/nbd-server
  • chmod +x /etc/init.d/nbd-server
  • # Start the service
  • /etc/init.d/nbd-server start
  • # Open the firewall for the service (replace 1.1.1.1 with your client's public IP)
  • iptables -A INPUT -p tcp -s 1.1.1.1 --dport 10809 -j ACCEPT
/usr/local/etc/nbd-server/config
  • [generic]
    • allowlist = true
    • cacertfile = /usr/local/etc/nbd-server/cacert.pem
    • certfile = /usr/local/etc/nbd-server/cert.pem
    • force_tls = true
    • keyfile = /usr/local/etc/nbd-server/key.pem
    • port = 10809
    • tlsonly = true
  • [default]
    • exportname = /dev/xvdf
    • authfile = /usr/local/etc/nbd-server/allow
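The config references an authorization file whose contents weren't shown; as I understand it, nbd-server expects one allowed client address (or CIDR network) per line, so a minimal allow file would look something like this (again with 1.1.1.1 standing in for your client's public IP):

/usr/local/etc/nbd-server/allow
  • 1.1.1.1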
/etc/sysconfig/nbd-server
  • # Command line options for nbd-server
  • OPTIONS="-C /usr/local/etc/nbd-server/config -l /usr/local/etc/nbd-server/allow"
/etc/init.d/nbd-server
  • #!/bin/bash
  • #
  • # nbd-server This shell script takes care of starting and stopping
  • # nbd (NBD daemon).
  • #
  • # chkconfig: - 58 74
  • # description: nbd-server is the NBD daemon. \
  • # The Network Block Device server (NBD) is used to provide block device \
  • # access for remote clients.
  •  
  • ### BEGIN INIT INFO
  • # Provides: nbd-server
  • # Required-Start: $network $local_fs $remote_fs
  • # Required-Stop: $network $local_fs $remote_fs
  • # Should-Start: $syslog $named
  • # Should-Stop: $syslog $named
  • # Short-Description: start and stop nbd-server
  • # Description: nbd-server is the NBD daemon. The Network Block Device server \
  • # (NBD) is used to provide block device access for remote clients.
  • ### END INIT INFO
  •  
  • # Source function library.
  • . /etc/init.d/functions
  •  
  • # Source networking configuration.
  • . /etc/sysconfig/network
  •  
  • prog=nbd-server
  • lockfile=/var/lock/subsys/nbd-server
  •  
  • start() {
    • [ "$EUID" != "0" ] && exit 4
    • [ "$NETWORKING" = "no" ] && exit 1
    • [ -x /usr/local/bin/nbd-server ] || exit 5
    • [ -f /etc/sysconfig/nbd-server ] || exit 6
    • . /etc/sysconfig/nbd-server
    •  
    • # Start daemon.
    • echo -n $"Starting $prog: "
    • daemon $prog $OPTIONS
    • RETVAL=$?
    • echo
    • [ $RETVAL -eq 0 ] && touch $lockfile
    • return $RETVAL
  • }
  •  
  • stop() {
    • [ "$EUID" != "0" ] && exit 4
    • echo -n $"Shutting down $prog: "
    • killproc $prog
    • RETVAL=$?
    • echo
    • [ $RETVAL -eq 0 ] && rm -f $lockfile
    • return $RETVAL
  • }
  •  
  • # See how we were called.
  • case "$1" in
    • start)
      • start
      • ;;
    • stop)
      • stop
      • ;;
    • status)
      • status $prog
      • ;;
    • restart|force-reload)
      • stop
      • start
      • ;;
    • try-restart|condrestart)
      • if status $prog > /dev/null; then
        • stop
        • start
      • fi
      • ;;
    • reload)
      • exit 3
      • ;;
    • *)
      • echo $"Usage: $0 {start|stop|status|restart|try-restart|force-reload}"
      • exit 2
  • esac

Now on to the encryption details: we need to figure out how we want to encrypt the data being sent to the cloud. I'm going to use cryptsetup on a Gentoo server to access the cloud storage. You will need to refer to your distribution for installing nbd on your client; check for nbd or nbd-client in your package manager. It's also important to make sure you have some network encryption between the two servers. nbd 3.x has built-in SSL/TLS support, but if you are planning on using a different system, make sure you have some SSL/TLS or even IPsec encryption in place.

Next we need to choose the encryption method. I've chosen twofish in XTS mode, which is a common setup for local disk encryption. One important distinction: while some encryption methods are fine for local storage, here we are transferring data over untrusted channels rather than a local PCI/SATA bus. XTS in particular is vulnerable to data manipulation in transit; that probably isn't a huge concern over a local PCI/SATA bus, but when transferring data over the internet it becomes a major issue, which is why the SSL/TLS support is so important. If you plan on choosing a different cipher, be sure to research it beforehand.

The other thing to note is that I'm not using LUKS. LUKS has a standard header which can be detected by anyone able to read the disk, and it stores the master key used to encrypt the drive in that header. The good news is the master key is itself encrypted by the user password, but it would still be vulnerable to an offline attack by anyone who can read the header. Going with plain mode, the contents of the disk appear more random since there is no header or format for the data. The bad news is this makes it less portable between systems; I mostly run Linux-based systems, so the portability isn't an issue.

Now that we have a chosen encryption method and the tools installed, it's time to set up the drive on the client server. For this I'm using a simple script to manage everything. I recommend running these commands manually while setting things up to debug any issues.

/root/nbd-crypt.sh
  • #!/bin/bash
  • #
  • # nbd-crypt - setup and tear down a nbd encrypted device
  •  
  • case "${1}" in
    • mount)
      • nbd-client -N default -certfile /etc/nbd/cert.pem -keyfile /etc/nbd/key.pem -cacertfile /etc/nbd/cacert.pem nbd.server.com 10809 /dev/nbd0
      • cryptsetup open --type plain --hash=sha512 --cipher=twofish-xts-plain64 /dev/nbd0 nbdcrypt
      • mount /dev/mapper/nbdcrypt /mnt/nbdcrypt
      • ;;
    • umount)
      • umount /mnt/nbdcrypt
      • cryptsetup close nbdcrypt
      • nbd-client -d /dev/nbd0
      • ;;
    • *)
      • echo "Usage: $0 {mount|umount}"
      • exit 1
  • esac
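The first time through, the mapped device has no filesystem yet, so a one-time initialization is needed; a sketch assuming ext4 and the same device/mapper names used in the script above (re-running mkfs will of course destroy the data):

  • nbd-client -N default -certfile /etc/nbd/cert.pem -keyfile /etc/nbd/key.pem -cacertfile /etc/nbd/cacert.pem nbd.server.com 10809 /dev/nbd0
  • # With plain mode a mistyped passphrase just yields a garbage mapping, so double-check it before running mkfs
  • cryptsetup open --type plain --hash=sha512 --cipher=twofish-xts-plain64 /dev/nbd0 nbdcrypt
  • mkfs.ext4 /dev/mapper/nbdcrypt
  • mkdir -p /mnt/nbdcrypt

After that, the mount and umount modes of the script handle day-to-day use.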

I've been using this setup to back up my local git repositories for a while now and it's been working great.

Monday, September 23, 2013

Generating Random Passcodes / PSK

This post is basically a response to a recent diary post at the SANS Internet Storm Center. Rob VandenBrink's post on How do you spell "PSK"? offers up a nice block of code to generate pre-shared keys in Python. This got me thinking of the various ways I've used to generate random passwords or pre-shared keys. There are a lot of good tools that do this for you. I like apg as it has a nice "pronounceable" mode to generate strings that can be spoken more easily.
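For example, apg's pronounceable mode can spit out a handful of candidates at once; a quick sketch (check your local man page for the exact flags):

  • # 5 pronounceable passwords between 12 and 16 characters
  • apg -a 0 -n 5 -m 12 -x 16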

The first one is a simple binary pre-shared key generator. It will output HEX strings based on the byte size of the data. Since pre-shared keys are often used in crypto, this is the only example I'm giving that uses /dev/random instead of /dev/urandom.

  • dd if=/dev/random count=1 bs=24 2>/dev/null | xxd -p
To change the size of the key, alter the block size (bs) parameter for the dd command.

This next one is a simple perl example that I've used from time to time. It takes two parameters for the minimum and maximum length of the generated string. This matches Rob VandenBrink's example in that it only generates alpha-numeric strings.

generate.pl
  • #!/usr/bin/perl -w
  •  
  • use strict;
  • use Getopt::Long;
  •  
  • my $min;
  • my $max;
  •  
  • GetOptions('min=s' => \$min, 'max=s' => \$max );
  • if(!$max){ $max = shift(@ARGV); }
  • if(!$min){ $min = shift(@ARGV); }
  • if(!$max){ usage(); }
  • if(!$min){ $min = 1; }
  •  
  • if($min =~ m/[^0-9]/ || $max =~ m/[^0-9]/){ usage(); }
  • if($min > $max){ usage(); }
  •  
  • my $len = $min + int( rand($max - $min + 1) );
  • my $exp_pass = join('', map { ("A".."Z", "a".."z", 0..9 )[rand 62] } 1..$len);
  •  
  • print "$exp_pass\n";
  •  
  • sub usage {
    • print "Usage: generate.pl [-max] INTEGER [[-min] INTEGER]\n";
    • exit(1);
  • }
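A quick usage sketch (the output is, of course, random):

  • # generate a string between 12 and 20 characters
  • perl generate.pl -min 12 -max 20
  • # or positionally: max first, then min
  • perl generate.pl 20 12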

The last script is a bash command that I use the most often. It's probably the worst example code-wise, but it's very simple and seems to generate the most random strings. Basically it reads a 1k block from /dev/urandom and removes all the unwanted characters. Similar to the previous example, it will take up to two parameters for min/max length, but it runs best without any parameters. This command will generate strings with symbols in them, so you'll have to watch where you use them. It still amazes me how often I run across documentation that has no mention of how to escape symbols or special characters in its authentication parameters.

generate.sh
  • #!/bin/bash
  •  
  • exec 2>/dev/null
  • MIN=0
  • test -n "$1" -a $1 -gt 0 && MIN=$1
  • MAX=100
  • test -n "$2" -a $2 -gt $MIN && MAX=$2
  •  
  • PW=""
  • until [ ${#PW} -gt $MIN -a ${#PW} -le $MAX ]; do
    • PW=`dd if=/dev/urandom bs=1k count=1 2>/dev/null | sed -e 's/[^-a-zA-Z0-9<>.,;:=+]*//g' | head -n 1`
  • done
  • echo "$PW"
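Usage, assuming the script above is saved as generate.sh:

  • chmod +x generate.sh
  • # no arguments: any length up to 100 characters
  • ./generate.sh
  • # at least 12 and at most 20 characters
  • ./generate.sh 12 20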

I hope these examples will help other people. They've been useful for me. When I need a new password generated, I will usually run one of these multiple times and pick one out of the list.

Friday, July 12, 2013

Gumstix + Tobi-Duo Gateway Router

This post has been a long one. I have to confess that I jumped into the deep end on this one, which was the plan, but I didn't know how deep this would go.

In my quest to downsize several older components on my home network, I came to the firewall/router. Currently I am running a dual-Xeon Gentoo Linux server to handle all my server needs in one system, including routing. Having moved all the services off of the server, all that was left was its routing/firewall setup. I decided the Gumstix-based COM devices would make a good fit, with significantly fewer heat issues and minimal power requirements. This would also be a good learning experience, being my first real work with embedded ARM.

So here's the configuration I'm going with. The Janus module was a later add-on and isn't required. Add in the power adapter and an 8GB MicroSD card and you'll total about $200.

The first major road block that I ran into was the lack of documentation. Originally I was planning on loading the system with either a Gentoo based system, or a Buildroot system. The online documentation mentions that Buildroot is no longer supported, but no details are given for why. It also mentions that Yocto is the only supported build environment.

The Overo systems come with some NAND flash pre-loaded with a working distribution; unfortunately there's very little online about what capabilities exist in the built-in system. Since the core COM board is designed to be attached to very different expansion modules, I understand why it doesn't start the network on boot. Since I didn't start with a Janus module, I was relying on getting the network cards up and accessing the system through them. So I was starting out running blind.

The next option was to follow the Gumstix Getting Started Guide to get a bootable MicroSD card using their prebuilt images. Even this caused issues due to the documentation. If you follow their guide, you will notice they refer to u-boot.bin, but the prebuilt images contain a u-boot.img; if you copy u-boot.img to u-boot.bin then the system won't boot. I'm assuming "u-boot.bin" is just a typo, but it appears in the guide 6 times, so maybe a copy/paste mistake. After fixing that little issue, the system boots and I see DHCP requests from a "00:15:C9" MAC prefix. A quick power-off to copy some ssh authorized keys, and a boot later, and I can ssh into the system.

Now that I'm in the system, I can dig around a bit and see what this thing can do. Unfortunately the prebuilt images don't provide what I need in a router. The kernel is missing some networking features that I need: IPsec support, full IPv6 (it's only partially implemented), and NAT. The system is also missing some packages that I need, iproute2 and ipsec-tools being the most important. The other thing that I really didn't like is the full dbus/udev/systemd setup; personally, I think this is extreme overkill for such a small system. So now I need a new kernel and a new rootfs.

This is where the bulk of my failures accumulated. Between the documentation being misleading or just out of date (the same steps don't work anymore), I believe I have 8-12 half-dead builds of various kernel/rootfs images lying around. I'm going to skip over the failures and show what worked. First we will start with getting the Gumstix Yocto build system up and running. Most of the steps are from the Gumstix Wiki and the Gumstix Github Repo. I highly recommend running all this in both screen and script so you don't miss anything.

First we need the repo command. This appears to be a git wrapper of sorts.

Get repo...
  • curl https://dl-ssl.google.com/dl/googlesource/git-repo/repo > repo
  • chmod a+x repo
  • sudo mv repo /usr/local/bin
  • repo --help

Now we can use the repo command to clone the Yocto build environment.

Get Yocto...
  • mkdir yocto
  • cd yocto
  • repo init -u git://github.com/gumstix/Gumstix-YoctoProject-Repo.git
  • repo sync

Now we can build the default kernel. This step will take a while, and depending on your build host software it can fail at several points. I recommend going over to the Yocto Project Reference Manual and making sure you have the required build packages installed before continuing. Another warning: other packages may also cause issues. I'm using Gentoo as my build host, and I ran into an issue with sys-devel/make-3.82-r4; I had to downgrade to <sys-devel/make-3.82 to complete the kernel build.

  • TEMPLATECONF="meta-gumstix-extras/conf" source ./poky/oe-init-build-env
  • bitbake virtual/kernel

It's pretty much clear sailing from here. The next part is just configuring the kernel and re-building. I've uploaded my Gumstix kernel .config file so you can use that as a reference. The trick to this part is that the documentation on the Wiki is written for an older 2.6 kernel, where in my case it's a 3.5 kernel. So if the kernel changes, you may need to change where you copy your custom .config file for the re-build. Use the provided find commands to narrow down your search if the locations change.

Reconfigure and build the new kernel
  • bitbake -c menuconfig virtual/kernel
  • #Find your new custom .config file
  • find ./tmp/work/ -path "*/git/.config"
  • #Find the defconfig that the custom .config will overwrite
  • find ../poky/meta-gumstix/recipes-kernel -name "defconfig"
  • #Copy the .config over to the defconfig
  • cp ./tmp/work/overo-poky-linux-gnueabi/linux-gumstix-3.5-r0/git/.config ../poky/meta-gumstix/recipes-kernel/linux/linux-gumstix-3.5/overo/defconfig
  • bitbake -c clean virtual/kernel
  • bitbake virtual/kernel

Your new kernel will be ./tmp/deploy/images/uImage-*. I recommend making a backup of the kernel and its .config so you have them for reference in case you need to update or change kernel features later on. Also, make sure you compare your kernel to the live running system with the prebuilt kernel. In my case, I made the feature changes that I needed, but then also did a significant amount of cleanup, removing any features that I can't use. I also enabled CONFIG_DEVTMPFS since I don't really have any way to modify the hardware once the system is booted. The live system wasn't running any modules in the kernel, so I also removed all modules and module support. The benefit is that my kernel is now completely independent of my rootfs.

Now follow the Gumstix Create a Bootable MicroSD Card guide to set up the MicroSD card.

Setup the MicroSD card
  • sudo mkfs.vfat -F 32 /dev/sdd1 -n boot
  • sudo mke2fs -j -L rootfs /dev/sdd2
  • mkdir boot rootfs
  • sudo mount /dev/sdd1 boot
  • sudo mount /dev/sdd2 rootfs
  • #Download the prebuilt images
  • wget http://yocto.s3-website-us-west-2.amazonaws.com/Releases/2013-06-16/overo/dev/MLO-overo-2012.10
  • wget http://yocto.s3-website-us-west-2.amazonaws.com/Releases/2013-06-16/overo/dev/u-boot-overo-2012.10-r0.img
  • #According to the guide: "For Overo COMs only: MLO (the second-stage bootloader binary) must be copied first"
  • sudo cp ./MLO-overo-2012.10 boot/MLO
  • sudo cp ./u-boot-overo-2012.10-r0.img boot/u-boot.img
  • sudo cp ./tmp/deploy/images/uImage boot/uImage

I've worked with Buildroot already, so I'm sticking with it for the rootfs. This will help produce a much more streamlined system. I've detailed the important parts here, but I've also uploaded my Buildroot .config file so you can use that as a reference. Here are the major config options that need to be set:

  • BR2_ARCH="arm"
  • BR2_ENDIAN="LITTLE"
  • BR2_GCC_TARGET_TUNE="cortex-a8"
  • BR2_GCC_TARGET_ARCH="armv7-a"
  • BR2_TARGET_GENERIC_GETTY_PORT="ttyO1"
  • BR2_ROOTFS_DEVICE_CREATION_DYNAMIC_DEVTMPFS=y
The rest is mostly package selection; you will need openssh, iproute2, and iptables. I would also recommend radvd, openntpd, and a DHCP server and client. Some form of VPN support would be good too; I usually go with ipsec-tools, but openvpn is also good. The other packages I will leave up to you as they are mostly user preference.
Configure buildroot
  • tar xjf buildroot-2013.02.tar.bz2
  • cd buildroot-2013.02
  • make menuconfig
  • make
  • tar xf output/images/rootfs.tar -C ../rootfs
  • cd ..

Before we umount the rootfs, let's make some changes to how things will run. We will need to start the network interfaces and set up some firewall routing rules. I'll only be going through the minimal firewall setup for a gateway router; you should tailor it to your needs. You will also need to change the IPs; I'm using RFC 5737/3849 addresses for documentation purposes, so they will need to be changed for your network.

Filesystem Changes
  • #First thing is to add your ssh public key to the /root/.ssh/authorized_keys
  • sudo mkdir -p rootfs/root/.ssh
  • sudo cp ~/.ssh/id_rsa.pub rootfs/root/.ssh/authorized_keys
  • #We will need to set up the network interfaces, so lets change the startup script
  • vi rootfs/etc/init.d/S40network # see contents below
  • #We will also setup a simple forwarding firewall, I recommend tailoring this for your own needs
  • vi rootfs/etc/init.d/S35firewall # see contents below
  • sudo umount boot
  • sudo umount rootfs
Insert the flash card into the Gumstix COM, connect the network cables, and plug in the power. You should be up and running with a nice and efficient router. Depending on your package selection, this could be a good security monitoring device, or a man-in-the-middle device on a pen test. You should be able to run it off a battery as well.


For reference, here are the startup scripts mentioned above.

/etc/init.d/S40network Contents
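A minimal sketch of what a Buildroot-style S40network might contain, assuming eth0 is the WAN side and eth1 the LAN side, with RFC 5737/3849 documentation addresses standing in for real ones:

  • #!/bin/sh
  • # S40network - bring up the WAN/LAN interfaces (sketch)
  • case "$1" in
    • start)
      • ip link set eth0 up
      • ip addr add 203.0.113.2/30 dev eth0
      • ip route add default via 203.0.113.1
      • ip link set eth1 up
      • ip addr add 192.0.2.1/24 dev eth1
      • ip -6 addr add 2001:db8:1::1/64 dev eth1
      • ;;
    • stop)
      • ip link set eth1 down
      • ip link set eth0 down
      • ;;
    • *)
      • echo "Usage: $0 {start|stop}"
  • esac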
/etc/init.d/S35firewall Contents
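Likewise, a bare-bones forwarding/NAT firewall in the same style (a sketch only; tailor the rules to your own network):

  • #!/bin/sh
  • # S35firewall - minimal gateway firewall (sketch)
  • case "$1" in
    • start)
      • echo 1 > /proc/sys/net/ipv4/ip_forward
      • iptables -P INPUT DROP
      • iptables -P FORWARD DROP
      • iptables -A INPUT -i lo -j ACCEPT
      • iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      • iptables -A INPUT -i eth1 -p tcp --dport 22 -j ACCEPT
      • iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
      • iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
      • iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
      • ;;
    • stop)
      • iptables -F
      • iptables -t nat -F
      • iptables -P INPUT ACCEPT
      • iptables -P FORWARD ACCEPT
      • ;;
    • *)
      • echo "Usage: $0 {start|stop}"
  • esac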

Monday, April 15, 2013

Handy IPv6 Scripts

During my IPv6 deployments I've built up a collection of commands to handle some issues with addressing. I've set some of the more common ones up as bash aliases in my "~/.bashrc" for easy access. As you're deploying your own networks, you might find these useful.

Scanning techniques in IPv6 tend to rely on lazy administrators. This can be seen in using sequential addressing (::1, ::2, ::3, ...), using IPv4 addresses to determine IPv6 addresses, or just accepting SLAAC generated addresses. All of these can be predictable and can greatly reduce an attacker's time to scan and identify hosts on your network.

Manually configuring randomly generated addresses can keep that scan time in the "infeasible" category. This first command will generate a random IPv6 suffix.

Generate a random suffix
  • alias ipv6-randip='dd if=/dev/urandom bs=8 count=1 2>/dev/null | od -x -A n | sed -e "s/^ //" -e "s/ /:/g" -e "s/:0*/:/g" -e "s/^0*//"'
Assuming your ipv6 prefix is 2001:DB8:a5cd:e431::/64, you can use the 'ipv6-randip' command alias to generate a random suffix to configure for a server or workstation.
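For example, if ipv6-randip prints something like de03:b3c1:e425:9d4f, you would configure the host with the combined address (the interface name here is just an example):

  • ip -6 addr add 2001:DB8:a5cd:e431:de03:b3c1:e425:9d4f/64 dev eth0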

The problem with random addresses is they are a pain to remember. Unless you're only dealing with a handful of hosts, your address space is going to be impossible to keep track of without relying on other tools. The good news is that those tools are so numerous that I could never list all the possible ways of doing it. But even with those tools, there are a lot of configurations and settings that need to be dealt with. Probably the most common tool for keeping track of these is going to be your DNS server; after all, you need to configure DNS for your hosts there anyway.

The first hurdle you'll hit when configuring your DNS is reverse lookup. This can be really annoying. Thankfully, this next command will take some of the annoyance out of setting up all those PTR records.

Convert an address to its reverse lookup "ip6.arpa" name
  • alias ipv6-arpa='tr "[:upper:]" "[:lower:]" | sed -e "s/:/:0000/g" -e "s/:0*\([0-9a-f][0-9a-f][0-9a-f][0-9a-f]\)/:\1/g" -e "s/://g" | rev | sed -e "s/./&./g" -e "s/$/ip6.arpa/"'
Now when you need to set up your reverse lookup addresses, all you need to do is "echo 2001:DB8:a5cd:e431:de03:b3c1:e425:9d4f | ipv6-arpa" and out comes your "f.4.d.9.5.2.4.e.1.c.3.b.3.0.e.d.1.3.4.e.d.c.5.a.8.b.d.0.1.0.0.2.ip6.arpa" PTR name.
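In a BIND-style zone file for that /64's reverse zone (1.3.4.e.d.c.5.a.8.b.d.0.1.0.0.2.ip6.arpa in this example), the record would then look something like this (the hostname is hypothetical):

  • f.4.d.9.5.2.4.e.1.c.3.b.3.0.e.d  IN  PTR  host.example.com.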

I have a tendency to build up a backlog of PTRs to generate so I can do multiple at the same time. So in that case I start up a terminal and run the following line.

  • read X;while [ -n "$X" ];do echo "$X" | ipv6-arpa; read X; done
Then just copy my addresses and paste them into the terminal to generate the PTR. It's a handy loop construct that I use whenever I need to do a bunch of "one-liners" for something.

The last thing to consider when using random addresses in IPv6 is keeping track of your MACs. Every once in a while you'll need to look up a host based on its MAC or link-local address. The next commands will take a MAC address and convert it to its SLAAC equivalent, and convert a SLAAC address back to its MAC.

Convert a MAC Address to Stateless Address Auto Configuration
  • alias ipv6-mac2slaac='perl -e "\$_=lc<>;\$_=~s/[-:]//g;\$_ =~ m/^(..)(..)(..)(..)(....)/;printf(\"%02x%s:%sff:fe%s:%s\n\", ((hex \$1)|0x02), \$2, \$3, \$4, \$5);"'
  • alias ipv6-slaac2mac='perl -e "\$_=lc<>;\$_=~m/(..)(..):(..)ff:fe(..):(..)(..)/;printf(\"%02x:%s:%s:%s:%s:%s\n\",((hex \$1)^0x02),\$2,\$3,\$4,\$5,\$6);"'
So "echo 00:1e:2a:39:77:ba | ipv6-mac2slaac" will give "021e:2aff:fe39:77ba", and "echo 021e:2aff:fe39:77ba | ipv6-slaac2mac" will give "00:1e:2a:39:77:ba".

Hopefully these will save you some time and headache with your IPv6 deployment.

Tuesday, April 9, 2013

Wifi Autoscanning w/ Raspberry Pi and Kali Linux

I love Raspberry Pis. I currently have 2 Model B rev 2s, an original Model B rev 1, and now a Model A. One of the projects that I'm really looking forward to is working with the Model A as a battery-powered device for wireless penetration testing, and as a mobile tool for reconnaissance. With its credit-card size and low power requirements, the Raspberry Pi Model A is perfectly suited for the job.

Here's the main components for my layout (you should be able to purchase the major components for less than $120):

  1. A Raspberry Pi Model A complete with Pibow ModelA Case. You can order the set here.
  2. An Alfa Network 802.11b/g/n Long Range Wireless USB Adapter
  3. 2.4GHz 20DBi High Gain WIFI Directional YAGI Antenna
  4. 12000mah External USB Battery Pack
  5. 8GB SD Flash Card recommended, minimum 4GB
  6. You will also need the necessary USB cables, keyboard, and a display for the initial setup. Since these are not part of the final rig, and are custom requirements depending on your lab, I'll assume you can handle them yourself.
Note: The antenna and wireless card are a little over-powered for this setup, but I wanted to see what the rig would do given the most power-hungry components I had. Using a mini-USB wireless adapter would be much better if you don't need the long range. In my test runs of this setup I ran the system for 12 hours to see how the battery would hold up; it showed one bar and was good for another few hours.

Your first step is to install the default Kali Linux Raspberry Pi image to your SD Card. The instructions are available on the Kali Linux Documentation site.

After you have the SD card loaded with the image, fire up your favorite terminal and run "fdisk" on the device. Create a new partition out of the remaining space on the card. Once that is done, run "mke2fs -t ext4" on the new partition to format it. We could have been fancy and grown the Kali Linux partition to use the full device, but I prefer to have a separate partition. This prevents the packet capture files from filling the entire card and possibly causing problems for the Kali Linux installation. Since we are basically running a headless capturing system, we won't know how much data it will sniff off the air until we get it back to our lab.
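As a sketch, assuming the card shows up as /dev/sdb on your workstation and the Kali image already created the first two partitions (so the new one becomes /dev/sdb3, which appears as /dev/mmcblk0p3 on the Pi itself):

  • # Create a third partition from the free space, then format it
  • sudo fdisk /dev/sdb        # n (new partition), accept the defaults, then w (write)
  • sudo mke2fs -t ext4 /dev/sdb3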

Now we need a script to automatically start airmon-ng and airodump-ng on boot. I've written a quick bash script for this, with an easy config file to tailor it for each engagement. Feel free to use these and tailor them to your needs; they have worked for me so far.

/root/autoscan
  • #!/bin/bash
  • #
  • # autoscan - simple auto scanner for kali linux on raspberry pi
  •  
  • source /root/.autoscan.cfg
  •  
  • airmon-ng start ${AIRMON_DEV}
  •  
  • while [ ! -e "/tmp/.autoscan.stop" ]; do
    • airodump-ng -w ${STORAGE} ${AIRODUMP_OPTS} ${AIRMON_MON} > /dev/null 2>&1 &
    • PID=$!
    • sleep ${RUN_TIME}
    • kill ${PID}
    • FS="$(df `dirname ${STORAGE}` | tail -n1 | awk '{print $4}')"
    • test ${FS} -lt ${SAFETY_NET} && touch "/tmp/.autoscan.stop"
  • done
  •  
  • airmon-ng stop ${AIRMON_MON}

The script is pretty basic. We start out importing the config file and starting the wireless card in monitor mode. Then we enter a loop to capture our packets. The script will sleep for a period of time before killing the airodump-ng process. Then before starting the next iteration, the script will verify that there is a safe amount of space on the storage partition. If the partition gets too full, it will trigger the script to end.

Next up is the configuration file.

/root/.autoscan.cfg
  • #!/bin/bash
  • # autoscan config file
  •  
  • # This is your wireless device, probably wlan0 unless you have a
  • # more advanced setup
  • export AIRMON_DEV="wlan0"
  • export AIRMON_MON="mon0"
  •  
  • # Pass these extra parameters to airodump-ng
  • # (see "man airodump-ng" for info)
  • export AIRODUMP_OPTS="-c 6"
  •  
  • # Where to store the packet files, this is the full path plus
  • # the prefix
  • export STORAGE="/root/store/auto"
  •  
  • # Split packet capturing into multiple files.
  • # Every scan will record for this number of seconds before
  • # starting a new scan.
  • export RUN_TIME="900s"
  •  
  • # Do not allow scanning to consume the entire disk.
  • # Do not start another scan if there is less than SAFETY_NET
  • # space left (in k).
  • export SAFETY_NET=100000

A couple notes on the configuration. Make sure you change the "AIRODUMP_OPTS" variable to suit your needs. You may want to add some "-o" output formats if space is an issue.

All that's left is to make the script executable ("chmod +x /root/autoscan"), mount the extra storage, and start the script on boot. For ease of use I'm just going to put the latter two into "/etc/rc.local". I've mounted the storage partition that we created earlier at "/root/store"; if you place it somewhere else on the system, make sure you update /root/.autoscan.cfg to point to the correct location.

/etc/rc.local
  • #!/bin/sh -e
  • #
  • # rc.local
  •  
  • mount /dev/mmcblk0p3 /root/store
  • /root/autoscan > /dev/null 2>&1 &
  • exit 0

Now that the SD Card is prepared, it's time to fire up the Raspberry Pi. Once the system is up, log in as "root" with the default password of "toor". Don't forget to change it! Then verify that the script is running and working.

One thing I've noticed in testing: the wlan0 interface didn't always come up, and the airmon-ng command would fail. Fortunately the wireless card activity light gives me a clue when this happens, so it's not a big deal. Restarting the Raspberry Pi fixes it.

Thursday, April 4, 2013

Features or Bloat?

A recent post on the Errata Security Blog describing the Ubuntu low-mem install for VMs got me thinking about something that's bugged me for a while now. First off, kudos to Ubuntu for identifying the need for this feature and providing the option. But it raises the question: what is in that 420MB of storage and ~510MB of RAM usage? I can't wait to take this for a test drive and compare default systems. I wonder how many of those background services are really needed, and how many are just fluff. What is the security risk of all that fluff?

For those who have not caught on yet, the name of this blog is "Scalable". Look it up. A lot of the time, people in IT don't understand the power of scaling DOWN. It reminded me of a tweet that I've kept in my favorites.

I'm a Gentoo junkie myself. Being a programmer by trade makes using Gentoo, and more specifically the Portage system, very powerful and flexible. There's just no better feeling than taking a system image, performing a complete update, and finding out that I still have the same functionality while the image has dropped by 100MB. This lowers my support time and costs. There are fewer things that can go wrong with the software because there are fewer pieces to break. The system tends to perform better. And I'm reducing the attack surface of the system.

This is always the first item in a security audit: find the services and software on the system that you don't use or need, and disable them or uninstall them completely. A lot of Linux distributions are very flexible with package installations. But I'm always shocked by desktop-oriented distributions like Ubuntu. It's hard to trust a system when a 'pstree' scrolls for three screens.

Sunday, March 24, 2013

Hello World!

Well, it finally happened. I got a blog. God save us all.

This will be a dumping ground for personal tech projects, a few interests and hobbies, security rantings, and worst of all my opinion. You have been warned.