Rocu.de

Love, caffeine and omelette

Month: January 2013

Book review: Linchpin - Are You Indispensable?

Cover of Linchpin

I just finished reading "Linchpin: Are You Indispensable?" by Seth Godin. The book really strikes a chord with me. Essentially it tells you why it's a good idea not to only do your "job", why you should care about being an artist and why you should engage much more. It suggests that you have much more to offer than just being a corporate drone.

It also talks about fear and hesitation, and why you should not listen to these kinds of thoughts -
just act on your ideas.

I couldn't agree more! Every other way of working is a waste of your time.

I enjoyed reading the book, because I had to stop and think about what I can contribute.

Highly recommended.

Productivity: Mute tweets without URLs in Tweetbot

TweetBot mute filter

I have spent far too much time on Twitter lately - time that should be
used for building stuff. On the other hand I do not
want to completely abandon Twitter, because it's a great source of
new links for me.

So I decided to filter out all tweets without URLs.

I remembered that it was possible using a negative lookahead - but frankly
I had never used one. I found the answer on Stack Overflow.

Just add the following filter and you are set:

^((?!.[a-z]+/).)*$
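
If you are curious how the negative lookahead behaves, here is a minimal Ruby sketch (the sample tweets are made up; Tweetbot of course applies the filter itself):

# The filter matches - and therefore mutes - tweets that contain no "something.tld/"-style fragment.
filter = Regexp.new('^((?!.[a-z]+/).)*$')

puts filter.match("No link here, just chatter")            ? "muted" : "visible"  # => muted
puts filter.match("Worth a read: http://t.co/abc123")      ? "muted" : "visible"  # => visible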

Feels much better - and I can follow more great people 😉

Kata: FizzBuzz. 6th try.

I like the FizzBuzz kata a lot. This is my 6th try.

Code

class FizzBuzzGame
  attr_reader :range

  def initialize(range)
    @range = range
  end

  def play
    range.map { |round| answer_for_number(round) }.join("\n")
  end

  def answer_for_number(number)
    if (number % 15).zero?
      'fizzbuzz'
    elsif (number % 3).zero?
      'fizz'
    elsif (number % 5).zero?
      'buzz'
    else
      number.to_s
    end
  end
end

Tests

describe FizzBuzzGame do
  it 'answers 1 in the first round' do
    answer_for(1..1).should == '1'
  end

  it 'answers fizz in the third round' do
    answer_for(3..3).should == 'fizz'
  end

  it 'answers buzz in the fifth round' do
    answer_for(5..5).should == 'buzz'
  end

  it 'answers fizz for every number divisible by 3' do
    answer_for(9..9).should == 'fizz'
  end

  it 'answers buzz for every number divisible by 5' do
    answer_for(25..25).should == 'buzz'
  end

  it 'answers fizzbuzz for every number divisible by 5 & 3' do
    answer_for(90..90).should == 'fizzbuzz'
  end

  it 'prints every answer in a new line' do
    answer_for(98..100).should == "98\nfizz\nbuzz"
  end

  def answer_for(range)
    FizzBuzzGame.new(range).play
  end
end

Documentation

FizzBuzzGame
  answers 1 in the first round
  answers fizz in the third round
  answers buzz in the fifth round
  answers fizz for every number divisible by 3
  answers buzz for every number divisible by 5
  answers fizzbuzz for every number divisible by 5 & 3
  prints every answer in a new line

Thoughts

I like this implementation a lot. Using a range makes for a nice
interface, and it's closer to the specification:

Write a program that prints the numbers from 1 to 100

1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
Buzz
11
..

So I would argue that it's definitely better than starting with a single
function that you just pass a number to.
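
As a quick usage example of that range interface - this just runs the class from above:

puts FizzBuzzGame.new(1..5).play
# 1
# 2
# fizz
# 4
# buzz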

PAM and the mysteriously ignored open files limit

A few days ago we received a ton of error notifications that looked like
this:

(RSolr::RequestError) "Solr Response: Lock obtain timed out.

The system had been running nicely for quite a long time - so what was wrong?

It took me quite a while to find the problem. Solr was exceeding the open files limit, so it locked up to prevent
further problems.

But how the hell did it hit the limit? It is increased on all of our servers. But when I checked, it was not applied:

sudo su USERNAME
ulimit -a
..
=> open files                      (-n) 1024
..

The problem was that I had manually restarted Solr a few days earlier, using a rake task. I did not use sudo but su.

If you use su, the user's open files limit will not be applied automatically. Have a look at the PAM configuration for su and guess why:

#Sets up user limits, please uncomment and read /etc/security/limits.conf
# to enable this functionality.
# (Replaces the use of /etc/limits in old login)
# session    required   pam_limits.so

The required line is commented out.

Even more surprising: the limits module is activated in most of the other configuration files, like /etc/pam.d/sudo. So the problem really was me using su to restart Solr.

What I ended up doing was adding this file to Chef, with the comment removed. I don't like this kind of inconsistency.
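
A rough sketch of what that could look like in a Chef recipe - the file name and cookbook layout are made up, the point is just to ship an /etc/pam.d/su with the pam_limits line uncommented:

# Hypothetical resource: replace /etc/pam.d/su with our own copy
# in which "session required pam_limits.so" is no longer commented out.
cookbook_file '/etc/pam.d/su' do
  source 'pam.d/su'
  owner  'root'
  group  'root'
  mode   '0644'
end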

Command line tapas: Feel like a hacker with cluster ssh

A few months ago I paired on a chef recipe with a colleague. After we
uploaded the cookbook we wanted to try this stuff out on a few nodes.

Until then I would have done this kind of thing completely manually. But he
entered a command and the following happened:

ClusterSSH in action

Wow. He just ssh'ed into all of these servers. The next part looked even
cooler. He started typing and magically the text appeared in all the
terminals. He pressed enter and the chef run started. It looked awesome. Chef runs produce quite a lot of text 😉

I was hooked and asked him what he just did. "That's just ClusterSSH", he
replied, "I use it all the time".

ClusterSSH is a tool for making the same change on multiple servers at
the same time. The 'cssh' command opens an administration console and
an xterm to all specified hosts. Any text typed into the
administration console is replicated to all windows. All windows may
also be typed into directly.

For the Mac there is a similar tool called csshx - that's what I ended
up using.

It works like a charm. You just specify your clusters in a clusterfile.

cluster1 host1 host2
cluster2 host3 host4

Now you just enter:

csshx cluster1

That's it.

I use csshx during every maintenance window. I love
pair programming - it makes you realize that things that are obvious to
you are not obvious to your colleagues.

Command line tapas: GoAccess - a neat program to browse through your access log

Today I want to tell you about GoAccess, a brilliant log file analyzer:

GoAccess is an open source real-time web log analyzer and interactive viewer that runs in a terminal in *nix systems. It provides fast and valuable HTTP statistics for system administrators that require a visual server report on the fly.

You use goaccess like this:

goaccess -a -f access_log

And end up with a report looking like this:

GoAccess main view

You see a list of different reports, and you can drill down further by selecting a specific report.

GoAccess detailed report

You can also use goaccess to generate HTML reports that look like this:

 GoAccess html report

I use this little gem pretty regularly, especially for my blogs.

To give it a shot, use the package manager of your choice or compile it from source.

Command line tapas: Use Pipe Viewer to count new lines per second

Often I have to look at log files to see how a system behaves. For example, recently I wanted to find out how many points of interest per second were imported.

I used to just count manually. But that didn't really scale well.

So this is what I use now:

tail -f poi_log | pv -lr >/dev/null

This command uses Pipe Viewer to count how many lines are running through the pipe every second (-l switches pv to line mode, -r shows the current rate).

What I really like about this is that you can drill down even further using tools like grep. For example, it's trivial to find out how active Bing is on your site:

 tail -f access.log | grep "bingbot" | pv -lr -i 10 >/dev/null
 => [3.97/s ]

Before you give it a shot you have to install PipeViewer.

Use

apt-get install pv

on Ubuntu/Debian or

brew install pv

on a Mac.

I hope you like pv as much as I do. It's simple, focused and belongs in everyone's toolbox.

My home is my castle - How to manage and share your dotfiles using homesick


Castle 11 from Bill Ward's Brickpile. (CC BY 2.0)

For quite a while I have been trying to find a good solution for sharing my dotfiles between my different computers.

I started with Chef. Unfortunately this did not work so well, especially if you also want to use your dotfiles on a server. It also felt a little complicated.

The next approach I took was putting my dotfiles into my Dropbox and symlinking them. This worked OK. But Dropbox on a server? No way.

So I kept looking. A few days ago I stumbled upon
homesick - an amazing Ruby gem that manages your dotfiles.

How to set it up

First install homesick:

gem install homesick

Now create a "castle". A castle is a collection of dotfiles.

homesick generate ~/dotfiles

This creates a git repo with a home/ folder inside. Just put your dotfiles and dotfolders in there, then push the result to GitHub.
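
To give you an idea, a freshly generated castle might end up looking roughly like this (the dotfiles themselves are just examples):

dotfiles/
  home/
    .bashrc
    .gitconfig
    .bin/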

Now install the castle with homesick.

homesick clone https://github.com/foo/dotfiles.git

Now let homesick do its magic.

homesick symlink dotfiles

All the dotfiles from within this castle will be symlinked.

This is awesome!

  • You feel at home on each of your servers in no time
  • You share your dotfiles with the world
  • You can learn from others and refine your files
  • You can put essential little scripts into a .bin directory.
  • The castle concept allows you to try someone else's dotfiles

A word of caution

Please do not put sensitive information into your dotfiles.
Especially SSH keys, bash histories and passwords in configuration
files do not belong on GitHub.

Try it!

Seriously. It's magical. This is one of those things that you try out and then ask yourself how you could possibly have lived without it.

Interested? Have a look at my dotfiles.

Command line tapas: Write your commands in an editor

I really hate writing long commands directly in the shell. But luckily there's a command for that.

Type:

fc

and your editor will open with the last command you executed.

Edit, save and leave the editor. The command will be executed
automatically.

I use this little trick frequently. Hope you like it as well.

How to deploy Octopress on a Raspberry Pi

The Raspberry Pi is a really inexpensive little computer. It has an ARM
processor, 512 MB of RAM and it runs Linux. I use it to host my website
at home. It's a perfect fit for Octopress.

In this article I explain how I set it up. It should be pretty
simple for everyone already using Octopress, of course 😉

Bootstrapping the Raspberry Pi

First download Raspbian. It's a variant of Debian that is optimized for
the Raspberry Pi.

curl -O http://www.mirrorservice.org/sites/downloads.raspberrypi.org/images/raspbian/2012-12-16-wheezy-raspbian/2012-12-16-wheezy-raspbian.zip

Unzip it:

unzip 2012-12-16-wheezy-raspbian.zip

And then transfer it onto an SD card:

sudo dd bs=1m if=2012-12-16-wheezy-raspbian.img of=/dev/rdisk2

Now put the SD card into your Raspberry Pi and boot.

Afterwards you should change some defaults in the configuration tool for
your Raspberry Pi.

sudo raspi-config

/images/uploads/2013-01/pi.jpg

  • Change the memory split to 16 MB (your server does not need fancy
    graphics)
  • Expand the root partition to fill your SD card
  • Change the password for the pi user

Reboot:

sudo reboot

Time for SSH

Now is the time to use your bigger computer and SSH into your
Raspberry Pi.

/images/uploads/2013-01/ifconfig.jpg

Look up the IP for eth0. In my case it's 192.168.2.102. Another possibility is to use the web interface of your router; usually you will see the IP there as well.

Now log into your pi.

ssh pi@THEIP

The webserver

We need a web server, so let's install one.

sudo aptitude install apache2

Check if it works - enter the IP of your Raspberry Pi into your
web browser. You should see "It works!".

Rsync

Octopress uses rsync in order to deploy its files. You can install it with:

sudo aptitude install rsync

Octopress

Now it's time to configure Octopress.

cd octopress
vim Rakefile

Change the Rsync options to:

## -- Rsync Deploy config --
ssh_user       = "pi@THE_IP_OF_YOUR_RASPBERRY_PI"
ssh_port       = "22"
document_root  = "/var/www/"
rsync_delete   = false
rsync_args     = ""  # Any extra arguments to pass to rsync
deploy_default = "rsync"

Add the authorized key

Copy your public key into your clipboard. If you do not have one yet, you have to generate a key pair first (ssh-keygen will do that for you).

Your key should be stored in ~/.ssh and end with a .pub

ls ~/.ssh/

Copy it into your clipboard.

Then add it to your Raspberry Pi

cd ~
mkdir .ssh
nano ~/.ssh/authorized_keys

Paste your key and save. Then adjust the permissions.

chmod 600 .ssh/authorized_keys

Deploy

We're nearly done now. The webserver runs, you should be able to log in
to your Raspberry Pi without a password, and rsync is ready to go. So
let's deploy.

cd octopress
bundle exec rake gen_deploy

Now reload the browser again.

/images/uploads/2013-01/raspberry_pi_done.jpg

Tada! You're done. You should see your website on the Raspberry Pi now.
