
Organizing git branches in logical folders


It is easier to find things when they are well organized. If you are a git user, a good practice is to name your branches with descriptive names that include slashes "/" (as in paths), where everything before the slash is the folder you want to group under and the logical name comes after it.

If you use the fantastic git graphical interface SourceTree (free for Windows and Mac) then you will be able to navigate these branches using folders.

Of course, if you use advanced tools like the git-flow extensions (strongly recommended in large environments), this kind of behaviour is managed automatically.

An example:

Let's say we usually work in branches when we want to release new features. This is how it would look in SourceTree after creating the following branches:

git checkout -b feature/editor
git checkout -b feature/reset-email
git checkout -b devel
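
Once the branches exist, git itself can also filter them by that prefix. A minimal sketch (the feature/* pattern is just the convention from the example above):

# List only the branches under the "feature" folder
git branch --list 'feature/*'
# Do the same for remote branches
git branch -r --list 'origin/feature/*'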


About the blog

Harecoded is basically a development-oriented blog. The contents here are very basic and tend to be general recipes, some tricks and a little bit of advice. If you are in a startup or are starting to dig into web development, some posts might be handy.

I usually do not go deep into the issues and my dedication to this blog is close to non-existent. I write randomly when I think something is of general interest, keeping only the very essence.

Historically this blog had more contributors (Sergi Ambel and Manuel Aguilar), but now it is managed only by me (Albert Lombarte), the original owner.

 

If you have questions, please post them in the comments! What we write is quite basic, but we can help you out with more complex stuff.

Thanks for reading!

Read more about this blog in Obolog


Automatically archive S3 backups to Amazon Glacier


Amazon S3 is a cloud storage service used in a variety of scenarios. One of these common scenarios is uploading your server backups to S3 using any of the many convenient libraries and tools.

On the other hand, Amazon offers another service more oriented to data archiving and backup, named Amazon Glacier. If you store a lot of data (and I am not talking about a couple of GB) you can save money using Glacier instead of S3. Although the S3 service is sold as "infinitely scalable and highly durable", the bills scale the same way, and if you store terabytes you might consider moving to Glacier.

Now S3 offers an option to automatically move (or delete) data from S3 buckets to Glacier. It is called "Lifecycle" and you'll find it after logging in to the AWS Console, in the bucket's Properties tab.


From there you can create a rule so that when an object is XX days old it is moved from S3 to Glacier automatically (charges apply), or deleted if you prefer.
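
If you prefer to script it instead of clicking through the console, the same kind of rule can also be created with the AWS CLI. This is only a sketch, assuming a hypothetical bucket named my-backups and backups stored under a backups/ prefix. A minimal lifecycle.json could look like this:

{
  "Rules": [
    {
      "ID": "archive-backups",
      "Filter": { "Prefix": "backups/" },
      "Status": "Enabled",
      "Transitions": [ { "Days": 30, "StorageClass": "GLACIER" } ]
    }
  ]
}

And then apply it to the bucket:

aws s3api put-bucket-lifecycle-configuration --bucket my-backups --lifecycle-configuration file://lifecycle.json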

Now it's time to do the math; the difference in pricing between S3 and Glacier is huge:

  • Storing data in S3 (US Standard): $0.125 per GB/month for the first TB (with different rates beyond that).
  • Storing data in Glacier (US East, Virginia): $0.01 per GB/month.
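
A quick back-of-the-envelope calculation with the rates above: 1 TB (1024 GB) of backups costs roughly 1024 × $0.125 = $128/month on S3, versus 1024 × $0.01 ≈ $10/month on Glacier, and the gap only grows with every extra TB (retrieval and transition charges apply, so archive data you rarely need to read back).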



Kill processes using string search


A lot of Linux distributions (and Mac) come with a handy command named pkill installed by default. This command is very useful for killing processes in a more natural way.

Instead of doing a kill/killall based on the process ID or the binary name, you can pass a string that appears anywhere in the process list, including the parameters you used to start a service, by adding the -f flag (match against the full command line).

To kill the process you only have to type pkill -f yourstring.

Example:

$ ps -ef
root 32495 1 0 Sep24 ? 00:09:42 /bin/bash /root/deploy.sh
root 31054 1 0 Sep24 ? 00:09:44 /bin/bash /root/deploy.sh
# Kill both bash processes by matching the full command line:
pkill -f deploy
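
If you are not sure what a pattern will match, it is safer to preview it first with pgrep, which takes the same matching flag; a quick sketch:

# List the PIDs whose full command line contains "deploy" (nothing gets killed)
pgrep -f deploy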

 


Force kill of processes in Windows


Sometimes the Windows Task Manager is not able to kill an in-memory process. We try to close it several times with no luck :(

For these frustrating moments we can make use of a console command named TaskKill.

With TaskKill the pain ends simply with:

taskkill /IM filename.exe /F
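
If you first want to check which processes would be affected, or kill by PID instead of by image name, the same tools cover that too (filename.exe and the PID 1234 below are just placeholders):

tasklist /FI "IMAGENAME eq filename.exe"
taskkill /PID 1234 /F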

More info about taskkill: http://technet.microsoft.com/en-us/library/cc725602.aspx


A true multiline regexp in PHP. The "I miss U" technique

The following regular expression matches tags that are opened and closed on different lines, although it can be used for any other purpose. It is also ungreedy, meaning that matching stops at the first closing tag instead of swallowing everything up to the last one.

It is very easy to remember and apply. I call it the "I MISS YOU" technique; you'll see why in the regexp modifiers: misU

$html = <<<MULTILINE
<p class="interesting">I am the <strong>interesting</strong> text</p>
<p>But this should be ignored</p>
MULTILINE;

// Escape the literal tags so they are safe inside the pattern
// (passing '~' too, so the delimiter gets escaped if it ever appears).
$open = preg_quote( '<p class="interesting">', '~' );
$close = preg_quote( '</p>', '~' );

$pattern = "~$open(.+)$close~misU";
preg_match_all( $pattern, $html, $matches );
var_dump( $matches[1] );
die;
// Displays array(1) { [0]=> string(42) "I am the <strong>interesting</strong> text" }

And the "I miss you" technique is because misU means:

  • m: Multiline modifier (though it is really the "s" modifier below that handles the newlines)
  • i: Case insensitive
  • s: That's the important one; it makes the dot match any character, including newlines
  • U: The ungreedy modifier (must be uppercase; lowercase "u" is for UTF-8)

Note: The "Us" modifiers would be enough for this specific example, but that is less poetic.
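
If you prefer not to rely on the U modifier at all, the same result can be obtained with a lazy quantifier; just a sketch of the equivalent pattern for the example above:

// (.+?) is the lazy (ungreedy) version of (.+), so the U modifier is no longer needed
$pattern = "~$open(.+?)$close~s";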

Both the "Us" and the "misU" are easy to remember. Happy scraping!


Finding abusers. List of most frequent IPs in Apache log

The Internet is full of malware and people with too much free time who will hammer your server with no good intentions. Most of them will try to access well-known URLs looking for exploits in software like WordPress (/wp-admin.php, /edit, etc.).

If you monitor your access_log for a while, it's easy to spot unwanted behaviour. The following command will produce a list of the most frequent IPs in your Apache log, sorted by number of requests:

[root@www3 ~]# cat /var/log/httpd/access_log_20130620 | awk '{print $1}' | sort | uniq -c | sort -rn | head
 912545 95.27.xx.xx
  85151 66.249.78.72
  70448 66.249.78.139
  59450 95.27.40.10
  49649 178.121.54.212
  48295 91.203.166.250
  37894 157.56.92.165
  37028 157.56.92.152
  36094 157.56.93.62
  20707 157.55.32.87

Many of these IPs are bots like Google (66.249.xx.xx) or MSN (157.56.xx.xx), and they should be let in and out at will. But as you can see in the first sample line, there are sometimes IPs that are not recognized bots and still generate surprisingly high traffic on your network.

If you want to identify these IPs, use the whois.net service or install bind-utils so you can use the "host" command and see the reverse DNS. Example:

$ host 66.249.78.72
72.78.249.66.in-addr.arpa domain name pointer crawl-66-249-78-72.googlebot.com.
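
If you want to resolve the whole top-ten list in one go instead of checking each IP by hand, you can feed the same pipeline into host; a rough sketch reusing the command above:

awk '{print $1}' /var/log/httpd/access_log_20130620 | sort | uniq -c | sort -rn | head | awk '{print $2}' | while read ip; do host $ip; done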

If an IP does not have a reverse DNS, chances are it is someone playing nasty. If you detect that these IPs are abusing your system you can always block their access:

iptables -I INPUT -s 95.27.xx.xx -j DROP
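
Keep in mind that rules added like this do not survive a reboot unless you save them, and you may want to undo them later; listing the INPUT chain with line numbers makes that easy (the rule number 1 below is just an example):

# Show the INPUT chain with numbered rules, then delete one by its number
iptables -L INPUT -n --line-numbers
iptables -D INPUT 1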

But this is not a permanent or desired solution at all. If you see this happening a lot, then you might need something more generic, like rate-limiting connections to the machine (careful with bots!!). This is also helpful against some DDoS attacks. Example:

iptables -A INPUT -p tcp --dport 80 -m limit --limit 25/minute --limit-burst 100 -j ACCEPT
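
Note that this rule only accepts traffic while it stays under the limit; anything above the burst is not dropped unless a later rule (or the chain policy) rejects it, so a sketch of the usual pairing would be:

# Accept up to 25 new requests per minute (bursts of 100) on port 80, drop the rest
iptables -A INPUT -p tcp --dport 80 -m limit --limit 25/minute --limit-burst 100 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j DROP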

To better understand limits, see this limits-module article.

Finally, at Obolog we use a fantastic tool called GoAccess that analyzes your logs and presents the information in a good-looking format.


Converting a CSV to SQL using 1 line in bash

The command line is very powerful and can do amazing stuff in a single line by piping a series of commands. This post was inspired by a line that mixed sed and awk, but using only awk I'll show you an example of how to convert a CSV file into SQL inserts.

Let's take an input CSV named events-2013-06-06.csv with 16 columns per line. It looks like this:

k51b04876036e2,192.168.54.67,3a8d6196,2013-06-06,03:29:42,started,,,,no active campaign,Spain,ca-es,,v1.0,1370482182
k51b04876036e2,192.168.54.67,3a8d6196,2013-06-06,03:29:43,first-run,,,,no active campaign,Spain,ca-es,,v1.0,1370482183
k51b04876036e2,192.168.54.67,3a8d6196,2013-06-06,03:30:17,close,34,,,no active campaign,Spain,ca-es,34,v1.0,1370482217
k51b0494a76071,192.168.54.67,febd870c,2013-06-06,03:33:14,started,,,,no active campaign,Spain,ca-es,,v1.0,1370482394
...

I took 16 columns because it is more or less a real-life example. I added some empty columns too to make it more realistic.

And this is the SQL output we want:

INSERT INTO tracking VALUES ('k51b04876036e2',INET_ATON('192.168.54.67'),'3a8d6196','2013-06-06','03:29:42','started','','','','no active campaign','Spain','ca-es','','v1.0','1370482182','');
INSERT INTO tracking VALUES ('k51b04876036e2',INET_ATON('192.168.54.67'),'3a8d6196','2013-06-06','03:29:43','first-run','','','','no active campaign','Spain','ca-es','','v1.0','1370482183','');
INSERT INTO tracking VALUES ('k51b04876036e2',INET_ATON('192.168.54.67'),'3a8d6196','2013-06-06','03:30:17','close','34','','','no active campaign','Spain','ca-es','34','v1.0','1370482217','');
INSERT INTO tracking VALUES ('k51b0494a76071',INET_ATON('192.168.54.67'),'febd870c','2013-06-06','03:33:14','started','','','','no active campaign','Spain','ca-es','','v1.0','1370482394','');
...

The one-liner that produced the output above was:

cat events-2013-06-06.csv | awk -F',' '{ printf "INSERT INTO tracking VALUES (\x27%s\x27,INET_ATON(\x27%s\x27),\x27%s\x27,\x27%s\x27,\x27%s\x27,\x27%s\x27,\x27%s\x27,\x27%s\x27,\x27%s\x27,\x27%s\x27,\x27%s\x27,\x27%s\x27,\x27%s\x27,\x27%s\x27,\x27%s\x27,\x27%s\x27);",$1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16;print ""}'

Don't get scared yet... more generically:

cat your_file.csv | awk -F',' '{ printf "INSERT INTO table VALUES (\x27%s\x27,\x27%s\x27);",$1,$2;print ""}'

The big line decomposed:

  • The awk command scans your buffer/file and splits every line into fields, using the comma that was passed to awk as the column separator. This was:
    awk -F','
    
  • Then awk assigns a numeric variable to every column found, allowing you to write any text using those variables as placeholders. If you want to ignore any of the columns, just don't use them when printing.
    '{ printf "...YOUR TEXT AND PLACEHOLDERS %s HERE...",$1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16;print ""}'
    
  • Then there is plenty of this strange \x27. It is pretty messy to read, but it is just a hex-encoded way of writing single quotes so the shell does not interpret them as the end of the command itself (see the alternative sketch right after this list).
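
As an alternative to all that \x27 noise, awk can receive the quote character through a variable; just a sketch of the same two-column example using -v:

cat your_file.csv | awk -F',' -v q="'" '{ printf "INSERT INTO table VALUES (%s%s%s,%s%s%s);\n", q, $1, q, q, $2, q }'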

As a final note, notice that I used printf in the command instead of print. The difference is that with printf you put %s placeholders in the text, while with print you concatenate the field variables directly. So, if your CSV only has a few fields, the print command may be more readable for you, like this:

cat yourfile | awk -F',' '{ print "INSERT INTO tracking values(" $1 ")" }'

The only precaution here is that $1 must stay outside the quotes.
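
Once the generated statements look right, you can redirect them to a file and load that file straight into MySQL; a quick sketch assuming a hypothetical database named yourdatabase (use the full quoted printf one-liner from above when your fields are real strings):

cat events-2013-06-06.csv | awk -F',' '{ print "INSERT INTO tracking values(" $1 ")" }' > inserts.sql
mysql -u youruser -p yourdatabase < inserts.sql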

Let me know if any of these worked for you! Works like a charm!

And if you want to know more, there is a very cheap reference book for awk and sed:


Migrating a Github repo to Bitbucket (or similar services)

Github is awesome. Bitbucket is awesome too. They are both excellent services, but Bitbucket has a plus: it's free for private repos.

That's one of the reasons why we decided to stop paying for our $25/mo Github account for small projects and moved to Bitbucket. Although the Bitbucket guys now have a one-click "import from Github" tool, the manual solution is so simple that I don't even think it is worth using it: at the end of the day, even if you use it, you'll need to change by hand where your origin points anyway.

The following example is a Github to Bitbucket migration keeping the same working copy (no new clone needed). You can use this procedure with any service though; it works with any git repo.

First of all, create your new blank repository, then go to your shell and run:

git pull
git push --mirror git@bitbucket.org:your_user/your_repo.git
git remote set-url origin git@bitbucket.org:your_user/your_repo.git
# or the same using HTTPS:
# git remote set-url origin https://bitbucket.org/your_user/your_repo.git

 These three lines do the following:

  • Get the last changes from your current origin
  • Copy the entire repository (with all the commit data) to the new origin
  • Change the old origin to the new location

After this you can git pull/push/whatever as if nothing happened. You don't need to clone anything again. You have Github migrated to Bitbucket!
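
To double-check that origin now points to the new location, you can list the configured remotes; the expected output would look roughly like this:

git remote -v
# origin  git@bitbucket.org:your_user/your_repo.git (fetch)
# origin  git@bitbucket.org:your_user/your_repo.git (push)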

 

 


Filling a column with a random hash in MySQL

We have a few hundred records and we want to create a hash so we can access them directly and in an encrypted way.

Imagine, for example, the typical users table where a field contains a hash to store in cookies and do cookie-based autologin.

When you create the new attribute it is left empty, so you'll need this little query to generate random hash codes very quickly:

UPDATE `users` SET autologin_hash = MD5(RAND()) WHERE autologin_hash IS NULL;
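
If you want something a bit harder to guess than the MD5 of a random float, and your MySQL build has the SHA2() function available (5.5 and later), a similar one-liner works; just a sketch:

UPDATE `users` SET autologin_hash = SHA2(CONCAT(RAND(), UUID()), 256) WHERE autologin_hash IS NULL;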

Easy, eh? ;)

 


Migrate Posterous without losing the images

It might seem very obvious to you that if you migrate your Posterous blog to another service, your images should be migrated as well.

If you want a free service (as Posterous was), there are only two options you can migrate your Posterous blog to without rewriting all your posts one by one:

1) Wordpress.com (but losing all the images)
2) Obolog.com (and keeping all the images)

So, if you want your blog back including the images, the only option you have is Obolog. There are no other free services on the net (or at least I didn't find any) where you can bring your Posterous blog back to life. If you have already migrated to Wordpress, go and see where your images are pointing to. You'll be disappointed, because all the posts reference the Posterous servers, and they will be shut down in a few days.

If you still wonder what Obolog is, you are just reading one right now. Harecoded is powered by Obolog and it has been running smoothly for several years so far.

If you want to try it, all you have to do is upload your ZIP backup file into the Posterous migration script. But remember: after April 30th, if you don't have the ZIP, everything will be gone for good.