First time with Logstash, Elasticsearch and Kibana

On February 2nd my colleague Simonas Šerlinskas presented the topic “Logs” at a VilniusPHP event from an interesting perspective. In 30 minutes he presented a way to grab and analyze huge amounts of logs, with nice graphical visualization, using Logstash, Elasticsearch and Kibana.

I had no idea that I would need that trio the very next day for some log analysis. It’s nice to have such powerful tools installed, configured and running in a matter of hours, ready to accept and analyze data. Of course, most of the settings were defaults, there was no high availability and almost zero security (in-house, closed VM), but the results were worth the time spent. Logstash + Elasticsearch + Kibana just did the job and were then simply wiped.

I wish I had had something like that many years ago… but.

No space left on device: Small files and inodes

I’ve run out of “free space” on build, test and staging servers a few times in the last year, with relatively small projects based on Symfony 2 or Zend Framework 2.

The frameworks used are rather small:

  • Symfony (2.4): 6450 files, 1283 folders, 46788608 bytes (apparent size 29894665)
  • Zend Framework (2.2): 2421 files, 427 folders, 17498112 bytes (apparent size 10912260)

So framework or project files are not the issue, even if you build, test and deploy many times per day without removing previous releases (a deployment process issue, fixed first). I’m talking about file size here.

So when you run out of free space, you log into the server and type:

df -h

and see that half of the partition is empty (sometimes more), but when you try to create a new file you get: “No space left on device”.

But why? But how?

In my case it was the inode count: I had run out of inodes on my partition. To see inode usage, type:

df -i

An inode (index node) is a data structure used to represent a filesystem object. Read more on Wikipedia or use a search engine to find more about inodes.

On the troublemaking servers I had used default settings for my filesystems.

For example: if you have Ubuntu 13.10 and a 4 GB partition formatted with the ext3 filesystem, you will have 262144 inodes.
I tried to copy Zend Framework 2 onto that partition: 92 good copies, 1 corrupted copy, 2.2 GB free and out of inodes – a waste of disk space. With Symfony 2 I got 33 copies before running out of inodes.
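The 262144 figure falls straight out of the mke2fs default bytes-per-inode ratio, which is usually 16384 (taken from /etc/mke2fs.conf; treat the exact value as an assumption for your distro):

```shell
# One inode is allocated per 16384 bytes by default, so a 4 GiB partition
# gets 4 GiB / 16384 = 262144 inodes at format time.
echo $((4 * 1024 * 1024 * 1024 / 16384))
```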

How to solve this? Buy a bigger drive, or increase the inode count when you create the filesystem on the partition.

I’ll try to calculate the optimal inode count for a 4 GB partition with an ext3 filesystem, maximizing the number of copies of each framework. It’s a synthetic example, but if you automate builds of many projects with a similar file count and size ratio, this might help.

The partition size is about 3781115904 bytes, so we can fit ~80 copies of Symfony 2 or ~216 copies of Zend Framework 2. Symfony 2 will require about 618640 inodes and Zend Framework 2 about 615168 inodes (one inode per file or directory). Let’s create an ext3 filesystem on the 4 GB partition with 620000 inodes. For example:

mkfs.ext3 -N 620000 /dev/sdb1

I then copied Zend Framework 2 onto that partition: 216 good copies; with Symfony 2 I got 79 copies – more than twice as many as before.
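The -N 620000 budget can be sanity-checked with shell arithmetic; the per-copy file and directory counts are the ones measured above:

```shell
# One inode per file or directory:
ZF2_PER_COPY=$((2421 + 427))      # 2848 inodes per Zend Framework 2 copy
SF2_PER_COPY=$((6450 + 1283))     # 7733 inodes per Symfony 2 copy
echo "216 ZF2 copies need $((216 * ZF2_PER_COPY)) inodes"
echo "80 SF2 copies need  $((80 * SF2_PER_COPY)) inodes"
```

Both totals land just under 620000, which is why that -N value accommodates either framework.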

Another way to calculate the inode count for a partition: the average file size in your project. Zend Framework 2 averages 7227 bytes per file, Symfony 2 7254 bytes, so on a 3781115904-byte partition we might fit up to 522253 files (at an average of 7240 bytes per file).
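That average maps directly onto the mke2fs -i flag, which takes a bytes-per-inode ratio instead of an absolute count (a sketch; /dev/sdb1 is the same example device as above):

```shell
# Partition bytes divided by average file size gives the inode budget:
echo $((3781115904 / 7240))
# Equivalent at format time (commented out – it would wipe the device):
# mkfs.ext3 -i 7240 /dev/sdb1
```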

Conclusion: default filesystem settings are not always the best choice for build, test or staging servers. Look at the projects you will place on your servers and do some calculations – you might get better disk space usage for the same price. Don’t forget that you might need to place the Composer cache somewhere on your build server – PHP projects/frameworks/libraries consist of quite a lot of small files these days (development versions even more so) – this knowledge might be handy.

These calculations might not be suitable for production servers – user-uploaded content may change the average file size, and a large inode count might become a penalty. I have never tested whether there are performance penalties (or other drawbacks) when you increase the inode count.

Don’t forget that these rules apply only to filesystems with a fixed inode table, like ext2 and ext3. ext4 may behave differently (depending on settings), and there are filesystems without inodes too.

Front-end package managers

What I missed in 2013: a front-end package manager.

I just hate going to vendor websites, downloading zip/tgz/tar.bz2 files, unpacking them, copying files, placing them in the right folders, and so on. Every time you need to install a new front-end library or framework, you do the same. Every time you need to update something, you do the same.

So, a short list of front-end package managers:

A useful quick comparison of them was done by Wil Moore III on his GitHub account: Front-End Package Manager Comparison.

Everything looks great, with one exception: package support from vendors. Not all packages are present in all managers, some are outdated versions, and not all vendors want to support package managers.

I hope this will change in a matter of months, or a year.

Encrypt password on Ubuntu/Debian

Sometimes I need to encrypt (strictly speaking, hash) a password the same way as it’s done in /etc/shadow – for example, to place it in a Puppet config. There are plenty of ways to do it; here is my favorite: mkpasswd.

# Install "whois" package in case we don't have it
sudo apt-get install whois
# Get available encryption methods
mkpasswd -m help
# Encrypt using SHA-256
mkpasswd -m sha-256
# Encrypt using SHA-512
mkpasswd -m sha-512
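If mkpasswd is not at hand, OpenSSL (1.1.1 or newer) can produce the same SHA-512 crypt format non-interactively; the password and salt below are placeholder values:

```shell
# Produces the $6$<salt>$<hash> shape that lands in /etc/shadow.
# With a fixed salt the output is deterministic.
openssl passwd -6 -salt saltsalt S3cretPass
```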

Composer/Satis and GitHub Rate Limits

Composer/Satis and GitHub rate limits – I hit this issue today. 60 requests per hour is not much when you use Composer to build a project a few times, or when you try to build a local package repo with Satis with an empty Composer cache.

Actually, I wonder why I didn’t hit the rate limits (introduced in October 2012) earlier, with continuous integration building the project a few times per hour. Possibly the Composer cache saved the day (its TTL is about 6 months by default). But I see many requests (or issues) about this around the Internet… and I spent a couple of hours solving it today.

One of the best solutions I’ve found: Alister Bulman – Avoiding Composer Being Rate-limited by Github. It works perfectly with Composer but failed with Satis (at least for now).

After a few “var_dump”s of Satis and Composer, I found that Composer reads a “global” configuration file from the “COMPOSER_HOME” directory (next time, do some RTFM: COMPOSER_HOME/config.json) and merges it with the local project settings.

So, if you place a GitHub OAuth key, created per Alister Bulman’s instructions, into the COMPOSER_HOME/config.json file, you won’t need to place it anywhere else until you hit the 5000-request limit.

Example of COMPOSER_HOME/config.json:

{
    "config": {
        "github-oauth": {
            "github.com": "<your GitHub OAuth Key>"
        }
    }
}
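The same file can be created from the shell; a sketch assuming the default COMPOSER_HOME location (the token value stays a placeholder):

```shell
# Write the GitHub OAuth token into Composer's global config.
export COMPOSER_HOME="${COMPOSER_HOME:-$HOME/.composer}"
mkdir -p "$COMPOSER_HOME"
cat > "$COMPOSER_HOME/config.json" <<'EOF'
{
    "config": {
        "github-oauth": {
            "github.com": "<your GitHub OAuth Key>"
        }
    }
}
EOF
```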

From now on, my Composer and Satis work fine. This might help if you use continuous integration servers.

More about COMPOSER_HOME directory.

Some obvious things, learned the hard way.

Zend Framework 2 Cache Storage Factories

If you are making an application with Zend Framework 2 (version 2.2) and you need some data caching, you will find good examples of Zend\Cache usage, but not much about cache initialization or how to store cache configuration.

If you are able to use Google or Bing (or any other modern search engine), you will find many examples of initializing the cache using closures, your own factories or other fun ideas… but!

ZF2 provides two great ways to store configuration for cache storage adapters and initialize cache services through the ServiceLocator:

  1. Zend\Cache\Service\StorageCacheFactory
  2. Zend\Cache\Service\StorageCacheAbstractServiceFactory

Both of them are just service factories that can be used easily through the ServiceLocator. Storage adapter settings can be set in any configuration file you like (from any module.config.php to config/autoload/any.local.php).

The latter, StorageCacheAbstractServiceFactory, is enabled by default in ZendSkeletonApplication.

If you use one cache adapter throughout your application, you can just do three things:

1) Add a service with the desired name for the cache service, initialized by Zend\Cache\Service\StorageCacheFactory, somewhere in your application or module config file:

return array(
    'service_manager' => array(
        'factories' => array(
            'Application\Cache' => 'Zend\Cache\Service\StorageCacheFactory',
        ),
    ),
);

2) Add the cache adapter configuration in your configuration files under the key ‘cache’:

return array(
    'cache' => array(
        'adapter' => array(
            'name' => 'filesystem',
        ),
        'options' => array(
            'cache_dir' => 'data/cache/',
            // other options
        ),
    ),
);

3) Use the cache adapter through the ServiceLocator:

// example from a controller
/** @var \Zend\Cache\Storage\StorageInterface $cache */
$cache = $this->getServiceLocator()->get('Application\Cache');

If you need many different cache adapters with different settings, use StorageCacheAbstractServiceFactory:

1) Add the abstract service factory somewhere in your application or module config:

return array(
    'service_manager' => array(
        'abstract_factories' => array(
            'Zend\Cache\Service\StorageCacheAbstractServiceFactory',
        ),
    ),
);

2) Add the cache adapters’ configuration in your configuration files under the key ‘caches’, as an associative array of “ServiceName => settings array”:

return array(
    'caches' => array(
        'CacheServiceOne' => array(
            'adapter' => array(
                'name' => 'filesystem',
            ),
            'options' => array(
                'cache_dir' => 'data/cache_dir_one/',
                // other options
            ),
        ),
        'CacheServiceTwo' => array(
            'adapter' => array(
                'name' => 'filesystem',
            ),
            'options' => array(
                'cache_dir' => 'data/cache_dir_two/',
                // other options
            ),
        ),
        // more cache adapter settings
    ),
);

3) Use the cache adapters through the ServiceLocator in your application:

// example from a controller
/** @var \Zend\Cache\Storage\StorageInterface $cacheOne */
$cacheOne = $this->getServiceLocator()->get('CacheServiceOne');
// ...
/** @var \Zend\Cache\Storage\StorageInterface $cacheTwo */
$cacheTwo = $this->getServiceLocator()->get('CacheServiceTwo');

That’s all: no magic, everything configurable and no closures – only factories.

Cover Photo

Difference engine at the Computer History Museum

Fully operational difference engine at the Computer History Museum in Mountain View, California


This morning I remembered the photos taken in the USA back in 2010, while visiting the Web 2.0 Expo in San Francisco, California. After that event we visited the Computer History Museum, and I took a photo of the Babbage Difference Engine. This is part of it: the number wheels :-)

Impressive thing, isn’t it?

It’s O.K. It’s a tiny change!

It’s O.K. It’s an appearance update and WordPress upgrade.

It takes a year to write something here, even something as short as this post. Most blog posts become irrelevant in a matter of weeks – while you find the time to write them and distil the idea – so they die in agony somewhere in Trash or Google Drive/Docs. Oh, and we have micro-blogging and social networks around as time killers…

It took four years (damn, time goes fast o_O) to find the time to update WordPress on this blog. Glad WordPress is still alive and has become better with version 3.x. It was never so easy to bring order to this chaos.

I’ve removed some old texts in Russian about Apache and PHP installs, and some other irrelevant old stuff.

Now I need a cool photo for the website’s header (I don’t have one even on my Facebook :-D ). Hope to find one before the end of the year ;-)