All the books I read in January

A few days ago, I came across a link to a post by Thomas Oppong where he shares the techniques he uses to read an entire book in a single day. I jumped at it like a hungry lion, eager to uncover secrets that would help me read more and make the most of my time.

The importance of reading can never be overemphasized. There are few ways to become a good developer, and one of them is to read books. We gain a deeper understanding of development by reading and learning from people ahead of us.

January has always been a month of new resolutions for some people, me included. I did make a lot of promises, and one of them was to study like crazy and work out like hell until the gym instructors chase me out of the gym to rest :).

Unfortunately, the gym instructors haven’t chased me out yet, and I have not read as much as I wanted to, but I do have some books that added value to my life in January and would love to share them with you.

It’s likely you have read some of them already, but if you have not, you should definitely check them out. They are worth it.

Lumen Programming Guide: Writing PHP Microservices, REST and Web Service APIs – Paul Redmond

This book explains everything you need to know, from setting up your development environment to creating rich APIs with the Lumen framework. If you want to write testable APIs with Lumen, this is the book for you.

PHP 7 Programming Cookbook – Doug Bierer

I’ve been interested in scalability for some time now, and this was one of the books I enjoyed reading. It uncovers PHP 7 performance features, exposes ways to build scalable websites, and sheds light on middleware, design patterns and web security. The author also delves into advanced algorithms in PHP, best practices, and patterns. If you have not read this book, I encourage you to do so.

Developing Microservices with Node.js – David Gonzalez

If you have ever wanted to write microservices in Node.js, David Gonzalez has got your back. He uncovers everything you need to know to build your first microservice in Node.js using Seneca.

Node.js for PHP Developers – Daniel Howard

This book is particularly good for PHP developers who want to learn Node.js or convert a PHP project to it. It takes you through all the steps to get started building a Node.js application based on your existing knowledge of PHP.

Scaling PHP Apps – Steve Corona

Of all the books I read in January, hats off to Steve Corona for this one. He narrates his experience scaling TwitPic and covers everything from DNS and MySQL to Nginx. The book is good for every PHP developer, especially if you work at a startup. Personally, this is one of the books I will keep coming back to as the need arises.



3 ways to boost your PHP backend performance

Performance is something I have been interested in for a while now. I am concerned not only with writing code that works, but also with code that performs well in the face of large inputs or heavy traffic.

Since performance is a function of all the integral parts of a system, in this post I highlight a few existing ways to improve your PHP backend performance. Whether you are refactoring a legacy application or starting a new project entirely, you will find this post helpful.

Make use of Generators

Generators are simple iterators, introduced in PHP 5.5.0, that are more memory efficient. A standard PHP iterator works on a data set that is already loaded into memory, which is a very expensive operation for large inputs.

Generators are more memory efficient for such operations because they compute and yield values on demand.

Many applications accept and process Excel/CSV files uploaded by users. The performance cost for a very small CSV file with a few lines might be inconsequential, but it becomes very expensive for a large file, as shown in the example below.
To demonstrate this, we will grab a 25MB CSV file from this GitHub repository. It contains names of cities around the world.

function processCSVUsingArray($csvfile)
{
    $array = [];
    $fp = fopen($csvfile, 'rb');
    while (feof($fp) === false) {
        // Every row is appended, so the entire file ends up in memory.
        $array[] = fgetcsv($fp);
    }
    fclose($fp);
    return $array;
}

//using generator

function processCSVUsingGenerator($csvfile)
{
    $fp = fopen($csvfile, 'rb');
    while (feof($fp) === false) {
        // Rows are yielded one at a time, so only the current row sits in memory.
        yield fgetcsv($fp);
    }
    fclose($fp);
}

Now, we compare the performance of the two functions using this code:

$time = microtime(TRUE);
$mem = memory_get_usage();

$file = "cities.csv";

// Swap in processCSVUsingGenerator($file) to benchmark the generator version.
foreach (processCSVUsingArray($file) as $row) {
    //do something with $row;
}

print_r([
    'memory' => (memory_get_usage() - $mem) / (1024 * 1024),
    'seconds' => microtime(TRUE) - $time,
]);
Function processCSVUsingArray exited with a fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 4096 bytes), while processCSVUsingGenerator used 0.00014495849609375MB of memory and took 10.339377164841 seconds to complete.

With a generator, you can use foreach to iterate over a set of data without needing to build an array in memory, which may cause you to exceed a memory limit, or require a considerable amount of processing time to generate.

Always Cache your data

Caching is an important aspect of optimization, and you rarely talk about performance without it on the list. Is it really necessary to make a request to the database or an external API every time a certain value is needed? Sometimes the answer is a capital NO, and this is where caching comes into play.

For instance, a list of countries, states or provinces is unlikely to change over a certain period of time, so such data should be kept in a cache for faster retrieval.

Personally, I have found that coming up with a list of how frequently values are computed or updated in my application allows me to fully harness the power of caching.

Depending on your application’s use case, your own list will look different. For instance, if you are building an application where users can add, edit or delete countries, caching that data for a month is obviously a bad idea.
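
To make the idea concrete, here is a minimal sketch of caching a slow-changing country list, assuming the APCu extension is available; fetchCountriesFromDatabase() and the one-month TTL are hypothetical placeholders, and any other cache backend (Redis, Memcached, a framework cache) would follow the same pattern.

function getCountries()
{
    $cacheKey = 'countries';
    $ttl = 30 * 24 * 3600; // roughly one month; adjust to how often the data changes

    // Serve the cached copy when one exists.
    $countries = apcu_fetch($cacheKey, $hit);
    if ($hit) {
        return $countries;
    }

    // Otherwise run the expensive query once and store the result.
    $countries = fetchCountriesFromDatabase(); // hypothetical query function
    apcu_store($cacheKey, $countries, $ttl);

    return $countries;
}

The expensive lookup now runs only when the cached entry is missing or expired; every other request is served straight from memory.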

Enable OpCache

OpCache has been around since PHP 5.5. PHP is an interpreted language: scripts are parsed and compiled at runtime. This process takes both compute time and resources, which results in some performance overhead because every request goes through the same cycle.

OpCache was built to optimize this process by caching precompiled bytecode such that the interpreter does not have to read, parse or compile PHP scripts on every request.

By default, OpCache is disabled in Apache.

To enable OpCache in Apache, update php.ini with this configuration:

opcache.enable=1
opcache.revalidate_freq=240
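
Depending on how PHP was installed, you may also need to load the extension and tune a few related settings. A fuller php.ini block might look something like the sketch below; the values are illustrative assumptions, not settings from the original post.

; Load the extension if it is not already loaded (file name/path varies by platform)
zend_extension=opcache.so

opcache.enable=1
; Also cache bytecode for CLI scripts, if desired
opcache.enable_cli=1
; Memory reserved for the bytecode cache, in megabytes
opcache.memory_consumption=128
; Maximum number of scripts that can be cached
opcache.max_accelerated_files=10000
; Re-check file timestamps for changes at most every 240 seconds
opcache.revalidate_freq=240

You can confirm the cache is active by checking the OpCache section of phpinfo() or the output of opcache_get_status().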


