Author: Samuel James

Reducing response time to bugs in production

One thing is common to most startups: products need to be built fast and pushed out as soon as possible, while still delivering the best possible customer experience. The problem is that bugs can easily find their way into production.

If you are a developer, you will agree with me that sh*t does happen in production. Bugs find their way there, and some are bad enough to undermine your effort to deliver a great customer experience.

Some bugs are difficult to discover even in the best test environments, and nothing uncovers them except real users of the application. Users will never use your application exactly the way you intended anyway; there will always be some variation in the way people use things.

But a quick response to customers’ problems will ultimately increase their satisfaction and trust.

“The organizations that enable their teams to quickly respond to problems in production code create the highest quality software.”
                 – Sifter Software Quality Academy

Bugs come in many forms; some come from third-party SDKs or libraries you import into your code. It has happened to me on several occasions, and the most recent came from the AWS PHP SDK.

On that particular day, I got a report that users were unable to upload files. That was really strange. I finally tracked the bug down and discovered that a hotfix we had pushed to production had triggered a composer update, which updated the AWS PHP SDK to v3.31.0. That particular version of the SDK has a bug and throws an exception whenever you try to upload files to an S3 bucket.

It dawned on me that we were not doing something right. Our response time was too slow. Why must customers report a problem before we know about it?

What could we have done better?

Errors capable of impacting user experience like this should be discovered long before users report them.

Yes, exceptions are logged, but logs are not checked often. I’m yet to meet a developer who consistently checks logs, especially on weekends, in the hope of finding and tracking down a bug. If you know any, shoot me a mail – I must build a startup with them 🙂

Email Notification to our Rescue

I respond to emails pretty fast; if you send me a mail, chances are high that it will be read within two minutes of arriving. The same goes for everyone on the team.

This means we can easily turn ourselves into a SWAT team of developers who get real-time alerts whenever the unexpected happens. So I made a modification to the way exceptions are handled in the code.

Here is the modification I made to the exception handler.

public function report(Exception $exception)
{
    if (App::environment('production') && $this->shouldReport($exception)) {
        $flattenException = FlattenException::create($exception);

        $url = url()->current();
        $ip = request()->ip();
        $input = request()->all(); // get user input if any
        $user = isLoggedIn() ? user() : null; // get user details if he's logged in (custom helpers)

        $handler = new \Symfony\Component\Debug\ExceptionHandler();
        $debug_info = $handler->getHtml($flattenException);

        // the recipient address here is illustrative; use your team's alert address
        Mail::to(config('mail.alert_address'))
            ->send(new ErrorExceptionMail($debug_info, $url, $ip, $input, $user));
    }

    parent::report($exception); // do traditional logging, stackify, logmatic etc
}

Not all exceptions are reported; we don’t want our mailboxes filled up with 404 exceptions, validation exceptions or authorization exceptions, so those are filtered out.
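As a rough sketch of that filtering idea in plain PHP (the ignore list below is illustrative; in a real Laravel app you would list the actual exception classes, such as NotFoundHttpException or ValidationException, in the handler’s $dontReport array instead):

```php
<?php
// Sketch: decide which exceptions deserve a real-time email alert.

function shouldEmail(Throwable $e, array $ignored): bool
{
    foreach ($ignored as $class) {
        if ($e instanceof $class) {
            return false; // noisy, expected exception: just log it
        }
    }
    return true; // unexpected exception: alert the team
}

// Stand-in for 404 / validation / authorization exception classes.
$ignored = [InvalidArgumentException::class];

var_dump(shouldEmail(new RuntimeException('S3 upload failed'), $ignored)); // bool(true)
var_dump(shouldEmail(new InvalidArgumentException('bad input'), $ignored)); // bool(false)
```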

Through this, we have been able to reduce our response time by 90%, because everyone is alerted as soon as there is a problem.

Whether you are on your couch reading what is new in Laravel 5.6 or passing some time on the beach with hot bikini-wearing models, you’ll get a notification.

This may not be the best way to handle errors in a production environment, but it can be helpful if you are running a small team at a small startup.

Is there anything you think we could have done better?  Feel free to share your thoughts or share something new with me. 🙂




3 ways to boost your PHP backend performance

Performance is something I have been interested in for a while now. I am concerned not only with writing code that works, but also code that performs well in the face of large input or heavy traffic.

Since performance is a function of all the integral parts of a system, in this post I emphasize a few existing ways to improve your PHP backend performance. Whether you are refactoring a legacy application or starting a new project entirely, you will find this post helpful.

Make use of Generators

Generators are simple iterators, introduced in PHP 5.5, that are more memory efficient. A standard PHP iterator iterates over a data set already loaded in memory; this is a very expensive operation for large inputs, as they have to be loaded into memory in full.

Generators are more memory efficient for such operations due to their ability to compute and yield values on demand.

Most applications accept and process Excel/CSV files uploaded by users. The performance cost for a very small CSV file with a few lines might be inconsequential, but it becomes very expensive for a large file, as shown in the example below.
To demonstrate, we will grab a 25MB CSV file from this GitHub repository. It contains names of cities around the world.

function processCSVUsingArray($csvfile)
{
    $array = [];
    $fp = fopen($csvfile, 'rb');
    while (feof($fp) === false) {
        $array[] = fgetcsv($fp);
    }
    fclose($fp);
    return $array;
}

//using generator

function processCSVUsingGenerator($csvfile)
{
    $fp = fopen($csvfile, 'rb');
    while (feof($fp) === false) {
        yield fgetcsv($fp);
    }
    fclose($fp);
}
Now, we compare the performance of the two functions using this code:

$time = microtime(TRUE);
$mem = memory_get_usage();

$file = "cities.csv";
foreach (processCSVUsingArray($file) as $row) {
    //do something with row;
}

print_r([
    'memory' => (memory_get_usage() - $mem) / (1024 * 1024),
    'seconds' => microtime(TRUE) - $time,
]);

Function processCSVUsingArray exited with a fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 4096 bytes), while processCSVUsingGenerator used 0.00014495849609375MB of memory and took 10.339377164841 seconds to complete.

With a generator, you can use foreach to iterate over a set of data without needing to build an array in memory, which may cause you to exceed a memory limit, or require a considerable amount of processing time to generate.

Always Cache your data

Caching is an important aspect of optimization, and you rarely talk about performance without it on the list. Is it really necessary to make a request to the database or an external API every time a certain value is requested? Sometimes the answer is a capital NO, and this is where caching comes into play.

For instance, a list of countries, states or provinces is not likely to change over a certain period of time; such data should be kept in a cache for faster retrieval.

Personally, I have found that drawing up a list of how frequently each value is computed or updated in my application allows me to fully harness the power of caching.

Depending on your application’s use case, your list will be different. For instance, if you are building an application where users can add, edit or delete countries, caching such data for a month is obviously a bad idea.
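The cache-aside pattern behind all of this can be sketched in plain PHP. This is a toy in-memory version just to show the mechanics; in a real app the store would be Redis or Memcached (e.g. Laravel’s Cache::remember), and the fetch function would be your database query or API call:

```php
<?php
// Minimal cache-aside sketch with a TTL, using a plain array as the store.

function remember(array &$cache, string $key, int $ttl, callable $compute)
{
    $now = time();
    if (isset($cache[$key]) && $cache[$key]['expires'] > $now) {
        return $cache[$key]['value'];  // cache hit: skip the expensive call
    }
    $value = $compute();               // cache miss: hit the DB / external API
    $cache[$key] = ['value' => $value, 'expires' => $now + $ttl];
    return $value;
}

$cache = [];
$calls = 0;
$fetchCountries = function () use (&$calls) {
    $calls++;                          // pretend this is a slow DB query
    return ['Nigeria', 'Ghana', 'Kenya'];
};

remember($cache, 'countries', 3600, $fetchCountries);
remember($cache, 'countries', 3600, $fetchCountries);
// $calls is 1: the second lookup was served from the cache
```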

Enable OpCache

OpCache has been around since PHP 5.5. PHP is an interpreted language: scripts are parsed and compiled at runtime. Both compute time and resources are needed for this process, which results in performance overhead, as every request goes through this loop.

OpCache was built to optimize this process by caching precompiled bytecode such that the interpreter does not have to read, parse or compile PHP scripts on every request.
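If you want to confirm at runtime whether OPcache is actually active, a small check like this works (note that on the CLI, OPcache is off by default unless opcache.enable_cli=1 is set):

```php
<?php
// Sketch: check whether the Zend OPcache extension is loaded and enabled.

function opcacheEnabled(): bool
{
    if (!function_exists('opcache_get_status')) {
        return false; // extension not loaded at all
    }
    $status = opcache_get_status(false); // false = don't include per-script stats
    return $status !== false && !empty($status['opcache_enabled']);
}

echo opcacheEnabled() ? "OPcache is on\n" : "OPcache is off\n";
```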

Depending on your installation, OpCache may be disabled by default. To enable it, update php.ini with this configuration:

opcache.enable = 1
opcache.revalidate_freq = 240


iamtheCode hackathon: Nigeria Edition

This 2-day hackathon was organized in Nigeria in partnership with iamtheCode.

My team chose track 2, which was to stop human trafficking with the use of technology, as well as raise awareness about anti-slavery and sex labor.

With the API at our disposal, a lot of ideas sprung up from my team members.

Main Idea

Our idea was to create a simple mobile and web app that could be used to track and arrest traffickers around the world. We believe trafficking is not a one-man thing; there are often witnesses around during the course of this nefarious act.

With a robust reward system for people who report the act, witnesses can use their smartphones to take pictures of trafficking in hotels and brothels and upload them via either the whistleblower web app or the mobile app. The geo-location coordinates are sent to the backend along with the pictures, which can be used to nab the suspects.

Uploaded pictures can be processed using image-processing technologies and compared with known pictures of places from the API.

Victims without smartphones can use USSD or send the keyword “Trafficking and Location” to a Twilio phone number (+12564726672), which is then routed to our API backend for processing.

The code for the app and the APK file can be found here.

We came third at the end of the hackathon.


How We Built an Intrusion Detection System on AWS using Open Source Tools

It’s roughly a year now since we built an intrusion detection system on AWS cloud infrastructure that provides security intelligence across selected instances using open source technologies.

As more instances were spun up, real-time security monitoring became necessary. We wanted the capability to detect when someone attempts an SQL injection, an SSH brute force, a port scan, and so on. We didn’t even want a ping request to go unnoticed, if it was possible to ping any of the instances from the public internet. Finally, we wanted to centralize security logs from multiple EC2 instances, which would then be visualized with Kibana.

AWS supports third-party IDS/IPS tools like Trend Micro and Alert Logic, to name a few, which are pretty good. However, we wanted to explore the possibility of getting close to what they offer using the open source tools at our disposal. Some of the best NIDS and HIDS available (Snort, Suricata, OSSEC) are open source and actively supported by a large community. We could have installed them separately on each EC2 instance, but that would have defeated our aim of having a centralized log of all security events and would also have brought maintainability issues.


Security Onion has been around for a while. It’s a project started by Doug Burks that finds good use in monitoring home networks, but its usage in the cloud is an area that has not been fully explored. It’s a Linux distro based on Ubuntu and comes with Snort, Suricata, Bro, OSSEC, Sguil, Squert, ELSA, Xplico and NetworkMiner. In short, it’s bundled with all the tools one would need for a powerful and free network monitoring system. I will not dwell on how to set up Security Onion because of the already existing and comprehensive documentation, which can be found on the Security Onion wiki page.

Having read the SANS paper titled Logging and Monitoring to Detect Intrusions and Compliance Violations with Security Onion, we decided to try it out on a VPC with 2 subnets and a few EC2 instances. These instances power public-facing websites and witness low to moderate traffic each day.

Since Security Onion works by analyzing traffic and logs on the host machine, in order to get analysis for an instance, one must first find a way to mirror traffic from all the instances to the Security Onion sensor.

AWS does not provide a tap or SPAN port, at least none that I know of. So we made use of netsniff-ng as a virtual tap, which copies traffic from the instance onto an OpenVPN bridge that transports it to the Security Onion sensor, where it is then analyzed.

On each instance, there is an OSSEC agent and a virtual tap. The OSSEC agent provides host intrusion detection (HIDS): it monitors events happening at the host level and reports back to the Security Onion server via the OSSEC encrypted message protocol. The virtual tap mirrors traffic at the interface level and forwards it via the OpenVPN bridge to the Security Onion server for analysis, serving as network intrusion detection (NIDS).



Security Onion not only supports analyst tools like Squert, Sguil and ELSA, which can be used to access real-time events, session data and raw packet captures, but also the ELK stack, as at the time of writing this post.


Note: I also published this post on Medium.


My team won the last UN hackathon in Africa


My team won the United Nations hackathon that took place in 9 regions of the world in 2017.

We had spent two nights working on an integrated transport system that gives commuters real-time information on bus routes, stops and fares via a mobile application; we tagged this “Smart Transit”.

The prototypes we pitched included a mobile and a web application integrated with the Twilio API, enabling users without smartphones to interact with the service via a USSD code.

A big thank you to my awesome team members for the hard work.


© 2018 Samuel James
