Doug Toppin's Blog

My thoughts on technology and other stuff

Engineers and Coffee Podcast

There is a fascinating discussion toward the end of this Engineers & Coffee episode about Dropbox moving off of AWS. They mention a number of very good points, including spending money simply to move to their own hosting rather than adding features, shifting spending to capital expenditures (CapEx) to make their IPO appear more attractive, and several others.

There are some interesting details to consider about letting your cloud costs escalate; object storage and data movement are examples. Your architecture planning and design should include not only near-term requirements and needs but also a long-term view. The long-term view means estimating the eventual size of your data given some amount of success in bringing in customers. Consider what gigabytes and terabytes of storage might mean in costs, and whether you can adopt approaches that will not incur significant costs later if you are successful.
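The kind of long-term estimate described above can be sketched with a few lines of code. The rates below are made-up placeholders, not actual provider pricing, and the per-customer numbers are assumptions; substitute your provider's current rates and your own usage profile.

```python
# Illustrative sketch: projecting object-storage and egress costs as a
# customer base grows. All rates below are assumed placeholders.
STORAGE_RATE_PER_GB_MONTH = 0.023   # assumed $/GB-month of storage
EGRESS_RATE_PER_GB = 0.09           # assumed $/GB transferred out

def monthly_cost(customers, gb_stored_per_customer, gb_egress_per_customer):
    """Estimate one month's storage plus data-transfer cost in dollars."""
    storage = customers * gb_stored_per_customer * STORAGE_RATE_PER_GB_MONTH
    egress = customers * gb_egress_per_customer * EGRESS_RATE_PER_GB
    return storage + egress

# Project a year of 20% month-over-month customer growth.
customers = 1000
for month in range(1, 13):
    cost = monthly_cost(customers, gb_stored_per_customer=5,
                        gb_egress_per_customer=2)
    print(f"month {month:2d}: {customers:7d} customers -> ${cost:,.2f}")
    customers = int(customers * 1.2)
```

Even with modest per-customer numbers, compounding growth makes the late-year months dominate the bill, which is exactly the escalation worth planning for.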

Vendor Lock-in and Cloud Providers

I have heard the phrase (fear of) “vendor lock-in” used several times in the last few weeks in association with cloud providers. I’ve been wondering whether there really is such a thing. By that I mean that taking advantage of a cloud provider’s facilities and capabilities may save you time and reduce cost. Moving to another provider just means mapping your interfaces and functions to the new provider’s features, or else implementing what is missing.

This means that you are not locked in in the sense of never being able to change providers. To me it means that if you should ever have to move, you might have to spend some of the money that you saved in the first place to implement or re-architect what does not exist in the new environment.

If you do not take advantage of any provider facilities, you have increased your costs and lengthened your schedule for the sake of a potential future move that might never be necessary or happen.
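A middle ground between the two extremes is to confine each provider's API to a thin adapter behind an interface of your own. This is a minimal sketch of that "mapping your interfaces" idea; the names here are hypothetical, and a real cloud-backed adapter (e.g. one wrapping S3) is only suggested in a comment.

```python
# Application code depends only on a small interface you define; each
# provider's SDK is confined to one adapter class. Moving providers then
# means writing one new adapter, not rewriting the application.
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """The only storage interface application code is allowed to use."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(BlobStore):
    """Stand-in adapter (useful for tests); a real one might wrap S3."""
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

def save_report(store: BlobStore, name: str, body: bytes) -> None:
    # Application code sees only BlobStore, never a provider SDK.
    store.put(f"reports/{name}", body)
```

The adapter itself is still provider-specific work if you move, but it bounds the cost of the move to one class rather than to everything that touches storage.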

Amazon Echo and Commercial Integrations

I have had an Amazon Echo for a few months and am satisfied with what it can do. The Echo is a device that you interact with using voice commands rather than a keyboard or mobile application. It comes with a number of basic capabilities, including weather, news, and playing music. One of the nice aspects of the Echo is that additional applications (called “Skills”) can be written for it and enabled by users. A recent addition that is an excellent example of an innovative commercial integration is the Capital One skill. It provides Capital One account holders with the ability to get account information and even make credit card payments. I am impressed that Capital One has taken this step to provide more convenience to its customers.

I am confident that voice interactions have a large number of practical uses that we will see added to the Echo’s capabilities. I also expect that we will begin to see numerous other appliances and devices start providing a voice interface.

AWS IoT and the MQTT.fx Client Updated

In the previous post I passed along a few tidbits about using the MQTT.fx client with the AWS IoT service.

Since then both the AWS IoT service and the MQTT client have been updated. I decided to do an updated post with any new info. The AWS IoT configuration pages have noticeably changed.

The current MQTT.fx client can be found on the MQTT.fx site. There are a few visible differences in the client configuration.

The IoT resources main panel now looks like this.

The certificate resource page now looks like this, and the downloads will save as files with names that help identify what each one is used for.

The cert names will appear something like these


In the previous post I had to replace newline characters in the cert files but this time I did not and was able to use them as is.

The MQTT client connection settings page will look something like this.

The root-cert can be found in the AWS IoT documentation.

The pub/sub functions in the MQTT client worked without any trouble. If you run into connection issues, the Log tab in the client can provide useful information; the cause should appear there.

Jekyll Experiences Update

I’ve used a few different blogging platforms over the years, with Jekyll being my most recent, starting a few months ago. I chose Jekyll because I wanted a static site blog rather than a dedicated instance running a blogging platform. I also wanted to have it hosted using GitHub Pages.

Jekyll is a Markdown-based blogging system that provides the basics. I had no real problems with it until I tried a recent Jekyll 3.x update and was no longer able to generate the site.

I ran across errors like this

$ bundle exec rake generate

Generating Site with Jekyll
identical source/stylesheets/screen.css
Configuration from
Building site: source -> public
Liquid Exception: Unknown tag 'include_array' in post
/Library/Ruby/Gems/2.0.0/gems/liquid-2.3.0/lib/liquid/block.rb:62:in `unknown_tag'
/Library/Ruby/Gems/2.0.0/gems/liquid-2.3.0/lib/liquid/tags/if.rb:31:in `unknown_tag'
/Library/Ruby/Gems/2.0.0/gems/liquid-2.3.0/lib/liquid/block.rb:32:in `parse'

After spending a few hours over a few evenings trying to figure out what I had missed in either the update or the configuration changes, I finally gave up on Jekyll 3 and reverted with this:

$ sudo gem install jekyll -v 2.5.3

After that, and after reverting the changes I had made to _config.yml and Rakefile, I appear to be back to normal and am able to post again (if you are reading this, then it was successful).

The real intent of this post is to pass along my thoughts on using Jekyll, static sites, and blogging in general. I like static sites because they provide the lowest-cost platform while still giving you some element of creativity. The drawback, of course, is that if you want something simple that just lets you post, you may be in for trouble, particularly when it comes to updates to the system. It is also important to ensure that whatever system you choose preserves your posts in a manner that lets you rehost them or migrate to another system. I think that both GitHub Pages and Markdown-based systems provide those abilities, but you will probably have to experiment and learn a little more than you might have intended.

More to come on this subject.

More on Augmented and Virtual Reality

A day does not go by now without my seeing another news story about augmented or virtual reality. These areas are ripe for innovation because technology now provides both the display capability and the compute power to generate aspects of reality that match what the user is doing. That may mean completely rendering the scene the user is viewing, for virtual reality, or adding layers of information, for augmented reality. I believe that 2016 will be the year that consumer-level technology significantly raises the demand for content of one sort or another.

Google Cardboard has already let consumers experience virtual reality, and a number of applications are taking advantage of it. I already have more than 8 applications on my iPhone 6 that involve either AR or VR for Cardboard. Each of them provides a view into a different approach to creating content, with some producing better results than others.

One issue that I have had with both of the Cardboards I own, from different providers, is a propensity for double vision while viewing at least some content. I am guessing that this is due to my interpupillary distance (IPD), which is the distance between your pupils. One Cardboard maker, Unofficial Cardboard, has a unit with lenses that can be moved to better conform to your IPD. I ordered their 2.0 unit and am expecting it soon, so I should be able to compare the experience with my previous units. The specific one that I ordered was

I am hoping that this solves or at least reduces the double vision issue and should be able to report on that soon.

More to come.

AWS Certificate Manager Tidbits

Amazon AWS recently enabled an SSL certificate manager service, AWS Certificate Manager. This service makes the process of creating and using SSL certificates much easier than the more manual methods previously required.

Several sites and blog posts have been published explaining how to use this service, and I have found them useful.

They include the following

I wanted to pass along any useful pieces of information from my own experience with the service. I am still new to it, but one thing to be aware of before using it is that, when you request a certificate, AWS will send emails to addresses associated with your domain to ensure that you actually own it. Those addresses include the following, where @yourdomain is replaced with your domain.

  • hostmaster@yourdomain
  • administrator@yourdomain
  • webmaster@yourdomain
  • postmaster@yourdomain
  • admin@yourdomain

It is useful to ensure that you have set up those addresses with your domain registrar before starting the certificate request so that the process moves smoothly. It is also a good idea to test and confirm that the addresses work correctly if they did not already exist.
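A tiny helper reflecting the list above can generate the full set of validation addresses for a domain, so you can walk through them at your registrar before making the request. The function name here is my own, and the alias list is just the one shown above.

```python
# Generate the validation addresses from the list above for a given
# domain, so each can be checked at the registrar before requesting
# the certificate.
VALIDATION_ALIASES = ["hostmaster", "administrator", "webmaster",
                      "postmaster", "admin"]

def acm_validation_addresses(domain: str) -> list:
    """Return the validation mailbox addresses for one domain."""
    return [f"{alias}@{domain}" for alias in VALIDATION_ALIASES]

for address in acm_validation_addresses("example.com"):
    print(address)
```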

Driverless Cars and Google vs Big Auto

A friend recently recommended the book “Driverless Car Revolution: Buy Mobility, Not Metal”. I’m only partway into it, but the author points out that Google and “Big Auto” have completely different approaches to developing driverless cars. It is very analogous to “The Innovator’s Dilemma” (another excellent read), where someone with an existing product base and history is constrained from innovating while an entirely new enterprise is not.

If you are looking for entertaining reading this is a great one for the Kindle this weekend.

The Risk of Cyberattack and Motivations

While listening to a recent NPR podcast featuring an interview with Ted Koppel, I heard a number of statements that got me thinking. The occasion for the interview was that Koppel has written a book called “Lights Out: A Cyberattack, A Nation Unprepared, Surviving the Aftermath” on the risks associated with a cyberattack on the power grid. A number of statements are being made on that subject that are interesting to evaluate.

One is “knock out the power grid”, made by Janet Napolitano. One of my first questions about that is: for how long? By that I mean, would it be knocked out permanently, for 1 year, 1 month, 1 day, 1 hour, or less?

Statements like these are being used to assess the risk and thereby justify significant federal expense in mitigating it. Koppel made several points, including that “a group of 10 former senior government officials” had written a letter “regarding a cybersecurity bill” saying an attack could “knock out power over an extended geographic area involving 10s of millions of people over a period up to 2 years”.

That made me wonder whether those former officials now work for defense contractors that would stand to benefit from federal contracts associated with cybersecurity.

These statements take me back to Y2K, which was predicted to be a catastrophe and ended up being somewhat less so. However, that risk resulted in significant expense in examining software and firmware that might be a concern.

At times I wonder: if we had done nothing, or significantly less than was done, regarding Y2K, would anything have happened?

I frequently hear that we are experiencing many cyberattacks per day, but generally the gravity of those so-called attacks is not included. What is considered an attack? It may range from a failed login attempt all the way to a coordinated, well-planned, and highly technical series of actions by a foreign entity.

The scale between those two ends is very wide, and any statement about attacks should include where on that scale the numbers fall. A single number on its own conveys no useful information.

Being Prepared for Commercial Usage of Drones

I stumbled on an older article about a drone incident (28 August 2015) in which a few statements caused me to think more about how drones will have to be used in commercial applications to ensure the safety of the public and property.

The article can be found at

The statements that caught my eye were that the crew “commenced alternate recovery procedures” and “concluded that radio frequency interference was the most likely cause of the accident”.

The reason this caught my eye is the question of whether anyone who decides to use a drone in a commercial operation is prepared for events such as these. By prepared I mean having plans, procedures, and operator training in place.

  • If you are using a drone to film an event or area and lose control of it what are you going to do?
  • If you are planning to use a drone do you take into consideration what frequency to use for control or does your equipment even accommodate using specific channels?
  • Are other drone operators likely to be in the area and might there be conflicts with the airspace and vehicle control?
  • Will multiple vehicles be trying to get the same view and raise the risk of collision?
  • In preparation for the event will each drone operator coordinate with each other when and where their vehicles will be?

It seems like there is a great opportunity for mission and flight planning systems that allow this sort of coordination at a much smaller level than was ever previously necessary. This may even fall to the single-individual level, where a commercial filming application is an enterprise of one person. Note that such an individual likely will not have the “Flight Operations Department” that might be found in larger companies, so the planning system will have to require only a small investment in money and time.
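The core of such a coordination system can be illustrated with a toy deconfliction check: given each operator's planned waypoints, flag the pairs that come closer than a minimum separation at the same time. Everything here is simplified by assumption; a real planning system would use geodetic coordinates, altitude, timing tolerances, and flight corridors rather than flat x/y positions at exact timestamps.

```python
# Toy deconfliction sketch: waypoints are (t, x, y) with x, y in meters,
# and two operators conflict if they are within MIN_SEPARATION_M of each
# other at the same timestamp.
from math import hypot

MIN_SEPARATION_M = 30.0

def conflicts(plans):
    """plans: {operator: [(t, x, y), ...]}; returns (t, op_a, op_b) tuples."""
    found = []
    ops = sorted(plans)
    for i, a in enumerate(ops):
        for b in ops[i + 1:]:
            pos_b = {t: (x, y) for t, x, y in plans[b]}
            for t, x, y in plans[a]:
                if t in pos_b:
                    bx, by = pos_b[t]
                    if hypot(x - bx, y - by) < MIN_SEPARATION_M:
                        found.append((t, a, b))
    return found

plans = {
    "op1": [(0, 0, 0), (10, 50, 0)],
    "op2": [(0, 100, 0), (10, 60, 0)],  # only 10 m from op1 at t=10
}
print(conflicts(plans))  # -> [(10, 'op1', 'op2')]
```

Even this crude check requires the operators to share their plans in a common format ahead of time, which is exactly the coordination step that an enterprise of one person is least likely to do without tooling that makes it nearly effortless.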

The technology for drones has advanced so rapidly that it is tempting to simply start using them without the planning and preparation necessary to reduce or even eliminate the risk of an accident.

I do not envy the FAA in being required to plan and implement the above without introducing impediments to the adoption of drones in U.S. airspace.