Doug Toppin's Blog

My thoughts on technology and other stuff

Thoughts on Data Access and Monitoring

Reading about the recent Verizon subscriber data breach reinforced for me the importance of monitoring access to, and actions on, data in cloud accounts. In the Verizon case a contractor was apparently able to access and download a large amount of subscriber data, which was then put into a public AWS S3 bucket. This demonstrates the importance of several architectural aspects of your system, including the following:

  • encrypt data at rest and restrict access to the keys
  • grant access to critical data sets only by IAM role
  • use API activity monitoring to flag unusual access even by permitted accounts, for example “account x just ran a very large query or a large number of queries”; this can be done with AWS services (CloudWatch for example) or with external tools such as a log aggregation system that can detect and alert
  • configure temporary roles or credentials for access to keys or data
  • scan your account resources for things like public S3 buckets on a regular basis (a sketch of one way to do this appears below)
  • have a plan and communication or reporting channels in place so you can react quickly; in the Verizon case it appears that a number of days passed after detection before any response was made

Each of these actions takes planning and effort in advance, but the cost of not doing them, or of missing one, can be much greater down the line in labor, reputation, or goodwill.
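To make that last scanning point concrete, here is a minimal sketch of one way to look for public buckets. It assumes the AWS CLI and jq are installed with credentials already configured, and it only examines bucket ACLs (a bucket policy can also make a bucket public), so treat it as a starting point rather than a complete check.

#!/bin/bash
# Sketch: flag any S3 bucket whose ACL grants access to the AllUsers group (public).
# Assumes the AWS CLI and jq are installed and credentials are configured.
for bucket in $(aws s3api list-buckets --query 'Buckets[].Name' --output text); do
  grants=$(aws s3api get-bucket-acl --bucket "$bucket" \
    | jq '[.Grants[] | select(.Grantee.URI? == "http://acs.amazonaws.com/groups/global/AllUsers")] | length')
  if [ "$grants" -gt 0 ]; then
    echo "WARNING: $bucket has a public ACL grant"
  fi
done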

AWS SSM and the Parameter Store

An issue that I regularly encounter is how to store things like passwords and tokens in a manner that allows access to them across EC2 instances and desktops. One way is to use the AWS SSM Parameter Store (admin console->ec2->Parameter Store). All you need to do is use the ‘create parameter’ function with a name and value. Retrieving it returns a JSON object that includes the value.

Accessing it via the cli is then done like this:

$ aws ssm get-parameters --names "test-parameter-1"
{
    "Parameters": [
        {
            "Name": "test-parameter-1",
            "Type": "String",
            "Value": "test value 1"
        }
    ],
    "InvalidParameters": []
}

An example of parsing a specific value using jq follows

$ aws ssm get-parameters --names "test-parameter-1" |jq .Parameters[].Value
"test value 1"

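If you want just the bare value for use in a script (without the surrounding quotes), jq’s -r (raw output) flag does that:

$ aws ssm get-parameters --names "test-parameter-1" | jq -r .Parameters[].Value
test value 1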
Finding what parameters are available can be done like this:

$ aws ssm describe-parameters
{
    "Parameters": [
        {
            "LastModifiedDate": 1499869122.618,
            "Name": "test-parameter-1",
            "Description": "experimenting with parameter store stuff",
            "Type": "String",
            "LastModifiedUser": "arn:aws:iam::xxx:user/xxx"
        }
    ]
}

Creating a parameter via the aws cli would look like this:

$ aws ssm put-parameter --name test-parameter-2 --type String  --value test-value-2
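For actual secrets this ties back to the earlier point about encrypting data at rest. The parameter store also supports a SecureString type, which encrypts the value with KMS (the account's default SSM key unless you specify one with --key-id), and adding --with-decryption on retrieval returns the plaintext. The parameter name and value below are just examples:

$ aws ssm put-parameter --name test-secret-1 --type SecureString --value "s3cr3t"
$ aws ssm get-parameters --names "test-secret-1" --with-decryption | jq -r .Parameters[].Value
s3cr3t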

DJI Mavic Pro Drone Goggles

A couple of months ago I bought a DJI Mavic Pro drone. In a single word, it is amazing. The technology in it is excellent.

Today I received the DJI Goggles which is a headset with a display that gives you the camera view from the drone.

I just got around to doing the actual setup on the goggles. The total weight is 2.4 lbs with the weight being in the headband where the electronics and battery are. The headband is adjustable in the back to tighten/loosen it. Once it is on and the display portion is lowered over your eyes you do not feel the weight at all. That might change once I’ve been wearing them for a while. There is a touchpad on the side of the display with numerous functions. I went through the tutorial and retained pretty much nothing. Once the setup and linking with the drone were complete you immediately get the drone camera view in the display. At that point both the phone app camera view and the goggles camera view were active. Both the controller/phone and the headset are linked to the drone. There are a boatload of functions associated with the headset and it will take a lot of practice to get any familiarity with it.

DJI recommends having a second person use the goggles while someone else pilots the drone with the controller for safety (and I can see why). This is not a toy but it looks like it is going to be a ton of fun.

It is hard to believe how much technology is sitting in your hands between the drone, the controller/phone and the goggles.

AWS Config and Lessons Learned

I learned a lesson today that taught me the value of using the AWS Config service to check for expected resource usage. I inadvertently launched an EC2 c4.large instance yesterday while experimenting with the Batch service (which is surprisingly not explicit about an impending launch). I noticed the unusual instance type running and thought it had been started by someone else in my group. They had also noticed it and thought I had launched it. When I asked about it we realized that neither of us had intentionally launched it.

That caused me to add a rule to the Config service that marks any instance type outside the usual 3 types that we launch as noncompliant in the Config report. I have not yet come up with a way to prevent the launch in the first place but that is on my list (IAM policy coming).

Another interesting part of this is that since it was a managed service the user was listed as the root account even though it was due to me. This was because Batch created an autoscaling group of 0-1 with a desired state of 1 by default, so be aware that ASGs might launch instances as well (particularly outside the normal workday).

Note that some instance types can run $13/hour. Also pay attention to what your EC2 account limits are; if they are in the 10s or 100s you could run up a pretty good bill unexpectedly. You cannot set limits by individual instance type (at least not directly). Now that I am much more familiar with Config I expect to be using it quite a bit more.
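As a sketch of the IAM policy that is on my list, a deny statement using the ec2:InstanceType condition key can block ec2:RunInstances for anything outside an approved list. The instance types below are just placeholders for whatever your usual types are, and note that launches made by service roles (the Batch/ASG case above) are only affected if the policy is attached to those roles as well:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnapprovedInstanceTypes",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringNotEquals": {
                    "ec2:InstanceType": ["t2.micro", "t2.small", "t2.medium"]
                }
            }
        }
    ]
}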

GitPitch for Presentations

I occasionally do presentations on technical topics and have never been able to settle on what format/tool to do them in (txt, pdf, ppt, md). I’ve tried a wide range of tools and recently have been trying to stick with using markdown for the content. Of course this means using something else to render for viewing.

I recently ran across GitPitch and decided to give it a try for my next presentation. GitPitch is an online tool that takes as arguments your GitHub (or other) repo and a markdown file called PITCHME.md in its top-level directory.

Assuming it finds that file it will use revealJS to render your markdown as a presentation. Page navigation is done using the left/right and up/down arrow keys. Left/right moves between major topic slides and up/down moves through the slides associated with each major topic, as opposed to a simple sequential progression of slides.

There are two methods for viewing your presentation while working on it.

One is by using a local markdown editor, pushing the revised file to your repo and then viewing it via the GitPitch url.

The other is to download a zip file via the GitPitch url that contains the initial version of your presentation along with the rendering-related js and css files. You then unzip the file, run python in SimpleHTTPServer mode in the directory containing the index.html from the archive, and use a browser to preview the content. After you update your markdown just reload the page in the browser. This saves you from having to do multiple pushes to your repo, particularly while getting the hang of the GitPitch syntax and structure.
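As a concrete sketch of that local preview (the zip and directory names are just examples), it amounts to something like:

$ unzip PITCHME.zip -d pitchme-offline
$ cd pitchme-offline
$ python -m SimpleHTTPServer 8000

Then point a browser at http://localhost:8000/ to view the rendered index.html. (With Python 3 the equivalent is python3 -m http.server 8000.)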

Once you are satisfied with your presentation you just send the updated PITCHME.md file to your repo and use the GitPitch url to present.

Something else to be aware of is that --- and +++ are used in your markdown for delineating pages. Using --- indicates a major slide change (left/right) and +++ indicates a sub-slide change (up/down) in your content.
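A minimal PITCHME.md laid out that way might look like the following sketch, with --- starting each new major topic and +++ adding a sub-slide beneath the current one:

# Topic One

+++

A sub-slide with detail on Topic One

---

# Topic Two

+++

A sub-slide with detail on Topic Two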

So far I have not been able to find a way to use a remote slide control device that supports the up/down actions in addition to the left/right actions while navigating the slides. I typically use the Logitech Wireless Presenter R400 and have not been able to remap its extra couple of buttons to act as up/down keys. Most remote slide control devices send a simple left/right action for slide changes, which will continue to work but will just pass through the presentation in the typical sequential manner. I am pretty sure that I will eventually find a device or phone app that lets me do all of the navigation actions, but no luck so far.

It is a little different to plan and prepare your presentation in a non-linear style when you are used to sequential slides. It is a good idea to plan your major topics and then each subtopic and then fill them in.

In general GitPitch is interesting and I like using revealJS-style presentations, but they do require changes in your approach that you need to be prepared for.

It does provide greater opportunities for creativity in presenting and can make it easier for a viewer to get to the sections they are interested in more quickly than paging all the way through.

Docker for Tools

Something that I’ve become a strong believer in is using Docker containers to run tools rather than installing them (and all of their dependencies) on your machine. This is particularly true if you need to have access to multiple versions of a tool with Java being a good example.

What I mean by this is that if you ran everything you typically need as a container (from a Docker image) you would not need to install anything except Docker itself.

Over any period of time I need to use various versions of Java and Python, among others. I used to keep each version locally installed and then go through a process of setting environment variables (such as PATH) to use what I needed at that moment. Instead, if I use the tools in Docker containers I can run them at will and never need to upgrade/downgrade or generally mess around with my machine.
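For example, using the official openjdk images from Docker Hub (the tags and file name here are just illustrative), two Java versions can sit side by side with nothing installed locally beyond Docker:

$ docker run --rm openjdk:7 java -version
$ docker run --rm openjdk:8 java -version
$ docker run --rm -v $(pwd):/work -w /work openjdk:8 javac Hello.java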

Along with this, if you contribute any open source software (such as in GitHub repos) you should also include a Dockerfile that will run it. Then also consider creating a Docker Hub repo that is set to automatically build Docker images for it. It is particularly important to include descriptive and usage information about your tool; the Docker Hub automated build will carry over the GitHub README.md info if you do. This makes it much easier for people to discover, develop trust in, and use your tools. The ideal is providing a Docker image of your tool so that someone can simply run it, without having to clone, build (with whatever development environment is required), install and then run it themselves.

Since most tools require access to local files, include in your usage instructions an example of using the Docker volume argument to let your tool operate on local files. That might be as simple as something similar to the following:

$ docker run -v $(pwd):/tmp yourtool:1.0

Note that you should also include version tags for your tools so that if they change the user can still run what has always worked for them.
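Building and pushing with an explicit version tag is all that takes (the names here are placeholders):

$ docker build -t yourname/yourtool:1.0 .
$ docker push yourname/yourtool:1.0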

If you want to see an example of this approach take a look at the following. I do not claim to be a pro but I can carry a tune. https://hub.docker.com/r/dougtoppin/lenticular/

You may find that there are some things that you cannot easily do using Docker. A couple of examples that come to mind are tools that need access to external devices (such as via USB) or that use a GUI. There may not be an existing way to do those with Docker but most everything else is likely to work well. However, companies and individuals are coming up with ways around those issues, with Nvidia being a good example. They provide a Docker-compatible driver that interfaces with their GPUs, which allows containers to make use of the very high performance found in that sort of hardware.

The main point is to think about using, and providing support for, Docker. It will only make things easier for you and the people that use your stuff.

Apple AirPods Update

I’ve previously passed along my opinions on my new(ish) Apple AirPods and have an update from a recent experience with them.

I used them in a somewhat noisier environment for the first time on the subway. Since they are not noise canceling and do not cover the ear, outside sound does degrade the experience somewhat. I found myself increasing the volume a couple of times to hear podcasts better. Still no real complaints with them but if you are regularly in a noisy environment you may notice it.

Apple iCloud Photos Experience

A couple of things on my experience moving my entire photo library (maybe 60,000 pictures) into iCloud (http://www.apple.com/icloud/photos/). Previously I had a large photo library and had started keeping the oldest years on an external SSD (LaCie). The issue with this was that to get to old photos I had to switch iPhoto (now Photos) to that library, which made finding old pictures a pain, and Faces was per library. It also kept me from easily and quickly sharing old pictures. I was also concerned about loss, so I had set up a sync system with AWS Glacier.

I decided to make the transition a couple of months ago and move to a single, easily accessible storage and access system, so I picked Apple iCloud Photos. Doing that also lets you have a single location for all of your albums instead of multiple sets of albums in different libraries. Transitioning took several days of uploading (from 2 different MacBook Pros because of the external library) but once that was done it has worked well so far. I also selected the reduced storage option for my iPhone 6, so it keeps all of the pictures in a lower resolution on the phone and the full sized images in the cloud. This has worked well so far, but one thing I do notice is that apps that can select photos take a bit longer to initially load the thumbnails because of the number of thumbnails to deal with.

The reason that I had to use 2 MBPs was that iCloud Photos is linked to the device, so having one MBP complete an upload and then switch to another library runs the risk of losing all of the previously uploaded pictures. This is because (I think) switching libraries causes iCloud Photos to start a 30-day timer after which it will expire the pictures not in the library that your MBP is using. If you use 2 different devices to upload, iCloud will merge all of the pictures into a single library which can then be accessed from all of your Apple devices.

I also had to subscribe to a larger iCloud storage plan to hold all of the images. I feel like the convenience of access and automatic backup offsets the additional yearly cost.

Something else to note is that Google Photos has been in the process of uploading the photos from the phone for quite a while. It automatically creates memories, which is pretty interesting: it uploads the most recently taken pictures from the phone and then keeps plugging away at the oldest pictures, which it had not seen before because they were on the external drive. Because of this I regularly get “memories” of events that I have not seen in quite a while, which I like.

So far I am satisfied with my experience with my only real complaint being the additional time that apps take to load thumbnails.

OpenShift and Build Error New (InvalidOutputReference)

I recently troubleshot a build problem in an OpenShift v3 project and wanted to pass along what I found. I did a number of searches on the error (InvalidOutputReference) and did not find the lack of an existing imagestream mentioned as a possible cause.

If your build status output includes the following

"New (InvalidOutputReference)"

It may mean that you need to create an image stream before starting the build, which can be done like this:

$ oc create is myapplication

It is possible that the build has failed but OpenShift will not realize it and will continue saying that the build is in progress.
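Since the reported status can be misleading in this situation, it can also help to look at the builds and their logs directly; the build config name below is just the example used in this post:

$ oc get builds
$ oc logs -f bc/myapplication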

If your build configuration has a section like this:

spec:
  nodeSelector: null
  output:
    to:
      kind: ImageStreamTag
      name: myapplication:latest

Then you should be able to do the following and get back an imagestream.

$ oc get is myapplication

If you can’t, that may mean that you need to first create the image stream. Note that if your build is pulling from an upstream image it is possible that an imagestream is being created automatically.

OpenShift is a powerful container orchestration (and more) environment. You can find out more about the community version at https://www.openshift.org/ and the Red Hat supported version at https://www.openshift.com/

Apple AirPods Review

This is my review of the new Apple AirPods. They are small wireless (bluetooth) earbuds that each fit in an ear and are not connected to each other by wire. Each has a battery which is supposed to last for 5 hours. Pairing them with my iPhone was automatic and immediate once I opened their holder; I only had to acknowledge the pairing on my phone. They come already charged and with a small holder that you slide them into for charging when not in use. When the holder is opened and the phone connects, a popup window appears showing the charge level of the holder and pods. I always put them back in their holder (and thus charging) when not in use. I have not gone the full distance yet but have had them in for 2 or 3 hours at a time without any problem.

The bluetooth range is very good (from my iPhone 6) and I was able to leave my phone charging in the kitchen to go upstairs with no loss of connectivity.

They fit snugly in the ear and do not feel like they will easily fall out, which is the primary concern that I had and have read about.

When just wearing them around the house and on my bike trainer I have not felt like they will fall out.

One did come out when I pulled a t-shirt off over my head. The other immediately stopped playing, which should make it easier to recognize that one has fallen out and not leave it behind. When wearing a heavier jacket it may be possible for the collar to touch a pod and cause it to fall out, but that has not happened to me. Something else that I have not tried yet is being out on the bike with a helmet. I am wondering if the helmet straps might make contact with a pod and cause it to fall out. I will eventually try that and update this when I know.

I have tried them on several short runs and had no trouble with them falling out. Thinking back to running with earbuds connected by a wire, they would typically be pulled out when I snagged the wire, so that avenue should be closed now.

They have the same form factor as Apple earbuds and have no active noise cancellation. This means that they do not remove outside sounds but do reduce them to some degree. When walking outside with a moderate wind I am able to hear the audio from them.

One important thing to note is that they do attract attention and may make you feel conspicuous. I have had a suspicion a couple of times that someone has thought that I was wearing unusual earrings which feels a little odd.

So far after a few days of use (office, commute, grocery store and home) I have no complaints. More to come as I try them for longer periods of time.