Adjusting ThinkPad TrackPoint sensitivity on Ubuntu 16.04 (and others?)

I bought a Lenovo ThinkPad X1 Yoga a while back and have been using it as my "Everything" machine: dev workstation, personal stuff, entertainment device. I'm super happy with both the hardware and Ubuntu 16.04's driver support; however, I have run into one annoying issue... Well, I'm not sure "annoying" is the right word for it, as it resulted in an actual injury:

The mouse system config applet which ships with Ubuntu 16.04 is unable to adjust the machine's TrackPoint speed and sensitivity to values which I would consider usable. I found myself pushing on the TrackPoint so hard that I eventually started to have a fair deal of pain and swelling in the first joint of my index finger!

Normally one would adjust the TrackPoint responsiveness via the "Mouse and Touchpad" applet:

Screenshot of 'Mouse and Touchpad applet selection'

Screenshot of 'Mouse and Touchpad applet'

Notice that the applet has only a "Pointer speed" adjustment and not individual sliders for speed and sensitivity? I consider that a secondary problem. The primary problem is that even when that slider is pushed all the way to the right the "speed" is not sufficient. As it turns out, adjusting the slider translates into numeric values which live in files in Linux's /sys filesystem. I think these paths should be consistent across X1 Yoga systems, but in my case those files were:

  • /sys/devices/platform/i8042/serio1/serio2/speed
  • /sys/devices/platform/i8042/serio1/serio2/sensitivity

I didn't record the actual values but, from memory, the mouse config applet was injecting values somewhere in the range of 90 to 120 into these files, whose valid range is 0 to 255. You can "echo" new numeric values into these files to tune your mouse speed and sensitivity to your liking like so:

Screenshot of adjusting mouse sensitivity from command line
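In case the screenshot is hard to make out, here is a minimal sketch of the idea; the paths are the ones from my X1 Yoga above, and the values (255 and 175) are simply the ones that ended up feeling right to me:

# Run as root; the serio paths may differ on other ThinkPad models.
$ sudo sh -c 'echo 255 > /sys/devices/platform/i8042/serio1/serio2/speed'
$ sudo sh -c 'echo 175 > /sys/devices/platform/i8042/serio1/serio2/sensitivity'

# Confirm the new values took effect.
$ cat /sys/devices/platform/i8042/serio1/serio2/speed
255
$ cat /sys/devices/platform/i8042/serio1/serio2/sensitivity
175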

However be aware of two things:

  1. This will often cause a momentary freakout of your mouse behavior. It takes a second or two for things to settle down.
  2. Changes made in the /sys filesystem do not persist across reboots, so you're going to need to put these changes somewhere where they will persist.

According to this post over at the Ubuntu forums, one way to make these changes persist is by adding a config file in /etc/tmpfiles.d. Apparently systemd will re-apply settings placed here at every reboot. Here's the contents of my /etc/tmpfiles.d/trackpoint.conf:

w /sys/devices/platform/i8042/serio1/serio2/speed - - - - 255
w /sys/devices/platform/i8042/serio1/serio2/sensitivity - - - - 175

The Ubuntu forum post mentioned above suggests a reboot to have these systemd configs applied, but I found the following command works and thus avoids the reboot:

$ sudo systemd-tmpfiles --prefix=/sys --create

If you are like me, your hand will thank you for making these adjustments.

Rancher Catalog Templates from Scratch : Part 1 - Catalogs and Templates

An edited version of this post also appeared on the Rancher Labs blog.

Rancher ships with a number of re-usable, pre-built application stack Templates. Learning to extend these Templates, or to create and share completely new Templates, are two great ways to participate in the Rancher user community and to help your organization effectively leverage container-based technologies. Although the Rancher documentation is fairly exhaustive, to this point the documentation on how to get started as a new Catalog Template author has consisted of only a single high-level blog post: Building Rancher Catalog Templates.

This article is the first in a series aimed at helping the new Catalog Template author ramp-up quickly and with the best available tooling and techniques.

In this first article we are going to construct a very simple, and not terribly useful ;), Cattle Catalog Template. In upcoming articles we will flesh out this Template with more detail until we have a working multi-container NGINX-based static web site utilizing all of the basics of Rancher Compose, Docker Compose, and Rancher Cattle.

Overview & Terminology

Before we dive into the process of creating a new Rancher Catalog let's first get some common terminology out of the way. If you are an experienced Rancher user you may be able to just scan through this section. If you are new to the world of Linux containers, cluster managers, and container orchestration now is a good time to do some Google-ing.

For our purposes, Rancher is Open Source software enabling deployment and lifecycle management of container-based applications stacks using most of the commonly available Open Source container orchestration frameworks. As of the time of this writing Rancher has excellent support for Docker containers and the following orchestration frameworks:

  • Kubernetes ( 1, 2 )
  • Mesos ( 1, 2 )
  • Docker Swarm ( 1, 2 )
  • Rancher's own Docker Compose-based Cattle ( 1 )

If your favorite framework isn't listed rest assured that support is probably on the way.

Within the context of each of the previously mentioned orchestration frameworks, Rancher includes a Catalog of prebuilt and re-usable application Templates. These Templates may be composed of a single container image but many times they stitch together multiple images. Templates can be fed environment-specific configuration parameters and instantiated into running application Stacks via the Rancher admin console. The screenshot below shows several application Stacks as viewed via the Rancher admin console. Note that the Wordpress and Prometheus Stacks are expanded to show the multiple containers which compose each Stack.

Screenshot of the Rancher admin console with running application Stacks

Each orchestration framework ships with a set of pre-built Catalog Templates. In this article we are going to focus on just Rancher's own Cattle orchestrator. See the image below for examples of some of the many pre-built Catalog Templates which ship for Cattle.

Screenshot of the available application Templates in the Rancher Cattle Catalog

Creating your first Rancher Cattle Catalog Template

Many times these Templates can be used as shipped but sometimes you'll need to modify a Template (and please then submit your pull request to the upstream!) or perhaps even create a new Template from scratch when your desired application stack does not already have a Template.

Doing it manually

For the purposes of this exercise I'm going to assume you have:

  1. a container host running the rancher/server container.

  2. at least one compute node running rancher/agent (for the purposes of the demo #1 and #2 can be the same host)

  3. a configured Rancher Cattle Environment (available by default with a running rancher/server instance)

If that is not the case, please check out one of the Rancher Overview videos on the Rancher Labs YouTube channel.

Adding a custom Cattle Catalog Template

By default, the Catalog Templates listed in the Rancher admin console are sourced from the Rancher Community Catalog repository. We'll create our own git repo as a source for our new 'demo app' Cattle Catalog Template. First for some setup of our working directory on our local workstation:

Screenshot of local workstation setup for 'demo' Template

Although there's no deep magic, let's step through the above:

  1. Create a project working directory named 'rancher-cattle-demo' under ~/workspace. These names and paths are fairly arbitrary though you may find it useful to name the working directory and git repo according to the following convention: rancher-<orchestration framework>-<app name>.

  2. Create the git repo locally with 'git init' and on GitHub via the 'hub' utility.

  3. Populate the repo with a minimal set of files necessary for a Rancher Cattle Catalog Template. We'll cover this in detail in a second.
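The screenshot above captures the commands; the following shell sketch is a rough equivalent. The templates/demo directory layout is an assumption based on the convention used by the Rancher community catalog, and we will confirm the details as the series progresses:

$ mkdir -p ~/workspace/rancher-cattle-demo && cd ~/workspace/rancher-cattle-demo
$ git init
$ hub create    # creates the matching repo on GitHub

# Minimal scaffolding (layout assumed from the community catalog convention:
# one config.yml per Template plus versioned subdirectories for the compose files).
$ mkdir -p templates/demo/0
$ touch templates/demo/config.yml
$ touch templates/demo/0/docker-compose.yml templates/demo/0/rancher-compose.yml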

Now let's do our initial commit and 'git push' of the demo Template:

Screenshot of initial git commit & push of our demo Template
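If the screenshot is hard to read, the push itself is just standard git, along these lines (the remote and branch names are whatever 'hub create' set up for you):

$ git add -A
$ git commit -m 'Initial scaffolding for the demo Cattle Catalog Template'
$ git push -u origin master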

For sanity's sake, you might want to check that your push to GitHub was successful. Here's my account after the above push:

Screenshot of initial push to GitHub

It's worth noting that in the above screenshot I'm using the Octotree Chrome extension to get a full filesystem view of the repo.

Now let's configure Rancher to pull in our new Catalog Template. This is done via the Rancher admin console under Admin/Settings:

Screenshot of Admin/Settings in Rancher admin console

Click the "+" next to "Add Catalog" near the middle of the page. Text boxes will appear where you can enter a name and URI for the new Catalog repo. In this case I've chosen the name 'demo app' for our new Catalog repo. Note the other custom Catalog entries left over from previous work.

Screenshot of adding a new Catalog repo

Now we can go to Catalog/demo app in the Rancher admin console and see the listing of container Templates. In this case, just our 'demo app' Template. But wait, there's something wrong...

Screenshot of 'demo app' Catalog with incomplete Template

We've successfully created the scaffolding for a Rancher Cattle Template but we've not populated any of the metadata for our Template nor for the definition and configuration of our container-based app.

The definition of our application via docker-compose.yml and rancher-compose.yml is worthy of its own blog post (or two!) so we're going to punt on that for now and focus on just the basic metadata for the Template. In other words, just the contents of config.yml.

Minimal config.yml

The Rancher documentation contains detailed information about config.yml. We're going to do just enough to get things working but a thorough read of the docs is highly recommended.

config.yml

The config.yml is primarily the source of metadata associated with the Template. Let's look at a minimal example:

---
name: Demo App
description: >
  A Demo App which does almost nothing of any practical value.
version: 0.0.1-rancher1
category: Toy Apps
maintainer: Nathan Valentine <nathan@rancher.com|nrvale0@gmail.com>
license: Apache2
projectURL: https://github.com/nrvale0/rancher-cattle-demo

In case it wasn't evident from the filename, the metadata is specified as YAML. Given the above YAML and the Icon file present in this git commit, let's look at the new state of our Template:

Screenshot of improved demo Template

That's starting to look a lot better, but our Catalog Template still doesn't do anything useful. We'll cover how to define our application stack in the next blog post in this series. (HINT: It involves population of the docker-compose.yml and rancher-compose.yml files.)

A better way to create Templates

Before we move on to definition of our application, I need to tell you a secret...

While creating new Catalog Templates manually, as we have done above, doesn't require any deep magic, it's easy to make a small and silly mistake which results in an error. In fact, in going through the process while writing this post I made several small mistakes which resulted in having to repeat the process for generating screenshots. It would be excellent if there were tooling which would allow us to create new Catalog Templates in a fast, repeatable, low-probability-of-error way... and in fact there is. The Rancher community has submitted a Rancher Catalog Template 'generator' to The Yeoman Project. Assuming you have a working Node.js environment, generating a new Cattle Catalog Template with default scaffolding is as simple as the process shown below:

Animated GIF of creating a new Cattle Catalog Template with Yeoman
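For reference, the Yeoman workflow generally looks like the sketch below; the generator's npm package name used here (generator-rancher-catalog) is an assumption on my part, so check the Yeoman generator registry for the exact name before installing:

# Yeoman itself plus the (assumed) Rancher catalog generator.
$ npm install -g yo generator-rancher-catalog

# Run the generator from the root of your Catalog repo and answer its prompts.
$ cd ~/workspace/rancher-cattle-demo
$ yo rancher-catalog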

Setting Up DH + TLS-based VPN With OpenVPN on a Tomato-based Home Router

I recently spent the better part of an afternoon trying to get a TLS and Diffie-Hellman VPN working with OpenVPN on a Tomato-based router. This is a fairly aggressive VPN configuration in terms of protecting the data stream; see the OpenVPN docs for the benefits of this setup. Despite the OpenVPN and Tomato folks providing pretty decent documentation, the process was less than straightforward, mostly due to the relative lack of documentation on the Ubuntu side, and so I thought I would share the details both for folks who might be struggling with something similar and for posterity. I expect I’ll have to do this again at some point. ( Sure would be nice if the Open Source router folks provided REST APIs for this kind of thing so this stuff could be automated with cfgmgmt tools… Perhaps I should stop complaining and work out a pull request. ;) )

FWIW, I advise keeping the CA data, client config, and router configs in a private git repo for safe-keeping and rollback purposes. I sometimes fat-finger a small detail and hate having to deal with certificate revocation; it's much easier to just version control the whole thing and roll back to an earlier commit of the CA data when I screw something up. Also, SOHO-grade routers sometimes die, and it's much easier to upload an old config to a new router than to re-configure everything by hand.

That piece of advice out of the way, let’s get started…

Creating the crypto artifacts

First, let’s start with the setup of the required crypto artifacts. Thankfully, the OpenVPN documentation for the ‘easy-rsa’ tooling is quite good. Make sure to generate the following:

  • a CA cert
  • a "server" cert for your Tomato-based gateway/router
  • a "user" cert and key for each user you would like to have access to the VPN
  • a Diffie-Hellman static key generated with the 'openvpn' utility and the '--genkey' and '--secret' options
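Here is a minimal sketch of what that generation might look like; it assumes the easy-rsa 3 command set and uses hypothetical server and client names, so adapt it to whatever easy-rsa version ships with your distro:

# Build the CA plus server and per-user certs (easy-rsa 3 syntax assumed;
# 'tomato-router', 'alice', and 'bob' are placeholder names).
$ ./easyrsa init-pki
$ ./easyrsa build-ca
$ ./easyrsa build-server-full tomato-router nopass
$ ./easyrsa build-client-full alice
$ ./easyrsa build-client-full bob

# The static key generated with the 'openvpn' utility (used on the Keys and
# TLS Authentication tabs later in this post).
$ openvpn --genkey --secret static.key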

Setting up the server-side of the OpenVPN TAP

The Basic tab

Once you have each of the above crypto artifacts you are ready to configure the server-side of the VPN. I’m assuming you have some experience with Tomato and I don’t need to walk you through logging into the web-based interface to Tomato. In my case, and in the screenshots below, I’m using the Shibby version of Tomato with a custom theme and running on a NETGEAR Nighthawk R7000. As far as I know none of those things make for any discernible difference for anyone using Tomato on different hardware and the theming is cosmetic only.

When you are logged into the router’s web-based admin you’ll find the VPN configuration via the left-side menu under the “VPN Tunneling” section. Screenshots follow for each of the relevant configs tabs under “Server 1”:

OpenVPN Basic settings

A couple of things to note in the above image:

First, the tls-auth option in the dropdown at the bottom of the screen allows 3 values: Bi-directional, Incoming(1), and Outgoing(2). You’ll want to choose either Incoming(1) or Outgoing(2) and make sure to choose the opposite when setting up the client side of the VPN tunnel.

Second, I’ve chosen to use a tunnel type of “TAP”. This means the VPN connection will be at Layer 2 (as opposed to “TUN”, which is Layer 3). A Layer 2 tunnel makes the client appear to be directly on the LAN at the other end of the tunnel. This is the less efficient approach in terms of minimizing the amount of traffic across the tunnel, but it is also the most functional in that it makes it easier for your client to reach nodes on the other end of the tunnel using broadcast-based protocols like Avahi/Zeroconf and Bonjour.

Third, notice I’ve turned off DHCP for tunnel addresses. I’ve never been able to get DHCP working over a Tomato VPN tunnel. Instead I split the IP ranges on the LAN side into non-overlapping ranges with some static and some DHCP-managed. For instance, I have the following:

  • LAN network of x.x.x.0/24
  • x.x.x.1-10 - non-DHCP and reserved for use of networking devices or other odds-and-ends which might not support DHCP
  • x.x.x.11-100 - statically-assigned nodes (in my case primarily for Openstack Floating IPs)
  • x.x.x.101-149 - IP address for VPN tunnel termination
  • x.x.x.150-254 - DHCP addresses

Ok, on to the Advanced tab.

The Advanced tab

OpenVPN Advanced settings

A few things to call out for the Advanced tab…

  • I did a fair amount of research as to the best setting for ‘Encryption cipher’. In the end, ‘AES-256-CBC’ seemed to offer the best protection with the greatest likelihood of also being supported on any clients which might be connecting.

  • I believe the option for ‘Allow User/Pass Auth’ was checked by default. Since we are building out a CA where each user will have their own cert and key there’s really not a good reason to allow u/p auth and open up the possibility of someone grinding that data or somehow getting access to user credentials and accessing the VPN.

  • You might choose to turn the ‘Direct clients to redirect Internet traffic’ on if you’d prefer that clients not do split tunneling.

  • Because I’m sending DNS settings to VPN clients my internal node names are resolvable when a client is connected to the VPN. Kinda nice.

The Keys tab

OpenVPN Keys settings

The Keys tab is fairly self-explanatory; just copy/paste the cert and key information from the CA you generated in the first section of this blog. A couple of things to note here:

  • Make sure you don’t include anything but cert data in the server cert field. The OpenVPN docs are very clear on this matter but it is easy to overlook.

  • The Diffie-Hellman static symmetric key field is fine with or without the included comments.

Wrapping up the server-side config

At this point the server side of the VPN tunnel should be ready but, as we are about to see, that doesn’t mean that things will go smoothly on the client side. It’s a good idea to get familiar with the Status/System Logs and Administration/Admin Access sections of the Tomato web admin. The System Logs section will allow you to see the router’s syslog in a web browser. In Administration/Admin Access you can enable SSH-based access to the router, which then allows you to log in to the router and use ‘tail -f’ on /var/log/syslog. Both are supremely useful if you need to do any troubleshooting of the VPN tunnel setup and teardown.

Setting up the client-side of the OpenVPN TAP

The process of configuring the client-side of the OpenVPN tunnel will vary wildly depending on the client OS. I’ll demonstrate the process for Ubuntu 15.04 using the GNOME desktop and the bundled config tools for NetworkManager. We’ll start at NetworkManager/Edit Connections/Add/OpenVPN. That brings up the first screen (see below) where we’ll set the NetworkManager connection name and, more importantly, the hostname/IP address of the remote VPN gateway.

OpenVPN top-level

Things really start to get interesting once we click the ‘Advanced’ button in the lower right-hand corner which takes us to the General tab where we set the tunnel type to ‘TAP’ and enable LZO compression.

OpenVPN General tab

In the next tab, Security, we’ll select the cipher type which we set on the server-side, AES-256-CBC, and enable HMAC Authentication by selecting ‘SHA-1’. Note that nowhere in the OpenVPN docs, at least not anywhere I can find it, is it mentioned that the DH key we generated is of type SHA-1.

OpenVPN Security tab

Lastly, we set a few things in the TLS Authentication tab. Namely, find the DH key in your CA repository and set the Key Direction to the opposite value of what you set server-side. For the purposes of this post that would mean setting this value to ‘1’.

OpenVPN TLS tab

The Subject Match and certificate matching are optional but probably not a bad idea as an extra measure.

Troubleshooting

As I mentioned previously, I find it very useful to be able to log in to the router and tail the syslog output:

# tail -f /var/log/syslog

Similarly, assuming the client-side is Ubuntu 15.04 which uses systemd, you can get the client-side logs via:

$ sudo journalctl -f

The error messages spit out by NetworkManager and OpenVPN are actually quite useful and accurate.

A Small Addition to Deploy to Noop

Just a quick update to my previous post about the ‘Deploy to Noop’ technique for Puppet.

In the original version of the code in our demo environment we had something like this in our manifests/site.pp:

filebucket { 'main':
  server => $::servername,
  path   => false,
}

File { backup => 'main' }

$force_noop = hiera('force_noop')
unless false == $force_noop {
  notify { "Puppet noop safety latch is enabled in site.pp!": }
  noop()
}

hiera_include('classes')

As a reminder, the key bit is the callout to Hiera to resolve the value of the ‘force_noop’ variable. If it resolves to anything other than the boolean value of false we enable the noop safety latch via the ‘unless’ statement. As you probably recall, the noop() function will force all resources in the current scope and all child scopes into noop or tell-me-what-you-would-do-but-don’t-actually-do-it mode. Resolving the state of the noop safety latch from Hiera allows us to selectively disable/enable the noop safety depending on whatever layers we have defined in our Hiera router: e.g. datacenter, cloud, osfamily, etc.

Some folks asked how we could not only allow twiddling of the noop safety from the Puppet Enterprise Console but do so in a way such that the Console was authoritative even over what was embedded in our Hiera data. A common reason for asking for this functionality was to allow a non-Puppet-programmer to very easily halt convergence runs across an environment when Puppet code had been promoted and later determined to have some problem. Yes, you could just promote fixed code up through your automated deployment pipeline (Oh, you don’t have one of those? ;)) but this is very quick and very easy. So here’s the entire manifests/site.pp with the code to allow such a thing:

filebucket { 'main':
  server => $::servername,
  path   => false,
}

File { backup => 'main' }

$force_noop_real = str2bool( pick( $::force_noop, hiera('force_noop', 'true') ) )
if $force_noop_real {
  notify { "Puppet noop safety latch is enabled in site.pp!": }
  noop()
}

hiera_include('classes')

Line 8 above has most of the interesting action. It’s kind of a jumble but the end result is pretty straight-forward. Let’s start at the inside and work our way out.

  1. First, check Hiera for the value of ‘force_noop’. If no value for ‘force_noop’ is resolved from Hiera data we will default the value to the 2nd parameter of the function call: in this case Boolean ‘true’.

  2. Next, use the puppetlabs/stdlib function pick() to check the value of the top-scope variable ‘force_noop’. If the value has already been set then pick() will return that value. Otherwise it will return the value we generated in step #1.

  3. Lastly, whatever the value of the previous two expressions, we cast the result to a boolean with str2bool(). That is helpful because it allows us to replace the old ‘unless’ conditional with a more vanilla, and probably more easily understood, if statement (Line 9 above).

At the end of the day, the only substantive change we’ve made to allow the noop safety latch to be twiddled from the Puppet Enterprise Console is the pick() function call. The rest is just housekeeping.
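If you want to sanity-check the override behavior without touching the Console, one hedged way (relying on the fact that facts show up as top-scope variables) is to inject a throwaway ‘force_noop’ fact on a test node and watch which branch wins:

# With nothing set, Hiera (or the hard-coded 'true' default) decides;
# expect to see the safety-latch notify.
$ sudo puppet agent -t

# Injecting a force_noop fact makes pick() prefer the top-scope value over
# Hiera, so this run should enforce instead of noop.
$ sudo FACTER_force_noop=false puppet agent -t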

Happy Puppeting!

.gitignore to allow 'empty' directory

I've had a couple of situations where I wanted to be able to commit an empty directory to a git repo. Git doesn't allow you to add+commit empty directories so the usual way around this is to include a .gitinclude file in the 'empty' directory -- thus no longer empty.

$ mkdir /tmp/foo && cd /tmp/foo
$ git init
Initialized empty Git repository in /tmp/foo/.git/
$ mkdir bar
$ git add -A -n
$ touch bar/.gitinclude
$ git add -A -n
add 'bar/.gitinclude'

Okay, that seems better.

But in my case I would like to commit 'bar' but not any of its subdirectories. Let's see how that shakes out:

$ mkdir bar/baz
$ git add -A -n
add 'bar/.gitinclude'

That looks about right -- BUT IT ISN'T! Git isn't including 'baz' because baz is a directory with no content.

$ touch bar/baz/somefile
$ git add -A -n
add 'bar/.gitinclude'
add 'bar/baz/somefile'

Well, crap. How about I just add a .gitignore into 'bar':

$ echo '*' > bar/.gitignore
$ git add -A -n

Uh, now we don't get anything on an 'add'?!?

Here's the right way to get an 'empty' 'bar' with no subdirectories whether those subdirectories have content or not:

$ rm bar/.gitinclude
$ cat > bar/.gitignore <<EOF
*
!.gitignore
EOF
$ git add -A -n
add 'bar/.gitignore'
$ tree
.
└── bar
    └── baz
        └── somefile

2 directories, 1 file
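As a quick sanity check, 'git check-ignore -v' will confirm which rule is doing the work (output format: source file, line number, pattern, then the path):

$ git check-ignore -v bar/baz/somefile
bar/.gitignore:1:*	bar/baz/somefile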

That took way too long to figure out! :\ Here's a repo if you don't believe it worked:

nrvale0/gitinclude_tricks

Turning the Brownfield Green - aka Puppet and "Deploy to Noop"

This is not my beautiful house!

I'm going to tell you a story. You are the protagonist in this story and you are in TechOps or perhaps you are a Developer.

There are things you enjoy about your work: you enjoy learning and mastering new technologies, you get satisfaction from knowing that the systems you design are resource efficient and reliable, and lastly you enjoy using your skills to solve problems with your team members and for your employer. Because you enjoy these things you spend lots of time, considerable free time even, keeping up with the state-of-the-art and you are excited about the possibilities surrounding this nebulous term DevOps and all of the exotic crash-landed-in-the-deserts-of-New-Mexico technology which has been flowing out of companies like Spotify, Google, Facebook, and Twitter in the last couple of years.

There are also some things you don't like about your work: drifting project requirements, chronically late project delivery, the amount of effort expended propping up poorly implemented legacy systems, and the sheer amount of time you spend cutting firebreaks in the infrastructure for problems which could be avoided if only your team had the time to ramp up on better, more modern processes and tooling.

What are your options?

Well, you could sell everything, use that money to buy a Westie campervan and The Canon of DevOps and embark on a cross-country odyssey, stopping to learn multi-pitch sport climbing in Yosemite while dropping your body-fat to 9%, and ultimately ending up many months later at the doorstep of a San Francisco startup as a lean, mean, and freshly-anointed Squire of DevOps. Or you could quit being a flake and triage the most pressing issues at your current org and strategize how you might go about fixing them. ( Also, we have plenty of people like that in the Bay Area. Please don't do that! ;) )

After some reading and a lot of thinking you come to the conclusion that the first problem which must be resolved is eliminating the multi-week, error-prone, and unreproducible infrastructure deployments which undergird your organization's applications. One Thursday evening over beers you and your two best friends from TechOps pronounce to those within earshot:

"No longer shall we hand-configure servers! No longer shall we go without service monitoring! Now and forevermore we shall Puppetize!"

And then you are kicked out of Applebee's for being dorky.

And now I shall sing to you the song of my people.

Getting kicked out of Applebee's turns out to be a boon (in so many ways). In your somewhat buzzed bravado your Confederation of Rogue TechOps immediately downloads Puppet Enterprise and each member burns through the courses in the Puppet Labs Workshop. You learn how to deploy and configure raw new nodes with Razor, how to leverage pre-written modules posted to the Puppet Forge, how to structure your code and classify your nodes with the Roles and Profiles pattern, and, perhaps most importantly, how to test and promote Puppet code using Puppet environments and r10k. On the last day you learn to coordinate multi-node deployments using a clever combination of MCollective and dalen/puppetdbquery and it is good. Your Google Compute Engine cloud is a crystal palace and when the wind blows through the spires it resonates in the frequency of DevOps. And it is purple.

You take what you have learned back to the office and the next application infrastructure delivery proceeds from greenfield to production-grade with only minor bumps and ahead of schedule, and in another fit of bravado you announce to the CTO, "We shall now Puppetize All The Things! I shall now have root on the AIX nodes!" at which point you are bludgeoned from behind by the head of the Change Control Board. When you awake you are in your cube and duct-taped to your chair. Your keyboard is out of reach. The world is spinning. Just as you lapse back into unconsciousness you hear the head of the CAB say to your CTO,

"And how, exactly, are we supposed to know what Puppet is changing on these systems! Does it integrate with our deployment runbook wiki? No!"

Your last thought before the world goes dark, "I may have overreached. If only Puppet supported a way to view pending system changes without actually enforcing them(1,2,3)! And if only I had a way to toggle that functionality from code!"

We might even call it "Deploy to Noop".

Deploy to Noop

That thing you have envisioned...it's really really easy to implement! Let's start simple and work our way up to something which could be deployed in production.

If you haven't already you should first read up on Puppet's 'noop mode':

Now that you understand Puppet's noop mode, I want to point you to a little-known but super-useful Puppet Forge module: trlinkin/noop. The thrust of it is this:

Calling the function 'noop()' bundled with trlinkin/noop puts all resources in the current scope and all child scopes into noop mode.

If your site.pp looks like so:

/etc/puppetlabs/puppet/manifests/site.pp

noop()
node default {
  notify {'Hello, World! I think you are going to see a noop here --->':}
}

you should then see the notify resource 'enforced' (not really enforced) in noop mode:

$ sudo puppet module install trlinkin/noop
Notice: Preparing to install into /etc/puppetlabs/puppet/modules ...
Notice: Downloading from https://forge.puppetlabs.com ...
Notice: Installing -- do not interrupt ...
/etc/puppetlabs/puppet/modules
└── trlinkin-noop (v0.0.2)

$ sudo puppet apply /etc/puppetlabs/puppet/manifests/site.pp
Notice: Compiled catalog for ensenada.nathanvalentine.private in environment production in 0.07 seconds
Notice: /Stage[main]/Main/Node[default]/Notify[Hello, World! I think you are going to see a noop here --->]/message: current_value absent, should be Hello, World! I think you are going to see a noop here ---> (noop)
Notice: Node[default]: Would have triggered 'refresh' from 1 events
Notice: Class[Main]: Would have triggered 'refresh' from 1 events
Notice: Stage[main]: Would have triggered 'refresh' from 1 events
Notice: Finished catalog run in 0.49 seconds

So that's pretty interesting! But the noop() call in our site.pp currently sets noop at global scope and therefore turns off enforcement for all resources in all node statements. We have to step back and think about the problem we are trying to solve.

Because we are developing a Puppet codebase and we have a brownfield environment, we would really like to default all nodes to noop and then selectively enable enforcement on a sample set of a particular server role. For instance, Noop All The Things except for web servers which are not in production. We iterate over that process and eventually every node in every role in our environment has transitioned from noop-Puppet to Puppet runs enforcing our well-smoketested-on-production-nodes codebase!

What if there were a tool that allowed us to set top-scope variable values in Puppet code based on Puppet Facts? Oh!! Hiera!! Let's modify our site.pp like so:

/etc/puppetlabs/puppet/manifests/site.pp:

# Force noop mode unless specified otherwise in hiera
$force_noop = hiera('force_noop')
unless false == $force_noop {
    notify { "Puppet noop safety latch is enabled in site.pp": }
    noop()
}

# Allow hiera to assign classes
hiera_include('classes')

node default {
  notify { "The value of 'force_noop' is ${force_noop}": }
}

and then we'll set up Hiera to reference YAML files and inject a layer in our Hiera router for 'clientcert':

/etc/puppetlabs/puppet/hiera.yaml:

---
:backends:
  - yaml

:yaml:
  :datadir: "/etc/puppetlabs/puppet/data"

:hierarchy:
  - "clientcert/%{::clientcert}"
  - common

/etc/puppetlabs/puppet/data/common.yaml:

---
force_noop: true

and the clientcert YAML for the nodes in our environment:

$ cd /etc/puppetlabs/puppet/data
$ tree clientcert 
clientcert
├── master.yaml
└── ubuntu1204-0.yaml
$ cat clientcert/master.yaml clientcert/ubuntu1204-0.yaml 
---
force_noop: false
---
force_noop: true

To break it down, the YAML above states:

  1. our Puppet master, named 'master', will run in enforcement mode
  2. our agent node, named 'ubuntu1204-0', will run in noop mode

Now, if you understand Hiera, you should see our ability to sift-and-sort nodes for enforcement or noop is only limited by the routing logic in our Hiera router. It's worth noting that the 'force_noop' top-scope variable can also be set from:

  1. Hiera backends other than YAML. Perhaps something like hiera-http?
  2. an External Node Classifier or the Puppet Enterprise 3.4 Node Classifier Puppet App
  3. by editing Puppet Enterprise Groups and Nodes in the PE Console.

Some of you are probably saying 'Nifty!' and others of you are probably saying, "What the hell just happened?!?!". I have good news...I've put together a self-contained Vagrant environment which you can use to play with Deploy to Noop until it clicks. You can find the environment here:

http://github.com/nrvale0/deploy-to-noop-part1

"This is flipping cool and I'm going to run off and put it into Production immediately!"

Awesome. Glad you like it. I've used this technique at a number of customer sites and I think it is super-useful. There are a few caveats:

When you are making changes to your Hiera data to turn off the noop safety latch on a node or set of nodes, you'd best understand how to either manually query and verify the results from Hiera or, even better, know how to wire up your CI system / deployment orchestration system to test that you aren't accidentally turning off noop in places you don't expect. Here's an example of how to query Hiera from the master command line, assuming the Hiera data and router config shown above:

$ hiera -c /etc/puppetlabs/puppet/hiera.yaml force_noop ::environment=production ::clientcert=master
false

Also, noop agent runs have the potential to generate rather large Report Logs every time the agent runs ("These are the things I would like to change but I cannot because I am in noop mode!"). If your Puppet master is PuppetDB-integrated (the default in Puppet Enterprise) this can result in PuppetDB consuming quite a bit of storage. PuppetDB data has a TTL, so as you disable the noop safety for more and more of your nodes the amount of PuppetDB storage required (assuming a constant number of nodes and a steady agent run interval) should decrease. I've had clients with thousands of nodes in noop mode for months at a time, and they had PuppetDB databases in the hundreds-of-GBs range. Be prepared for this when you are laying out the storage for your Puppet master/PuppetDB server.

I'd also encourage you to move your nodes into enforcement as rapidly as possible. What's the point of writing Puppet code if you never allow Puppet to converge the configuration of the nodes?!? This is meant as a temporary hack to allow you to get Puppetizing quickly before you have... dum dum dum... developed a full CI-integrated service delivery/deployment pipeline. But that's a topic for another blog! ;)

Conclusion

Feel free to email me if you have questions. Pull Requests and bug reports are welcome and encouraged!

Happy de-brownfielding!

Android "Insufficient storage" fix

So lately I've been having intermittent problems when upgrading Android apps on my HTC One running CyanogenMod; the dreaded "Insufficient storage" nonsense. I'm not going to pretend I've spent the time to dig into this issue to find the root cause, but I will tell you which of the numerous suggested work-arounds have been successful in my case. For instance, in the case of the AirBNB app:

(of course the usual #include <disclaimer.h> applies)

$ adb shell
shell@m7:/ $ su -
root@m7:/ # cd /data/app-lib
root@m7:/data/app-lib # find ./ -type f -ipath '*airbnb*' -iname '*.so' -exec rm -f {} \;

and then make another pass at upgrading the app from Google Play.

If the update still fails then you can go Big Hammer with the following command:

$ adb shell
shell@m7:/ $ su -
root@m7:/ # cd /data/app-lib
root@m7:/data/app-lib # find ./ -name '*airbnb*' -exec rm -rf {} \;

Note that this often resets the data for the app such that you may have to login to the app again.

Android KitKat and the Android Debug Bridge RSA Key

I recently ran into the dreaded "Insufficient storage available" problem when trying to reinstall the Google Drive app on my HTC One running Cyanogenmod's spin on KitKat. In the usual shaving-the-yak fashion the proposed fix for this problem (the fixes in the previously-linked article did not resolve it :() required an upgrade of ADB and the Android SDK. As it turns out, KitKat now requires a click-to-accept on your phone when you've connected for USB debugging with an unrecognized RSA key. This is news to me in a number of ways as I didn't even know there was an RSA key built out as part of ADB debugging but, regardless, my phone was not prompting in any way about any keys. It took about 20 minutes to figure out the following series of steps to get ADB working:

  1. Settings/Developer Options/Revoke USB debugging authorization
  2. Settings/Storage/USB computer connection(top-right of the screen for me)/Media device(MTP) = checked
  3. Optionally, Settings/Android debugging = unchecked and then re-checked
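After walking through those steps, a quick way to confirm the key exchange finally happened is to restart the ADB server and check the device state (standard adb commands; the serial number below is obviously a placeholder):

$ adb kill-server
$ adb start-server
$ adb devices
List of devices attached
HTXXXXXXXXXX	device

If the last column still reads 'unauthorized', the RSA prompt on the phone has not been accepted yet.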

Now, where were we....oh, yes, Google Drive...

Puppet noop via Resource Defaults and Resource Collectors

Overview

In a previous article I wrote about Puppet's noop mode which allows you to compare a node's current config state against the state specified in Puppet code without actually making any changes to the node. This is a super useful low-risk way of testing your Puppet code on production nodes and ensuring you don't break anything while testing. (Of course it is always useful to have infrastructure for automated testing but that is another discussion for another time.) I also wrote about the three commonly understood ways of invoking noop mode:

  1. via the 'noop = true' setting in the '[agent]' section of puppet.conf
  2. via 'puppet agent -t --noop' and 'puppet apply --noop <puppet code>'
  3. via 'noop => true' specified as a metaparameter of a Puppet resource

In the first two examples above our noop mode is all-or-nothing; noop will be enforced for every resource Puppet is managing on our node. That provides the ultimate in safety, but there are scenarios where we'd like to enforce some resources and not others. The third option above provides the ability to specify noop per-resource in our Puppet DSL code, and while that is useful we certainly don't want to have to add that code to every resource we are testing.

In this article we are going to look at two additional ways to specify noop for some subset of Puppet-managed resources:

  1. noop via Resource Defaults
  2. noop via Resource Collectors

noop via Resource Defaults

For every resource type available in our Puppet code, whether that be via the core resource types or those added to our codebase via a Puppet module, we can use Resource Default syntax to specify default attribute values. Let's look at some example code:

file { '/tmp/file1':
  ensure => file,
  owner => 'root',
  group => 'root',
  mode => '0644',
  noop => true,
}

file { '/tmp/file2':
  ensure => file,
  owner => 'root',
  group => 'root',
  mode => '0600',
  noop => true,
}

You might think the code above would create the files '/tmp/file1' and '/tmp/file2', and you would be close to correct. Note the 'noop => true' on both file resources. Puppet is going to check whether these files exist with the specified attributes for ownership, group, etc., and then, if there is a misalignment between the state of the files on the system and the state specified in our code, it is going to tell you what it would like to change but cannot due to the noop metaparameter values. Also, it would seem there is an opportunity for some code compression; we have specified identical values for attributes like 'owner' in both of the file resources. Let's see an example of Resource Defaults which knocks out both of those issues:

File { owner => 'root', group => 'root', mode => '0644', noop => true, }

file { '/tmp/file1':
  ensure => file,
}

file { '/tmp/file2':
  ensure => file,
  mode => '0600',
  noop => false,
}

In the code above the "File {..." line sets a series of defaults for all file resource types in the current scope. For example, unless otherwise specified/overridden, all files created in the current scope will have a mode value of '0644'. As you can see, the '/tmp/file2' file resource above actually overrides both 'mode' and 'noop' values. Assuming I have saved the above code in /tmp/foo.pp, here's the output of a 'puppet apply' run:

$ sudo puppet apply /tmp/foo.pp
Notice: Compiled catalog for testnode.private in environment production in 0.12 seconds
Notice: /Stage[main]/Main/File[/tmp/file1]/ensure: current_value absent, should be file (noop)
Notice: Class[Main]: Would have triggered 'refresh' from 1 events
Notice: Stage[main]: Would have triggered 'refresh' from 1 events
Notice: Finished catalog run in 0.31 seconds
$ ls -l /tmp/file* 
-rw------- 1 root root 0 Apr 15 07:17 /tmp/file2

Hopefully no great surprises there. :)

Because the Resource Default syntax is available for all resource types, you might also do the following kinds of things:

User { managehome => true, noop => true, }
Package { provider => 'gem', noop => true, }
Service { ensure => 'running', noop => true, }

While there is some value in being able to specify noop for lots of resources in one fell swoop, there are a couple of limitations with the above approach:

  • First, the Resource Defaults only apply in the current scope and child scopes. Since 90% of the Puppet code you write will live in a module with its own class-based namespace and associated scope the impact of your Resource Defaults will be limited. That is a good thing and a bad thing. You could go into site.pp and specify Resource Defaults at global scope but then you are tweaking defaults for all resources on the node and that introduces the possibility for all kinds of odd interactions which you'd probably rather avoid. Just as you would be careful doing things in global scope in other languages you should be careful doing things at global (site.pp) scope in Puppet.

  • Second, if you wanted to enable noop mode for all Types using Resource Defaults you'd have to explicitly do so for every Type in your Puppet codebase. Until the Puppet DSL supports some form of introspection (and maybe not even then) it isn't possible to know every Type available in your codebase. Even doing so for the core types, of which there are several dozen, would be a lot of ugly boilerplate code, and it still wouldn't give you coverage for custom Types which come bundled with a Forge module.

So, at best, it probably only makes sense to use Resource Defaults for enabling noop mode at the class/module level.

Wouldn't it be nice if we could somehow query the list of managed resources from within the Puppet DSL and set noop where appropriate? Read on!

noop via Resource Collectors

Up to this point I've written about "resources managed on the node" and been very careful to not explore that concept at any greater depth. Time for more Puppet fundamentals...

It is a common misconception that during a Puppet agent run the agent actually downloads the Puppet code from the Puppet master. Other configuration management systems often work that way but the design of Puppet is such that instead the Puppet master 'compiles a catalog' which is essentially a compressed JSON document which contains all of the details of the resources to be managed on the requesting node. That catalog is downloaded by the agent and passed to a layer in the Puppet agent called the Resource Abstraction Layer which is responsible for converting the catalog into the desired node config state. There are some constructs in the Puppet DSL that allow us, as one of the last stages of catalog compilation, to manipulate the resources in the catalog.

In this case I'm referring to Resource Collectors. Let's take a look at some examples of Resource Collectors.

Here's a simple resource collector to force all Package resources on the system to noop mode:

Package <| |> {
  noop => true,
}

This means all Package resources in the node's catalog will be noop'ed, and thus the Puppet agent will report which packages it would like to change but will not actually make those changes. A note for those who might be a little more advanced in their Puppet usage: Resource Collector operations are parse-order independent.

There are scenarios where we might want to mark just specific packages for noop as opposed to the entire set of packages in the catalog. We can pick out a subset of packages using Resource Collector search expressions.

Here's an example of marking the 'kernel' package noop:

Package <| title == 'kernel' |> {
  ensure => latest,
  noop => true,
}

so if the 'kernel' package is in our catalog and a newer version is available in our repositories, we'll get an entry in the agent run report indicating as much, but Puppet will not, perhaps thankfully, take it upon itself to install the new package.

Here are some additional examples of Resource Collectors using search expressions:

Package <| title == 'openssl' or title == 'openssh-server' |> {
  ensure => latest,
}

File <| mode == '0777' |> {
  mode => '0644',
}

Service <| title == 'selinux' or title == 'rtools' |> {
  ensure => stopped,
  enable => false,
}

though some of these represent questionable practices in actual production. ;)

Resource Collectors, especially when paired with search expressions, give us a lot more flexibility to pick out catalog resources for noop mode but they suffer a similar limitation to Resource Defaults in that if we were to try to use them to noop all resources in the Catalog we'd have to include a statement for each Resource Type in our catalog and we can never know the set of all available Resource Types in our codebase without some clever introspection capabilities not currently present in the DSL.

An Important Wrinkle with Resource Collectors

There's a very important, and surprising, wrinkle you should be aware of when you use a Resource Collector; if you were to do something like this:

User <| |> {
  ensure => present, 
}

that seems rather innocuous: "For every user in my catalog, ensure that user is present." At worst, you might override some code somewhere else that specified a particular user should be 'ensure => absent', right? Well, there's a corner case involving Virtual Resources. If you collect a Resource Type which has Virtual Resources in the catalog, you may very well instantiate those resources. Please read up on Virtual Resources before using the Collector.
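To make that corner case concrete, here is a small sketch you can run with 'puppet apply'; the virtual user 'bob' is purely hypothetical and the output is trimmed:

$ cat > /tmp/collector-wrinkle.pp <<'EOF'
# 'bob' is only declared virtually; with no collector or realize(), nothing
# would be managed for this user at all.
@user { 'bob':
  ensure => present,
}

# This collector matches every User resource in the catalog -- and collecting
# a virtual resource also realizes it, so 'bob' is now actually managed.
User <| |> {
  noop => true,
}
EOF
$ sudo puppet apply --noop /tmp/collector-wrinkle.pp
Notice: /Stage[main]/Main/User[bob]/ensure: current_value absent, should be present (noop)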

Let's Review

We've covered quite a bit of ground in the last two posts so let's step back for a second to get our bearings before we go on to the next article. We've talked about:

  • the basic concept of the noop mode in Puppet
  • basic ways to invoke noop mode across an entire Puppet enforcement run:
    • puppet agent -t --noop
    • puppet apply --noop /tmp/foo.pp
  • how to invoke noop mode for individual resources (the 'noop => true' metaparameter)
  • invoking noop across a limited set of resources:
    • Resource Defaults
    • Resource Collectors

The end goal of all of this noop cleverness is to allow us to test our Puppet code on production nodes and catch any issues which might have snuck through earlier testing with non-production nodes. We want the ability to not only do coarse-grained/whole-catalog noop testing but also the ability to pick and choose which parts of our codebase and managed resources are in full enforcement or noop mode.

In the next post we'll tie all of this together and start our investigation of the techniques I call "deploy to noop", which allow us to enforce particular parts of our codebase for particular sets of nodes in our production network.

The Basics of Puppet "noop mode"

Overview

There are a few solutions out there for configuration management and server orchestration these days, but over the next few articles I'm going to talk about an oft-overlooked feature unique to Puppet called "noop mode". If you are an old hand with Puppet please be patient. I'm going to start with the basics of noop mode and build to some advanced deployment techniques I collectively refer to as "deploy to noop" which greatly reduce guesswork and hand-wringing when doing brownfield Puppet and Puppet Enterprise deployments in large, multi-platform, complicated environments.

A Quick Intro to Puppet

For those who might not be familiar with Puppet, it is a programming language and toolchain for managing the configuration of the nodes in your computing infrastructure. Traditionally this has meant UNIX-like operating systems but as of the last couple of years that has expanded to include MS Windows systems and even network and storage devices. Instead of logging into these nodes and hand-configuring them, we instead choose to manage nodes via the Puppet Domain Specific Language (DSL) and a server-client ( master-agent in Puppet parlance ) architecture which centralizes and automates the enforcement of our nodes' desired state as specified in our Puppet code. The process of taking a node from its current config state to our desired state is called "convergence". In the example below I show some Puppet code which lives in my Puppet master's cached copy of my Puppet codebase and which I've chosen to have enforced on my Ubuntu workstation:

user { 'testuser':
  ensure => present,
}

package { 'vim-haproxy':
  ensure => installed,
}

The above code will converge the state of my node such that at the end of a Puppet agent run I should have a user called 'testuser' and the package 'vim-haproxy' will be installed. Let's see the output of a manual verbose Puppet agent run:

$ sudo puppet agent -t
Notice: Compiled catalog for testnode.private in environment production in 0.75 seconds
Notice: /Stage[main]/Main/Package[vim-haproxy]/ensure: ensure changed 'purged' to 'present'
Notice: /Stage[main]/Main/User[testuser]/ensure: created
Notice: Finished catalog run in 7.10 seconds

We can manually check our new state against our desired state with the usual UNIX command-line tools:

$ id testuser
uid=1004(testuser) gid=1007(testuser) groups=1007(testuser)
$ dpkg -l vim-haproxy
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version      Architecture Description
+++-==============-============-============-=================================
ii  vim-haproxy        2:7.4.000-1u amd64        Vi IMproved - enhanced vi editor 
$ id testuser2
id: testuser2: no such user

or with the Puppet toolchain itself via the 'puppet resource' command, where 'puppet resource' takes a type and the title of the resource we'd like to verify:

$ puppet resource user testuser
user { 'testuser':
  ensure => 'present',
  gid    => '1007',
  home   => '/home/testuser',
  shell  => '/bin/sh',
  uid    => '1004',
}
$ puppet resource package vim-haproxy
package { 'vim-haproxy':
  ensure => '1.4.24-1',
}
$ puppet resource user testuser2
user { 'testuser2':
  ensure => 'absent',
}

Let's consider the output of the 'puppet resource' verification commands above. If you are familiar with the Puppet DSL you'll notice that the 'puppet resource' command spits out valid Puppet DSL which represents the current config state of our node. I threw in an extra query for the user 'testuser2' to demonstrate the output of the 'puppet resource' command when we query for a resource which is not present on the node. 'testuser2' is not on our node, so its current config state is represented as 'ensure => absent'.

Some readers might be saying, "I'm already familiar with the 'id' and 'dpkg' commands on Ubuntu so why bother with the 'puppet resource' queries at all?" Well, consider that once you become proficient with Puppet you might be writing Puppet code to manage users and packages on many different types of systems, and 'dpkg' and 'id' may not be present on all of those platforms. 'puppet resource' should work everywhere you have Puppet installed!

We've had a quick fly-by of how Puppet works. There's plenty more to learn at http://docs.puppetlabs.com. Let's get back to our discussion of noop mode.

What is noop mode?

In the examples I provided above we converged a node from its present state to the desired state as captured in our Puppet codebase. That workflow is fine if we have spun up devtest nodes for testing our Puppet code but it's not always possible to have a devtest node which perfectly mirrors the config state of a node in our production environment. Puppet's noop mode allows us to see the changes Puppet would like to make while preventing Puppet from actually performing the convergence. Once we start digging into Puppet we are going to see there are lots of different places we can enable noop mode and where we choose to enable noop mode has an impact on which and how many changes we prevent.

In this article we are going to cover only the three most widely known ways of enabling noop mode:

  1. in puppet.conf
  2. as a command-line parameter to the 'puppet agent' and 'puppet apply' commands.
  3. as a resource metaparameter

Enabling noop mode in puppet.conf

If we want absolute assurance that Puppet will only ever report the changes it would like to make without ever actually making those changes we can do so via a setting in the node's puppet.conf.

FLOSS Puppet and Puppet Enterprise have the puppet.conf file in slightly different locations. The easiest way to find your puppet.conf file is with the following command:

$ sudo puppet agent --configprint config
/etc/puppet/puppet.conf

If you wanted to force noop mode for all Puppet agent runs you simply set 'noop = true' in the '[agent]' section of your puppet.conf as shown below:

# puppet.conf
...
[agent]
  noop = true

Let's see how that impacts a Puppet agent run after we unwind the changes we made in our previous testing:

$ sudo puppet resource user testuser ensure=absent
Notice: /User[testuser]/ensure: removed
user { 'testuser':
  ensure => 'absent',
}
$ sudo puppet resource package vim-haproxy ensure=absent
Notice: /Package[vim-haproxy]/ensure: removed
package { 'vim-haproxy':
  ensure => 'purged',
}
$ sudo puppet agent -t
Notice: Compiled catalog for testnode.private in environment production in 0.69 seconds
Notice: /Stage[main]/Main/Package[vim-haproxy]/ensure: current_value purged, should be present (noop)
Notice: /Stage[main]/Main/User[testuser]/ensure: current_value absent, should be present (noop)
Notice: Class[Main]: Would have triggered 'refresh' from 2 events
Notice: Stage[main]: Would have triggered 'refresh' from 1 events
Notice: Finished catalog run in 0.53 seconds

We can see by the '(noop)' at the end of the lines above that Puppet wanted to converge the user and the package but did not due to our noop setting. If you run 'puppet resource' for the user and the package now the results of the queries would verify that neither testuser nor the vim-haproxy package are installed.
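For example, those queries would come back looking roughly like the sketch below (the exact ensure value for an uninstalled package can vary by platform and provider; 'purged' is what Debian-family systems typically report):

$ puppet resource user testuser
user { 'testuser':
  ensure => 'absent',
}
$ puppet resource package vim-haproxy
package { 'vim-haproxy':
  ensure => 'purged',
}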

Congratulations, you have absolute assurance Puppet is never going to converge your node but, um, that's kind of the point of a configuration management system so, in the end, perhaps that is not very useful. Go ahead and remove the 'noop = true' from your puppet.conf and we'll look at a couple more ways you can enable noop mode.

Enabling noop mode during manual 'puppet agent' or 'puppet apply' runs

I'm not going to get into the difference between the 'puppet agent' and 'puppet apply' commands in this article but you can click through the provided links to dig into the details. For now it is sufficient to say they are two ways you can converge your system to a state specified in Puppet code. For either command, if you would instead prefer to see what Puppet would like to change without actually making any changes, you can do so like this:

$ puppet agent -t --noop
$ puppet apply --noop /path/to/puppetcode.pp

and you will see output appended with '(noop)' and could verify the config state of your node had not changed with 'puppet resource' queries just as in the case where you had specified noop mode in your puppet.conf.

Enable noop mode as a Resource Metaparameter

To this point in the article I've held off on some Puppet DSL jargon which is now necessary. First, please take a look at the following Puppet DSL code:

user { 'testuser':    
  ensure => present,    
  noop => true,    
}    

package { 'vim-haproxy':    
  ensure => present,    
}

In the code shown above we have two 'resources':

  1. a user resource with title 'testuser' with attribute ensure value present.
  2. a package resource with title 'vim-haproxy' with attribute ensure value present.

There's something a little special about the user resource though. There's an additional attribute, 'noop', with a value of 'true'. The 'noop' attribute is what's called a metaparameter. Let's just say that metaparameters are available for all Types in the Puppet DSL and leave it at that for now. The important take-away here is that for any resource type you are attempting to manage in the DSL you can set 'noop => true', and even if you have not enabled noop mode from puppet.conf nor from the command line, that particular resource will not be converged. Also important to note: barring a change in puppet.conf or a --noop passed at the command line, the above code will still install the vim-haproxy package. The noop metaparameter is a way to enforce noop mode with a very fine resolution; in this case, for just a single resource.

Where have we been and where are we going?

In this article I provided some very basic examples of Puppet code and we talked a little bit about config state convergence. We also stepped through various ways you can enable Puppet's "noop mode" to get a convergence dry-run. Noop mode is useful in lots of situations but our particular use case is for testing config state changes on production systems while ensuring we aren't going to make any changes to that production system and therefore potentially break production. We saw three places where we could easily enable noop mode even as a Puppet beginner.

Upcoming articles will require more advanced understanding of the Puppet DSL and the Puppet stack. I will introduce more sophisticated ways of utilizing noop mode for testing and verification.

Please stay tuned and thanks for reading.