TechOpsGuys.com Diggin' technology every day

August 27, 2015

Container Hype

Filed under: Random Thought — Tags: — Nate @ 5:03 am

(You can see part two of my thoughts on containers here.)

I’ll probably regret this post, at least for a little while. I happened to wake up at about 3AM this morning, couldn’t fall back asleep quickly, and found myself thinking about an article I read last night on Docker containers, and a couple of Skype chats I had with people regarding it.

Like many folks, I’ve noticed a drastic increase in the hype around containers, specifically Docker, over the past year or so. Containers are nothing new; there have been implementations of them on Solaris for a decade, anyway. I think Google has been using them for about a decade as well, and a lot of the initial container infrastructure in Linux (cgroups etc.) came from Google. Red Hat has been pushing containers through their PaaS system OpenShift for at least five years now, since I first started hearing about it.

My personal experience with containers is limited – I deployed a half dozen containers in production for a very specific purpose a little over a year ago, using LXC on Ubuntu. No Docker here; I briefly looked into it at the time and saw it provided no value for what I do, so I went with plain LXC. The containers have worked well since deployment and have completely served their purpose. There are some serious limitations to how containers work (in the Linux kernel) which today prevent them from being used in a more general sense, but I’ll get to that in another post perhaps (this one ended up being longer than I thought). Perhaps some of those issues have been addressed since last year’s deployment, I am not sure. I don’t run bleeding edge stuff.
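For reference, the plain-LXC workflow I’m talking about is nothing exotic. Here is a rough sketch of the sort of commands involved (the container name and resource limits are illustrative, and exact syntax varies by LXC/Ubuntu version – this is not my actual config):

    # create an Ubuntu container from the distro template (name is made up)
    sudo lxc-create -t ubuntu -n app01

    # optional resource limits via cgroups, set in /var/lib/lxc/app01/config:
    #   lxc.cgroup.memory.limit_in_bytes = 2G
    #   lxc.cgroup.cpuset.cpus = 0-1

    # start it in the background, get a shell inside it, list containers
    sudo lxc-start -n app01 -d
    sudo lxc-attach -n app01
    sudo lxc-ls --fancy

Nothing Docker-specific was needed for the use case I had – the LXC userspace tools plus a bit of config management covered it.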

I’ll give a bit of my own personal experience here first so you get an idea where I am coming from. I have been working in technical operations for organizations (five of them at this point) running more or less SaaS services (though internally I’ve never heard that term tossed about at any company; we do run a hosted application for our customers) for about 12 years now. I manage servers, storage, networking, security to some degree, virtualization, monitoring etc. (the skills and responsibilities have grown over time of course). I know I am really good at what I do (I’ve tried to become less modest over the past couple of years). The results speak for themselves though.

My own personal experience again here – I can literally count on one hand the number of developers I have worked with over the years that stand out as what I might call “operationally excellent”: fully aware of how their code or application will behave in production and building things with that in mind, or knowing how to engage with operations in a really productive way to get questions answered on how best to design or build things. I have worked with dozens of developers (probably over 100 at this point); some of them try to do this, others don’t even bother for some reason or another. The ones I can count on one hand though, truly outstanding, are a rare breed.

Onto the article. It was a good read; my only real question is whether it represents what a typical Docker proponent thinks of when they think about how great Docker is, or how it’s the future, etc. Or is there a better argument? Hopefully it does represent what a typical Docker person thinks, so I’m not wasting my time here.

So, to address it point by point, and try to keep it simple:

Up until now we’ve been deploying machines (the ops part of DevOps) separately from applications (the dev part). And we’ve even had two different teams administering these parts of the application stack. Which is ludicrous because the application relies on the machine and the OS as well as the code, and thinking of them separately makes no sense. Containers unify the OS and the app within the developer’s toolkit.

This goes back to experience. It is quite ludicrous to expect developers to understand how to best operate infrastructure components, even components that operate their own app (such as MySQL), in a production environment. I’m sure there are ones out there that can effectively do it (I don’t claim to be a MySQL expert myself, even having worked with it for 15 years now – I’ll happily hand that responsibility to a DBA, as do most developers I have worked with), but I would wager that number is less than 1 in 50.

Operating things in a development environment is one thing – go at it, have a VM or a container or whatever that has all of your services. Operating correctly in production is a totally different animal. In a good production environment (and hopefully in at least one test environment as well) you have significantly more resources to throw at your application to get more out of it – things that are just cost prohibitive or even impossible to deploy at a tiny scale in a development environment (when I say development environment I mean something that runs on a developer laptop or the like). Even things like connectivity to external dependencies likely don’t exist in a development environment. For all but the most basic of applications, production will always be significantly different in many ways. That’s just how it is. You can build production so it’s really close to other environments, or even exactly the same, but then you are compromising on so much functionality, performance, and scalability that you’ve really just shot yourself in the foot, and you should hope you don’t get to any kind of thing resembling scale (not “web scale” mind you), because it’s just going to fall apart.

Up until now, we’ve been running our service-oriented architectures on AWS and Heroku and other IaaSes and PaaSes that lack any real tools for managing service-oriented architectures. Kubernetes and Swarm manage and orchestrate these services

First off I’m happy to admit I’ve never heard of Kubernetes and Swarm, and while I have heard of Heroku, I have no idea what it does. I have used AWS in the past (for about 2 years – the worst experience of my professional career; I admit I do have PTSD when it comes to Amazon cloud).

I have worked with service-oriented architectures for the past 12 years. My very first introduction to SaaS was an EXTREMELY complicated Java platform that ran primarily on WebLogic + Oracle DB on the back end, with Apache + Tomcat on the front end. Filled with Enterprise Java Beans (EJB), and just dozens of services. Their policy was very tight: NOTHING updates the DB directly without going through a service. No “manual” fixes or anything via SQL (the only company I’ve worked at with that kind of policy) – you had to write an app or something to interface with a service to fix issues. They stuck to it, from what I recall while I was there anyway; I admire them for that.

At one point I took my knowledge of the app stack and proposed a very new architecture for operational deployment. It was much more expensive, because this was before widespread use of VM technology or containers in general. We split the Tomcat tiers up for our larger customers into isolated pools (well over 200 new physical servers, which ran at under 5% CPU in general!). The code on all systems was the same, but we used the load balancer to route traffic for various paths to different sets of servers. To some extent this was for scaling, but the bigger problem this “solved” was something more simple operationally (and not addressed to date in the app) – logging. This app generated gobs of logging from tons of different subsystems (all of it going to a centralized structure on each system), making it very difficult to see what log events belonged to what subsystem. It could take hours to trace transactions through the system. Something as simple as better logging, which developers ignored forever, we were able to address by splitting things out. The project started out small scale but ballooned quickly as other people piled in. Approvals came fast and hard for everything. My manager said “aim for the sky because they will reject some things”. I aimed for the sky and got everything I asked for (several million $ worth). I believe they eventually moved to a VM model a few years after I left. We tried to get the developers to fix the code; it never happened, so we did what we had to do to make things more manageable. I recall most everyone’s gleeful reaction the first time they started using the new systems. I know it’s hard to believe – you had to be there to see what a mess it was.
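To make the routing idea concrete, this is the kind of thing I mean, sketched in haproxy syntax purely because it’s easy to show in text (the real setup was on a commercial load balancer, and the paths, pool names and addresses here are all invented):

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend www
        bind *:80
        # same code everywhere; only the routing differs per URL path
        acl is_billing  path_beg /billing
        acl is_reports  path_beg /reports
        use_backend pool_billing if is_billing
        use_backend pool_reports if is_reports
        default_backend pool_general

    backend pool_billing
        server billing01 10.0.1.11:8080 check
        server billing02 10.0.1.12:8080 check

    backend pool_reports
        server reports01 10.0.2.11:8080 check

    backend pool_general
        server general01 10.0.3.11:8080 check
        server general02 10.0.3.12:8080 check

The win was operational isolation (and per-subsystem logging) without touching the application code at all.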

Though the app itself was pretty terrible. I remember two members of my team quit within a month of each other, and both said something along the lines of “we’ve been at the company 6-9 months and still don’t understand how the application works” (and their job, like mine, was in part supporting production issues; I was as close to an expert in the operation of that application as one could get, and it wasn’t easy). The data flows of this application were a nightmare. It was my first experience working in SaaS, so as far as I knew it was “normal”. But if I were exposed to that today I would run away screaming. So. many. outages. (bad code, and incredibly over-designed) I remember one developer saying “why weren’t you in our planning meeting last year when we were building this stuff?” I said back something like “I was too busy pulling 90 hour weeks just keeping the application running, I had no time for anything else”. I freely admit I burned out hard core at that company; it took me more than three years to recover. I don’t regret it – it was a good experience, I learned a lot, I had some fun. But it cost me a lot as well. I would not do it again at this point in my career, but if I had the ability to talk to my 2003 self I would tell myself to do it.

My first exposure to “micro services” was roughly 2007 at another SaaS company; these services specifically were built with Ruby on Rails of all things. There were a few different ways to approach deploying them. By this time I had started using VMware ESX (my first production use of VMware was GSX 3.0 in 2004, in a limited scope, at the previous company I referred to).

Perhaps the most common way would have been just to have one apache instance with the various applications inside of it – keep it simple. Another approach might have been to leverage VMware in a larger scope and build VMs for each micro service (each one had a different code base in Subversion; it wasn’t a bunch of services in a single code base). I took a different approach though, an approach I thought was better (at the time anyway – I still think it was a good choice). I decided to deploy each service on its own apache instance (each listening on a different port) on a single OS image (CentOS or Fedora at the time) running on physical hardware. We had a few “web” servers each running probably 15 apache instances with custom configurations managed by CFengine. The “micro services” talked to each other through an F5 BigIP load balancer. We had other services on these boxes as well – the company had a mod_perl based application stack, and another Tomcat-based application, and these all ran on the same set of servers.
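Mechanically it was very simple – no orchestration, just one config per service. Roughly like this (paths, ports and names are invented for illustration; CFengine templated out the real thing):

    # each service gets its own stripped-down httpd config with its own
    # Listen port, PidFile and log files, e.g. /etc/httpd/service-foo.conf
    # containing "Listen 8081", "PidFile /var/run/httpd-foo.pid", and so on.
    # Each instance is then started against its own config:
    /usr/sbin/httpd -f /etc/httpd/service-foo.conf -k start
    /usr/sbin/httpd -f /etc/httpd/service-bar.conf -k start

    # the load balancer pool for "foo" points at port 8081 on every web
    # server, the pool for "bar" at 8082, and so on.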

A common theme here for me: twelve years of working with service-oriented architectures, and eight years of working with “micro services”, and I’ve never needed special sauce to manage them.

Up until now, we have used entire operating systems to deploy our applications, with all of the security footprint that they entail, rather than the absolute minimal thing which we could deploy. Containers allow you to expose a very minimal application, with only the ports you need, which can even be as small as a single static binary.

This point seems kind of pointless to me. Operating systems are there to provide the foundation of the application. I like the approach of trying to keep things common – that is, the same operating system across as many of the components as possible – keeping in mind there are far more systems involved than just the servers that “run the application”. While minimal exposure is a nice thing to have, at the end of the day it really doesn’t matter much (it doesn’t noticeably, or in most cases measurably, impact the operation of the application), whereas a common OS does improve manageability.

Up until now, we have been fiddling with machines after they went live, either using “configuration management” tools or by redeploying an application to the same machine multiple times. Since containers are scaled up and down by orchestration frameworks, only immutable images are started, and running machines are never reused, removing potential points of failure.

I’ll preface this by saying I have never worked for an organization that regularly, or even semi-regularly, scaled their infrastructure up and down (even while working in Amazon cloud). Not only have they never really done it, they’ve never really needed to. I’m sure there are some workloads that can benefit from this, but I’m also sure the number is very small. For most, you define a decent amount of headroom for your application to burst into and you let it go, and increase it as required (if required) as time goes on, with good monitoring tools.

I’ll also say that since I led the technical effort behind moving my current organization out of Amazon cloud in late 2011 (that is what I was hired to do; I was not going to work for another company that used them on a regular basis – per the earlier point we actually intended to auto scale up and down, but at the end of the day it didn’t happen), we have not had to rebuild a VM, ever. Well over three years now without ever having had to rebuild a VM (there is one exception, where we retired one of our many test environments only to need it again a few months later – NOTHING like our experience with public cloud though). So yeah, the lifetimes of our systems are measured in years, not in hours, days, or weeks. Reliability is about as good as you can get in my opinion (again, the record speaks for itself). We’ve had just two sudden hardware failures causing VMs to fail in the past three and a half years. In both cases VMware High Availability automatically restarted the affected VMs on other hosts within a minute, and HP’s automatic server recovery rebooted the hosts in question (in both cases the system boards had to be replaced).

Some people, when thinking of public cloud, say “oh, but how can we operate this better than Amazon, or Microsoft, etc.?”. I’m happy to admit now that I KNOW I can operate things better than Amazon, Microsoft, Google etc. I’ve demonstrated it for the past decade, and will continue to do so. Maybe I am unique, I am not sure (I don’t go out of my way to socialize with other people like me). There is a big caveat to that statement that, again, I’m happy to admit to. The “model” of many of the public cloud players is radically different from my model. The assumptions they make are not assumptions I make (and vice versa). With their model, in order to operate well you really have to design your app(s) to handle it; with my model you don’t. I freely admit my model would not be good for “web scale”, just like their model is not good for the scale of any company I have worked at for the past 12 years. Different approaches to solve similar issues.

Up until now, we have been using languages and frameworks that are largely designed for single applications on a single machine. The equivalent of Rails’ routes for service-oriented architectures hasn’t really existed before. Now Kubernetes and Compose allow you to specify topologies that cross services.

I’ll mostly have to skip this one as it seems very code related, I don’t see how languages and frameworks have any bearing on underlying infrastructure.

Up until now, we’ve been deploying heavy-weight virtualized servers in sizes that AWS provides. We couldn’t say “I want 0.1 of a CPU and 200MB of RAM”. We’ve been wasting both virtualization overhead as well as using more resources than our applications need. Containers can be deployed with much smaller requirements, and do a better job of sharing.

I haven’t used AWS in years, in part because I believe they have a broken model, something I have addressed many times in the past. I got upset with HP when they launched their public cloud with a similar model. I believe I understand why they do things this way (because doing it “right” at “large scale” is really complicated).

So this point is kind of moot. I mean, people have been able to share CPU resources across VMs for well over a decade at this point (something that isn’t possible with all of the major public cloud providers). I also share memory to an extent (this is handled transparently by the hypervisor). There is certainly overhead associated with a VM, and with a “full” operating system image, but that is really the price you pay for the flexibility and manageability that those systems offer. It’s a price I’m willing to pay in a heartbeat, because I know how to run systems well.

Up until now, we’ve been deploying applications and services using multi-user operating systems. Unix was built to have dozens of users running on it simultaneously, sharing binaries and databases and filesystems and services. This is a complete mismatch for what we do when we build web services. Again, containers can hold just simple binaries instead of entire OSes, which results in a lot less to think about in your application or service.

(Side note: check the container host operating system – yeah, the one running all of the native processes in the same kernel – yes, a multi-user operating system running dozens of services on it simultaneously. A container is just a veil.)

This is another somewhat moot point. Having “a lot less to think about”, certainly from an OS perspective, to me makes things more complicated. If your systems are so customized that each one is different, that makes life more difficult. For me, I can count on a common set of services and functionality being available on EVERY system. And yes, I even run local postfix services on EVERY system (oh, sorry, that is some overhead). To be clear, postfix is there as a relay for mail, which is then forwarded to a farm of load balanced utility servers, which then forward on to external SMTP services. This is so I can do something as simple as “cat some file | mail my.email.address” and have it work.

Now we do limit what services run: e.g. Chef (current case) or CFengine (prior companies) only runs our core services, and extra things that I never use are turned off. Some services are rare or special – like FTP, for example. I do have a couple of use cases for running an FTP server still, and in those cases FTP services only run on the systems that need it. And from an application standpoint not all apps run on all app servers. But this kind of stuff is pretty obvious.
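As a trivial sketch of what I mean (service names made up, and obviously not our actual cookbooks), the Chef side of “core services on, unused stuff off” is about this simple:

    # base recipe applied to every node
    service 'postfix' do            # local MTA relay on every box
      action [:enable, :start]
    end

    # things we never use get disabled everywhere
    # (assuming the package is even installed on the node in question)
    %w(vsftpd cups avahi-daemon).each do |svc|
      service svc do
        action [:disable, :stop]
      end
    end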

At the end of the day, having these extra services provides convenience not only to us but to the developers as well. Take postfix as an example. Developers came to me one day saying they were changing how they send email: instead of interfacing with some external provider via a web API, they would interface with their new provider via SMTP. So where do they direct mail to? My answer was simple – in all cases, in all environments, send mail to localhost and we’ll handle it from there. Sure, you can put custom logic in your application or custom configurations for each environment if you want to send directly to our utility servers, but we sure as hell don’t want you to try to send mail directly from the web servers to the external parties; that’s just bad practice (for anything resembling a serious application, assuming many servers supporting it and not just a single system running all services). The developers can easily track the progress of the mail as it arrives on the localhost MTA and is then immediately routed to the appropriate utility server farm (different farms for different environments due to network partitioning, to prevent QA systems for example from talking to production; also each network zone (3 major zones) has a unique outbound NAT address, which is handy in the event of needing IP filters or something).
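For what it’s worth, the per-host postfix side of that is tiny. A rough sketch (the relay hostname is made up, there is a different relayhost per environment/zone, and the exact parameters depend on the distro’s default main.cf):

    # /etc/postfix/main.cf on an app/web server, trimmed to the relevant bits
    # only accept mail submitted locally
    inet_interfaces = loopback-only
    mynetworks = 127.0.0.0/8 [::1]/128
    # VIP in front of the load balanced utility relay farm
    relayhost = [mail-relay.prod.internal]

Then “cat some file | mail my.email.address” just works on any box, and everything upstream of the utility servers is none of the application’s business.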

So again, worrying about these extra services is worrying about nothing in my opinion. My own experience says that the bulk of the problems with applications are code based – sometimes design, sometimes language, sometimes just bugs. Don’t be delusional and think that deploying containers will somehow magically make the code better and the application scale and be highly available. It’s addressing the wrong problem; it’s a distraction.

Don’t get me wrong though, I do believe containers have valid use cases, which I may cover in another post – this one is already pretty long. I do use some containers myself (not Docker). I do see value in providing an “integrated” experience (download one file and get everything – even though that has been a feature in virtualized environments for probably close to a decade with OVF as well). That is not a value to me, because as an experienced professional it is rare that something works properly “out of the box”, at least as far as applications go. Just look, for example, at how Linux distributions package applications: many have their own approach on where to put files, how to manage things, etc. That’s the simplest example I can give. But I totally get that for constrained environments it’s nice to be able to get up to speed quickly with a container. There are trade-offs, certainly, once you get to “walking” (thinking baby-step type stuff here).

There is a lot of value that operating systems and hypervisors and core services provide. There is overhead associated with them, certainly. In true hyper-scale setups this overhead is probably not warranted (Google-type scale). I will happily argue till I am somewhat blue in the face that 99.99% of organizations will never be at that scale, and trying to plan for that from the outset is a mistake (and I’d wager almost all that try fail, because they over-design or under-design), because there is not one size that fits all. You build to some reasonable level of scale; then, like anything, new requirements likely come in, and you re-evaluate, re-write, re-factor or whatever.

It’s 5AM now, I need to hit the gym.

March 4, 2015

Sign off ?

Filed under: Random Thought — Nate @ 9:59 am

So I apologize (again) for not posting much, and not replying to comments recently.

I suppose it’s obvious I haven’t posted in a long time. I have mentioned this many times before but there really isn’t much in tech that has gotten me excited in probably the past two years. I see new things and am just not interested anymore for whatever reason.

I have been spending some time with the 3PAR 7450 that I got late last year. It is a pretty cool box, but at the end of the day it’s the same 3PAR I’ve known for the past 8 years, just with SSDs and dedupe (which is what I wanted – I needed something I felt I could rely on for the business I work for; I have become very conservative when it comes to storage over the years).

That and there’s been a lot of cool stuff going on with me outside of tech so I am mostly excited about that and have been even less focused on tech recently.

I pushed myself harder than I thought possible for more than a decade striving to be the best that I could be in the industry and think I accomplished a lot (at one point last year a former boss of mine said they hired 9 people to do my job after I left that particular company. Other positions are/were similar, perhaps not as extreme.)

I am now pushing myself harder than I ever thought possible in basically everything BUT tech, in part to attempt to make up for sacrifices made over the previous decade. So I am learning new things, just not as much in technology and I don’t know how long this journey will take.

I can’t put into words how excited I am.

Interesting tech areas I have spent some time on in recent months that may get a blog post at some point include:

  • LogicMonitor – the most advanced and easiest to use dashboarding/graphing system I’ve ever come across. It does more than dashboards and graphs, but to date that is all I’ve used it for, and it pays for itself 5x over with that alone for me. I’ve spent a bunch of time porting my custom monitors over to it, including collecting more than 12,000 data points/minute from my 3PAR systems! I can’t say enough good things about this platform from the dashboard/graphing standpoint (since that is all I use it for right now).
  • ScaleArc – sophisticated database availability tool. I am using it for MySQL, though they support other DBs as well. Still in the very early stages of deployment.
  • HP StoreOnce – not sure I have much to write about this, since I only use it as a NAS and all of the logic is in my own scripts. But getting 33.6:1 reduction on 44TB of written user data is pretty sweet for me, and beats the HELL out of the ZFS system I was using for this before (maybe 5:1 reduction with ZFS).

So, this may be the last blog post for a while (or forever), I am not sure. For anyone out there still watching, thanks for reading over the years, thanks for the comments, and I wish you the best!

 

November 7, 2014

Two factor made easy

Filed under: Random Thought,Security — Nate @ 12:04 am

Sorry, I’ve been really hammered recently – I just spent the last two weeks in Atlanta doing a bunch of data center work (and the previous week or two planning for that trip); many nights I didn’t get back to the hotel until after 7AM. But I got most of it done... still have a lot more to do from remote though.

I know there have been some neat 3PAR announcements recently; I plan to try to cover those soon.

In the meantime, on to something new to me: two factor authentication. I recently went through preparations for PCI compliance, and among other things we needed two factor authentication on remote access. I had never set up nor used two factor before. I am aware of the common approach of using a keyfob or mobile app or something to generate random codes. It seemed kind of, I don’t know, not user friendly.

In advance of this I was reading a random Slashdot thread related to two factor, and someone pointed out the company Duo Security as one option. The PCI consultants I was working with had not used it and had proposed another (self-hosted) option which involved integrating our OpenLDAP with it, along with RADIUS and MySQL and a mobile app or keyfob with codes – and well, it all just seemed really complicated (compounded by the fact that we needed to get something deployed in about a week). I especially did not like the having-to-type-in-a-code bit. I mean, it wasn’t long before that I got a support request from a non-technical user trying to login to our VPN – she would login and the website would prompt her to download and install the software. She would download the software (but not install it) and think it wasn’t working – then try again (download and not install). I wanted something simpler.

So enter Duo Security, a SaaS platform for two factor authentication that integrates with quite a few back end things, including lots of SSL and IPSec VPNs (and pretty much anything that speaks RADIUS, which seems to be standard with two factor).

They tie it all up into a mobile app that runs on several major mobile platforms, both phone and tablet. The kicker for them is there are no codes. I haven’t personally seen any other two factor systems that are like this (I have only observed maybe a dozen or so; by no means am I an expert at this). The ease of use comes in two forms:

Inline self enrollment for our SSL VPN

Initial setup is very simple: once the user types their username and password to login to the SSL VPN (which is browser based of course), an iframe kicks in (how this magic works I do not know) and they are taken through a wizard that starts off looking something like this:

The Duo self-enrollment wizard – the first step is choosing your device type

No separate app, no out of band registration process.

By comparison (and what prompted me to write this now), I just went through a two factor registration process for another company (which requires it now) who uses something called Symantec Validation & ID Protection, which is also a mobile app. Someone had to call me on the phone, I told them my Credential ID and a security code, then I had to wait for the 2nd security code and told them that, and that registered my device with whatever they use. Compared to Duo this is a positively archaic solution.

Yet another service provider I interact with regularly recently launched (and is pestering me to sign up for) two factor authentication – they too use these old-fashioned codes. I’ve been hit with more two factor related things in the past month than in probably the past 5 years.


Sync your phone with Duo Security by scanning a QR code (I obscured the QR code a bit just in case it has sensitive info in it)

By contrast, self enrollment in Duo is simple, requires no interaction on my part, and users can enroll whenever they want. They can even register multiple devices on their own, and add/delete devices if they wish.

One of the times during testing I did have an issue scanning the QR code, which normally takes about 2 seconds on my phone. I was struggling with it for a minute or two, until I realized my mouse cursor was on top of it, which was blocking the scan from working. Maybe they could improve it by somehow cloaking the mouse cursor with JavaScript or something if it goes over the code, I don’t know.

Don’t have the mobile app? Duo can use those same old-fashioned key codes too (via their own or a 3rd party keyfob or mobile app), or they can send you an SMS message, or make a voice call to you (the prompt basically says hit any button on the touch tone phone to acknowledge the 2nd factor – of course that phone # has to be registered with them).

Simply press a button to acknowledge 2nd factor

The other easy part is there are of course no codes to transcribe from a device to the computer. If you are using the mobile app, upon login you get a push notification from the app (in my experience, more often than not this comes in less than 2 seconds after I try to login). The app doesn’t have to be running (it runs in the background even if you reboot your phone). I get a notification in Android (in my case) that looks like this:

Duo integrated nicely into Android

I obscured the IP address and the company name to try to keep this from being associated with the company I work for. If you have the app running in the foreground you see a full screen login request similar to the smaller one above. If for some reason you are not getting the push notification, you can tell the app to poll the Duo service for any pending notifications (I’ve only had to do that once so far).

The mobile app also has one of those number generator things, so you can use that in the event you don’t have a data connection on the phone. In the event the Duo service is offline, you have the option of disabling the 2nd factor automatically (the default), so them being down doesn’t stop you from getting access – or if you prefer ultra security you can tell the system to prevent any users from logging in if the 2nd factor is not available.

Normally I am not one for SaaS-type stuff – really the only exception is if the SaaS provides something that I can’t provide myself. In this case the simple two factor stuff, the self enrollment, and the ability to support SMS and phone voice calls (which about a half dozen of my users have opted to use) is not anything I could have set up in a short time frame anyway (our PCI consultants were not aware of any comparable solution – and they had not worked with Duo before).

Duo claims to be able to set up in just a few minutes – for me the reality was a little different. The instructions they had were only half of what I needed for our main SSL VPN; I had to resort to instructions from our VPN appliance maker to make up the difference (and even then I was really confused, until support explained it to me – their instructions were specifically for two factor on IOS devices, though they applied to my scenario as well). For us the requirement is that the VPN device talk to BOTH LDAP and RADIUS. LDAP stores the groups that users belong to, and those groups determine what level of network access they get. RADIUS is the 2nd factor (or, in the case of our IPSec VPN, the first factor too – more on that in a moment). In the end it took me probably 2-3 hours to figure it out; about half of that was wondering why I couldn’t login (because I hadn’t set up the VPN->LDAP link, so the authentication wasn’t getting my group info and I was not getting any network permissions).

So for our main SSL VPN, I had to configure a primary and a secondary authentication, and initially I just kept Duo in pass-through mode (only talking to them and not any other authentication source), because the SSL VPN was doing the password auth via LDAP.

When I went to hook up our IPSec VPN, that was a different configuration: it did not support dual auth of both LDAP and RADIUS, but it could do LDAP group lookups and password auth with RADIUS. So I put the Duo proxy in a more normal configuration, which meant I needed another RADIUS server integrated with our LDAP (it runs on the same VM as the Duo proxy, on a different port) that the Duo proxy could talk to (over localhost) in order to authenticate passwords. So the IPSec VPN sends a RADIUS request to the Duo proxy, which then sends that information to the other RADIUS server (integrated with LDAP) and to their SaaS platform, and gives a single response back to allow or deny the user.
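For anyone trying to picture that, here is roughly what the Duo Authentication Proxy config ends up looking like for the IPSec case. All of the keys, addresses and secrets below are made up, and the exact sections/options you need depend on your VPN – check Duo’s docs rather than trusting my memory:

    ; the IPSec VPN appliance sends its RADIUS requests here
    [radius_server_auto]
    ikey=DIXXXXXXXXXXXXXXXXXXXX
    skey=(integration secret from the Duo admin portal)
    api_host=api-XXXXXXXX.duosecurity.com
    client=radius_client
    port=1812
    radius_ip_1=10.10.10.10
    radius_secret_1=another-long-secret
    ; let users in if the Duo service itself is unreachable (the default)
    failmode=safe

    ; password ("primary") auth is handed to a local RADIUS server on the
    ; same VM, which is integrated with our LDAP
    [radius_client]
    host=127.0.0.1
    port=18120
    secret=a-long-random-secret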

At the end of the day the SSL VPN ends up authenticating the user’s password twice (once via LDAP, once via RADIUS), but other than being redundant there is no harm.

Here is what the basic architecture looks like. This graphic is uglier than my official one, since I wanted to hide some of the details, but you can get the gist of it:


Two factor authentication for SSL, IPSec and SSH with redundancy

The SSL VPN supported redundant authentication schemes, so if one Duo proxy was down it would fail back to another one; the problem was that the timeout was too long – it would take upwards of 3 minutes to login (and you are in danger of the login timing out). So I set up a pair of Duo proxies and am load balancing between them with a layer 7 health check. If a failure occurs there is no delay in login, and it just works better.

As the image shows, I have integrated SSH logins with Duo as well in a couple of cases. There is no inline pretty self enrollment there, but if you happen not to be enrolled, the two factor process will spit out a URL to put into your browser upon first login to the SSH host, to enroll in two factor.
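For the SSH piece it is the standard Duo Unix integration – something along these lines (paths and keys are from memory, so verify against their docs; there is also a PAM module if you prefer that route):

    ; /etc/duo/login_duo.conf
    [duo]
    ikey = DIXXXXXXXXXXXXXXXXXXXX
    skey = (integration secret from the Duo admin portal)
    host = api-XXXXXXXX.duosecurity.com
    failmode = safe

    # /etc/ssh/sshd_config -- force interactive logins through login_duo
    ForceCommand /usr/sbin/login_duo
    PermitTunnel no
    AllowTcpForwarding no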

I deployed the setup to roughly 120 users a few weeks ago, and within a few days roughly 50-60 users had signed up. Internal IT said there were zero – count ’em, zero – help desk tickets related to the new system; it was that easy and functional to use. My biggest concern going into this whole project was the tight timeline and really no time for any sort of training. Duo Security made that possible (even without those timelines I still would have preferred this solution – or at least this type of solution, assuming there is something else similar on the market; I am not aware of any).

My only support tickets to date with them were for two users who needed to re-register their devices (because they got new devices). Currently we are on the cheaper of the two plans, which does not allow self management of devices. So I just login to the Duo admin portal, delete their phone, and they can re-enroll at their leisure.

Duo’s plans start as low as $1/user/month. They have a $3/user/month enterprise package which gives more features. They also have an API package for service providers and stuff which I think is $3/user/year (with a minimum number of users).

I am not affiliated with Duo in any way, not compensated by them, not bribed, not given any fancy discounts... but given that I have written brief emails to the two companies that recently deployed two factor, I thought I would write this so I could point them and others to my story here, for more insight on a better way to do two factor authentication.

August 19, 2014

Sprint screwing their subscribers again

Filed under: Random Thought — Tags: — Nate @ 12:15 pm

As a former Sprint customer for more than a decade, I thought this was interesting news.

My last post about Sprint was appropriately titled “Can Sprint do anything else to drive me away as a customer“. I left Sprint less because I did not like them/their service/etc. and more because I wanted to use the HP Pre 3, which was GSM, which meant AT&T (technically I could have used T-Mobile, but the Pre 3 didn’t support all of T-Mobile’s 3G frequencies, which meant degraded service coverage). So I was leaving Sprint regardless, but they certainly didn’t say or do anything that made me want to second guess that decision.

Anyway, today Sprint announces a big new fancy family plan that is better than the competition.

Except there is one glaring problem with this plan

[..]you’ll have to sign-up between Aug. 22 and Sept. 30, and current subscribers cannot apply.

Yeah, Sprint loves their customers.

On that note I thought this comment was quite interesting on El Reg:

[..]They combine Verizon-level arrogance with truly breath-taking incompetence into one slimy package. Their network stinks, it’s the slowest of the Big Four (and not by a small margin, either), their customer service makes Comcast look good[..]

 

August 16, 2014

Blog spam stats

Filed under: Random Thought — Nate @ 9:15 am

I just upgraded my Akismet plugin for the first time in a long time, and this version gives me all sorts of fun stats about the spam that comes through here (they don’t count my posts as spam, but maybe they should consider that).

Anyway, the first one was somewhat startling to me – perhaps it shouldn’t have been, but it was. I had to go back and look up when I told WordPress to close comments on posts older than 90 days (that was done entirely to limit the impact of spam – see the sidebar, where I have a note about temporarily re-opening comments if you wish to comment on an older post).

Fortunately my apache logs go back far enough to show December 19, 2013 as when I did this. Behold the impact!

Impact of disabling comments on posts older than 90 days

The last 5 months of 2013 generated 97,055 spam comments, vs. the first 8 months (so far) of 2014, which have generated 6,360 (not even as many as August 2013 alone).

Next up is the all-time spam history, which only goes back to 2012 – I guess they were not collecting specific stats before that; I have certainly been a subscriber to the service for longer than that.

TechOpsGuys spam all time

I’ve never really managed spam here; I rarely look at what is being blocked since there is so much of it (even now).

June 19, 2014

My longest road trip to-date

Filed under: Random Thought — Tags: — Nate @ 11:37 am

On Tuesday I got back from the longest road trip I’ve ever personally driven, to date anyway.

Pictures, in case you’re interested – I managed to cut them down to roughly 600:

 

Long road trip June 2014 – 2,900 miles total

California

I decided to take the scenic route and went through Yosemite on the way over, specifically to see Glacier Point, a place I wasn’t aware of and did not visit on my last trip through Yosemite last year. I ended up leaving too late, so I managed to get to Glacier Point and take some good pictures, though by the time I was back on the normal route it was pretty much too dark to take pictures of anything else in Yosemite. I sped over towards Tonopah, NV for my first night’s stay before heading to Vegas the next day. That was a fun route – at least once I got near the Nevada border at that time of night, I didn’t see anyone on the road for a good 30-40 or more miles (I had to slow down on some areas of the road, I was getting too much air! literally!). Though I did encounter some wildlife playing in the road; fortunately I managed to avoid casualties.

Las Vegas area

I took a ferry tour on Lake Mead, and that was pretty neat (I was going to say cool, but damn was it hot as hell there – my phone claimed 120 degrees from its ambient temp sensor, the car said it was 100). That ferry is the largest boat on the lake by far, and there weren’t many people on that particular tour that day – maybe 40 or so out of the probably 250-300 it can hold. I was surprised, given the gajillions of gallons of water right there, that the surrounding area was still so dry and barren, so the pictures I took weren’t as good as I thought they might have been otherwise.

I went to the Hoover Dam for a few minutes at least. I couldn’t go inside, as I had my laptop bag with me (I wasn’t checked into the hotel yet) and they wouldn’t let me in with the bag, and I wasn’t going to leave it in my car!

HP Discover

(you can see all of my Discover related posts here)

A decent chunk of the trip was spent in Las Vegas at HP Discover, where I am grateful to the wonderful folks over there who really made the time quite pleasant.

I probably wouldn’t attend an event like Discover, even though I know people at HP, if it weren’t for the more personalized experience that we got. I don’t like to wander around show floors and go into fixed sessions; I have never gotten anything out of that sort of thing.

Being able to talk in a somewhat more private setting, in a room on the show floor with various groups, was helpful. I didn’t learn too many new things, but was able to confirm several ideas I already had in my head.

I did meet David Scott, head of HP Storage, for the first time, and ran into him again at the big HP party, where he came over and chatted with Calvin Zito and myself for a good 30 minutes. He’s quite a guy; I was very impressed. I thought it was funny how he poked fun at the storage startups during the 3PAR announcements. It was very interesting to hear his thoughts on some topics. Apparently he reads most/all of my blog posts, and my comments on The Register too.

We also went up on the High Roller at night, which was nice, though I couldn’t take any good pictures – it was too dark and most things just ended up blurry.

All in all it was a good time, met some cool people, had some good discussions.

Arizona

I was in the neighborhood, so I decided to check out Arizona again, maybe for the last time. I was there a couple of times in the past to visit a friend who lived in the Tucson area, but he moved away early this year. I did plan to visit Sedona the last time I was in AZ, but decided to skip it in favor of the NFL playoffs. So I went to AZ again, in part to visit Sedona, which I had heard was pretty.

Part of the expected route to Sedona was closed off due to the recent fire(s), so I had to take a longer way around.

I also decided to visit the Grand Canyon (north end), and was expecting to visit the south end the same day, but food poisoning hit me pretty hard right about the time I got to the north end, so I was only there about 45 minutes and then had to go straight back to the hotel (~200 miles away). I still managed to get some good pictures though. There is a little trail that goes out to the edge there, though for the most part it had no hand rails – pretty scary to me anyway, being so close to a very big drop-off.

The food poisoning settled down by Monday morning, and after my company asked me to extend my stay to support a big launch (which fortunately turned out to be nothing), I was able to get out and about and visit more places before heading back early Tuesday morning. I went through Vegas again and made a couple of pit stops before making the long trek back home.

It was a pretty productive trip; I got quite a bit accomplished, I suppose. One thing I wanted to do was get a picture of my car next to a “Welcome to Sedona” sign to send to one of my former bosses. There was a “secret” project at that company to move out of a public cloud, and it was so controversial that my boss gave it the code name Sedona so we wouldn’t upset people in the earlier days of the project. So I sent him that pic and he liked it 🙂

Car’s trip meter – need some color on this blog (yes, that is almost 60 hours of driving over 10 days)

One concern I had on my trip is that my car has a ticking time bomb waiting for the engine to explode. I’ve been planning on getting that fixed the next time I am in Seattle; I think I am still safe for the time being given the mileage. The dealership closest to me is really bad (and I complained loudly about them to Nissan), so I will not go there, and the next closest is pretty far away – the repair is a 4-5 hour operation and I don’t want to stick around. Besides, I really love the service department at the dealership where I bought my car, and I’ll be back in that area soon enough anyway (for a visit).

 

May 14, 2014

Hooterpalooza 2014

Filed under: Events,Random Thought — Tags: — Nate @ 7:18 pm

(if you prefer you can skip my review and jump straight to the pictures, usual disclaimers apply)

This isn’t directly related to tech but I wanted to write about it a bit, since it was quite a blast. I just went by myself though I made a few new friends.

I’ll apologize now for any misspellings of names; I didn’t get anything written down, so I just had to wing it.


I learned about it a couple of weeks ago, though this was, I believe, their 8th annual event. I purchased a VIP ticket ($100), which included close-to-stage seating as well as a backstage pass (and backstage was outdoors in 95 degree heat!). The venue was the Saddle Rack in Fremont, CA. The staff there were very friendly and quick to serve drinks, of which I had many.

It had representatives from the four bay area Hooters locations, 31 Hooters girls in all; I was told last year there were quite a few more. There were a handful of judges; the only ones I remember were a couple of radio DJs from 107.7 The Bone.

I have never been to this kind of event before and I wasn’t sure what to expect, but my expectations were exceeded – it ended around 9:30PM and was packed with entertainment.

One of the hosts was Amanda, I think (someone behind me kept yelling her name anyway); she was quite good as well.

HooterPalooza hostess

Madman’s Lullaby, a local band (from Campbell it seems), played for quite a while, and I was very impressed with their talent (I’ve never followed local bands before). They had a very polished performance – by far the best live performance I have seen/heard in a club setting (granted I haven’t seen many; I usually avoid places with live music as it is often too loud – it wasn’t in this case). I purchased two of their CDs (professionally made with case, shrink wrap, etc. – no CD-R stuff here); they recently got signed by a record label (Kivel Records). The album is called Unhinged.

Dave Friday – Lead Vocals for Madman’s Lullaby

Madman’s Lullaby performing at Hooterpalooza 2014

I had my phone, and later went and got my real camera. The lighting in the place was good for watching in person, but made it difficult to take pictures (without flash); most of them were washed out by the bright spotlight. Autofocus was also very slow due to the low surrounding lighting. Video recording was more successful, and I was able to take snapshots from the video frames.

I’ll put most of my pictures here if you want to see more in depth coverage. Here is the video of the top five contestants.

I live and work in San Bruno, CA – and the Hooters here is roughly two blocks from my apartment which is convenient. So of course I wanted the San Bruno girls to win.


Surrender the booty winner

During intermission there was a Surrender the Booty contest, which was very entertaining, and fortunately a San Bruno Hooters girl won it – so congrats to Dominique.

Winners of Hooterpalooza 2014

HooterPalooza - Top 3

From right to left:

  1. First place went to Lexi from Hooters of Dublin, CA (?? not sure – the voice was difficult to understand)
  2. Second place went to Ariana from Hooters of San Bruno, CA
  3. Third place went to Brittney from Hooters of Dublin, CA

(Fifth place went to San Bruno as well)

For sure the most fun I’ve had (in the bay area) since I moved here almost three years ago. Looking forward to next year’s event!

Entertaining yet accurate video on cloud

Filed under: Random Thought — Tags: — Nate @ 9:38 am

A co-worker pointed this video out to me. It’s from a person at Microsoft Research; he starts out by saying the views are his own and do not represent Microsoft.

Cloud comes up at 5:36 into the video. The whole video is good (30 min), everything is spot on, and he manages to do it in a very entertaining way.

A slide from James Mickens’ presentation

He is very entertaining throughout and does a good job of explaining some of the horrors around cloud services. Myself, I still literally suffer from PTSD from the few years I spent with the Amazon cloud. That is not an exaggeration, not a joke – it is real.

Sorry for not having posted recently – I have seen nothing that has gotten me interested enough to post about. Tech has been pretty boring. I did attend HOOTERPALOOZA 2014 last night though, and that was by far the best time I’ve had in the bay area since I returned to California almost 3 years ago. I will be writing a review of that, along with pictures and video, soon – it will take a day or two to sort through everything.

The above video was so great though I wanted to post it here.

February 28, 2014

From WebOS to Android: 60 days in

Filed under: Random Thought — Tags: , , , — Nate @ 8:22 pm

About a month ago I wrote about my experience of the first 30 days of switching from the WebOS ecosystem to the Android ecosystem – specifically, from the never-officially-released HP Pre3 to a Samsung Galaxy Note 3.

There were a few outstanding issues at the time, and I just wanted to write/rant a little bit about one of them.

Wireless Charging

Inductive charging technology has been with the WebOS platform since day one, I believe (2009). I had become accustomed to using it, and any future phone would really need to have it for me to feel satisfied. Long ago it moved from the “nice to have” category to “cannot live without it without much pain”. Fortunately some other folks have picked up on wireless charging over recent years, though sadly it’s still far from universal.

One of the reasons I liked the Note 3 was it was going to get(and did get) official wireless charging from Samsung. I suppose that is where my happiness came to an end.

I suppose it is semi obvious I wouldn’t be writing about it if my experience was flawless 🙂

Samsung charging accessories

What seems like a month ago now, I went to my local Frys and picked up the one wireless charging back cover that I liked for the Note 3, along with a Samsung charging base station. I didn’t want to risk generating an unstable magnetic field in my bedroom and a rip in the space-time continuum by buying a second or third rate wireless charger.

There are other back cover(s) available, but the other one(s) I saw also included a wrap-around front cover, which I did not want. This cover looks identical to the stock cover (same color even, and seems like the same size as well, though I could be wrong – my perception is far from precise).

The Note 3 is a big phone, and it is fairly heavy too (slightly heavier than the HP Pre3) in its stock configuration. With the regular back cover it was fine; with the new back cover the word brick comes to mind. I mean, it is a stark difference – I would say at least 25% heavier than stock. There are no specs that I can find online or on the packaging that talk about the weight of the cover, but it’s heavy. I have gotten used to that heft over the weeks though. The HP Pre3 (and some of the WebOS phones before it, I believe, with the specific exception at least of the original Pre, which I owned as well) came with charging covers built in, so I never had a with/without comparison to make at the time.

Anyway, so I’m past the heft of the new back cover (though compared to a co-worker’s HTC One with a fancier back cover, his phone I think is heavier than mine even though it is smaller – he has a big cover on it though).

Charging experience

UPDATE 2014: after a month of frustration I finally figured out the solution to this problem. I had to remove the back cover, place it face down on a table, and compress it before putting it back on the phone. The connection from the cover to the phone wasn’t good enough. Since I started doing this whenever I remove the back cover (rare), I haven’t had any issues with the phone not charging.

The next problem came with charging on the pad: it was spotty. There is a green light on the pad that is supposed to tell you when the pad is mated with the phone and is charging. Don’t believe it, because it lies to me often. Most of the time it would charge fine, other times it would not. In my earlier days (before I learned that the green light lies to me) I tried just leaving the phone on the pad overnight with the green light on, and woke up the next day with the battery at 10%.

The phone does indicate when it is charging wirelessly. Many times (including right now, which prompted me to write this) the phone just refuses to sit still and charge wirelessly. It will go in and out of charge mode every few seconds, then eventually it seems to give up and does not charge at all, unless of course I hook it to a USB cable. I don’t understand how it could give up like that; it doesn’t make any sense to me unless there is a software component – but how could the software component refuse electricity? I don’t know.

I have spent literally 10 minutes trying every possible position on the pad, with the phone still refusing to charge. Then other times it works 100% of the time for a day or so.

So I thought, hey, maybe it’s the crappy Samsung pad. I had read and heard some good things about the Tylt Vu – specifically, they claim to have a better charging area, meaning you can have the phone at pretty much any angle and it will charge. They have wide compatibility, but did not specifically mention the Note 3 at the time (I assume because the charging covers for the Note 3 are still new).

So I ordered two Vus and tried my phone on the first one – it did not charge. I tried again for 5 minutes or so in every possible position and it would not charge. Took it to the Samsung pad, and I believe it would not charge there either. I filed a support ticket with Tylt to see if they had any ideas; meanwhile the Samsung pad started working again with the phone. It charged all night. I got up the next day and the battery was full – I played some games for a few minutes and the battery drained to ~93% – took the phone to the Vu and it would not charge. Took the phone back to the Samsung charger and it would not charge.

Rinse and repeat several times... eventually I got both Vus to charge my phone, though it is still sporadic. Tylt was going to replace the Vu, but I don’t think it’s the Vu’s fault. Samsung support wasn’t very helpful. I suppose it could be the back cover, but I mean, how complex can that be? I’m suspecting more of a design flaw, or perhaps a software problem preventing the charging from working. I don’t know. All three chargers have semi-sporadic charging results, so I suppose I can rule the chargers out as the cause of the problem.

Next up..

Android Daydream

One of the long time cool features of the WebOS devices is a feature called Exhibition mode. Basically it means that when the phone is charging it can launch a screen saver of sorts – the default is a clock, but it can do a photo slide show as well as some other apps. The HP Touchpad took this to the next level and used a form of NFC to uniquely identify charging stations, so the device could launch a different mode depending on what station it is charging on.

I use this a lot with my Touchpads still; they make great digital picture frames – just sit them in the charger and the slide show fires right up. If I want to use one I just pick it up, no wires, and off I go.

Android has something similar called Daydream. However, a flaw in either Android or Samsung’s code prevents it from working correctly. When Daydream is running, the configured application loads (which in my case is a slide show of sorts), and while the battery charges the slide show runs like it should.

The problem comes when the battery gets full – the OS kicks Daydream offline, brings back the home screen, and shows a notification that the battery is full and to disconnect the charger. The wireless charger stops charging for a minute or so, then the charging kicks in again, Daydream fires up again for perhaps a minute, then is booted off again… rinse and repeat.

It gets worse though – if I want to use Daydream I have to turn it on during the day and turn it off before I go to sleep. If Daydream is in use at night, I hit the power button to turn off the screen before going to sleep. Then guess what – when the battery is full, the screen lights up and shows that same stupid battery-is-full message (and the screen does not turn off again). Without Daydream the device turns the screen off automatically and it stays off until I turn it on or remove the phone from the charging pad.

Stupid – I would have thought these were basic things that would have been solved a while ago.

The only problem I really EVER had with wireless charging on WebOS was with the HP Pre3 and the original wireless pucks, as the base stations were called. The design of the Pre3 is slightly different, so it doesn’t fit the older charging stations precisely; even with the built-in magnet to help align the phone to the charger, it sometimes gets out of alignment and goes into a charging/beeping loop until corrected. Understandable, since they were not designed for each other. HP was going to release a newer, significantly more sophisticated charging station for the Pre3 (which included wireless audio out too) but of course it never made it to market.

As far as I know, the WebOS phones never “stopped” charging when the battery was full – they just kept going. I realize this is not good for the battery, but I’ll live with having to replace the battery every year or so if it means the above stuff works right. In fact I never replaced a battery on a WebOS device in the roughly four years I used them.

Other thoughts

All in all I’m still pretty happy with the Note 3. My phone usage has gone up significantly – I can compare it to originally moving from a feature phone to a smart phone; I really did not use the Pre3 very much towards the end. The battery life is not up to my expectations. Video playback battery life is excellent (I think CNET recently rated the Note 3 at something like 14 hours), but drive that CPU a bunch and it will chew through the battery quickly – I could fairly easily burn through 30% in an hour at high usage. I haven’t used any new apps since my last blog post, and in fact other than the two games I mentioned that I do play, I haven’t touched any of the other games I had installed either. I have loaded the thing up with pictures though – easily 15,000. I also have all of my music on there, lots of video, and still have about 25GB available (96GB total).

I also edited the Superbowl down to a 19-minute video and have watched that tons of times on my phone (it looks amazing). There is another video – an episode of NFL Films Presents on the Superbowl – that I put on my phone too, and it also looks incredible (and the episode itself is just awesome). I purchased a pair of Braven bluetooth speakers (originally bought one, then got another) which can be paired to each other for stereo playback; they work quite well (and have NFC too).

My mobile data usage has been tiny though, since the bulk of my time is either at home or the office where I use wifi. With the HP Pre3 I kept wifi off most of the time because it would interfere with bluetooth. The phone claims that from Jan 21 – Feb 21 I used only 136MB of mobile data (I have a 5GB plan, mainly for travel using the phone’s mifi hotspot mode).

Anyway that’s enough for now.

February 2, 2014

Go Seahawks!

Filed under: Random Thought — Tags: , , , — Nate @ 8:47 pm

Go Seahawks!


I’m not one for sports really, though I did get interested in the NFL back when Seattle first went to the Superbowl in 2005/2006 (despite my father being pretty hard core into the 49ers and Broncos, growing up I never had any interest in football). My interest waned over the years as their performance waned. This year was just incredible though – I would never have imagined such a season or a Superbowl finish like that. Living in the Bay Area now I don’t get to see many of their games unless they happen to be playing the Raiders or 49ers. I am surprised (perhaps I shouldn’t be) by how many folks in the Bay Area really hate the Seahawks. Myself, I like many teams (mostly west coast teams – 49ers, Raiders, Chargers all inclusive – hell, even the Broncos).

The previous two Seahawks games were way too close for my own comfort – I like to see a commanding 10-point lead in any game, and I don’t like games won at the last second by a field goal or “one (or two) good play(s)”. I couldn’t have asked for anything more in this Superbowl, such a commanding destruction of the Broncos on both sides of the ball. To be totally honest I was prepared for the Seahawks to lose after the Broncos ripped the Patriots a new one two weeks ago (combined with the previous two Seahawks games being too close). Wow, I’m just totally blown away. I really don’t have words to describe how incredible a victory that was.

Congrats! I wish I was in Seattle to be at COWGIRLS tonight – I know it’s going to be a mad house!

Hell, I’m tempted to drive back up there for Cowgirls next Friday+Saturday – I will have to debate that with myself over the coming week.

One thing’s for sure – I’m going to have to invest in more Seahawks stuff; I have just two t-shirts that I bought many years ago.

Side note: speaking of those fancy Superbowl ads, I’ve never much cared for any of them. In fact this is the first Superbowl I can recall watching live – I prefer to watch things on at least a two-hour delay with Tivo so I can skip the ads (and halftime).

People don’t understand why I don’t like to watch it live (unless I’m at a bar – in this case I was at a friend of a friend’s house), just as much as I can’t understand why they have to watch it live – the results of the game do not change if you don’t see it live. I suppose if you’re betting or something in real time you need to be up to date on the stats, but I am not a betting person (even for no money – it’s just not my personality). I ended up sleeping through most of the NFC championship while it aired and watched it after it ended. Some folks claim they have to watch live because of social media – for me it’s not hard to just turn off my phone and not use the computer until it’s over. I’m also not much involved in social media to begin with (I don’t see that changing anytime soon – the more I see, the more I’m turned off by it, other than LinkedIn which I feel is good from a professional standpoint). [Update from 2/3/14: I just checked all of the sites I visit regularly as well as all of my RSS feeds and there’s no mention of who won the Superbowl, and nothing in any of my online chats either (mostly work related), so further evidence that my life is fairly isolated from sports in general.]

My favorite bar to watch games at in the Bay Area is Rookie’s Lodge down in San Jose (a 40-minute drive each way for me). My favorite bar ever to watch a game at is the Tilted Kilt – specifically the Tilted Kilt in Temecula, CA. They must’ve had a half dozen 100″+ screens (I’ve only been to that particular location once, a couple of years ago). Though I’m happy to go to any Tilted Kilt (unfortunately the closest one to the Bay Area is in Orange County – I go there whenever I visit my family down there). In Seattle my favorite bar for a game is Sport (there is an Internap data center in the same building, which is how I first came across the place). Speaking of the Tilted Kilt, I visited a Twin Peaks when I was in Phoenix on my recent trip – I had seen one of their places on Undercover Boss at one point. It was nice, with lots of TVs (at least a half dozen right in front of me at the bar) and good service, though the food menu was lacking compared to the Tilted Kilt, which had probably 4-5x more items to choose from.

I wasn’t about to go to a bar to watch this Superbowl (living in the Bay Area) – too many folks with negative energy towards the Seahawks (the Seahawks/New Orleans game was bad enough, and I was at a local bar for that). The group I was with tonight was very calm though (I don’t think there were any hard core fans – certainly no team jerseys or anything). Myself, I am an introvert, so I don’t yell and scream when plays happen; I’m typically silent during a game. I clap softly sometimes. I feel the blood pressure rise inside when big plays happen, but my nature is to suppress it outwardly (that happens really no matter how many Jack+Cokes I have).

