TechOpsGuys.com Diggin' technology every day

November 7, 2014

Two factor made easy

Filed under: Random Thought,Security — Nate @ 12:04 am

Sorry, I've been really hammered recently – I just spent the last two weeks in Atlanta doing a bunch of data center work (and the week or two before that planning for the trip), and many nights I didn't get back to the hotel until after 7AM. But I got most of it done.. still have a lot more to do from remote though.

I know there have been some neat 3PAR announcements recently; I plan to try to cover those soon.

In the meantime, on to something new to me: two-factor authentication. I recently went through preparations for PCI compliance, and among the requirements was two-factor authentication on remote access. I had never set up nor used two factor before. I was aware of the common approach of using a keyfob or mobile app to generate random codes, and it seemed kind of, I don't know, not user friendly.

In advance of this I was reading a random thread on Slashdot related to two factor, and someone pointed out the company Duo Security as one option. The PCI consultants I was working with had not used it and had proposed another (self-hosted) option which involved integrating our OpenLDAP with it, along with RADIUS and MySQL and a mobile app or keyfob with codes – it all just seemed really complicated (compounded by the fact that we needed to get something deployed in about a week). I especially did not like the typing-in-a-code bit. Not long before, I had gotten a support request from a non-technical user trying to log in to our VPN – she would log in and the website would prompt her to download & install the software. She would download the software (but not install it) and think it wasn't working – then try again (download and not install). I wanted something simpler.

So enter Duo Security, a SaaS platform for two-factor authentication that integrates with quite a few back-end systems, including lots of SSL and IPsec VPNs (and pretty much anything that speaks RADIUS, which seems to be the standard integration point for two factor).

They tie it all up in a mobile app that runs on several major mobile platforms, both phone and tablet. The kicker is that there are no codes. I haven't personally seen any other two-factor system like this (I have only observed maybe a dozen or so; by no means am I an expert at this). The ease of use comes in two forms:

Inline self enrollment for our SSL VPN

Initial setup is very simple: once the user types their username and password to log in to the SSL VPN (which is browser based, of course), an iframe kicks in (how this magic works I do not know) and they are taken through a wizard that starts off looking something like this:

Duo enrollment wizard: choose your device

No separate app, no out of band registration process.

By comparison (and what prompted me to write this now), I just went through a two-factor registration process with another company (which requires it now) that uses something called Symantec Validation & ID Protection, which is also a mobile app. Someone had to call me on the phone; I told them my Credential ID and a security code, then waited for the 2nd security code and read that one off too, and that registered my device with whatever they use. Compared to Duo this is a positively archaic solution.

Yet another service provider I interact with regularly recently launched (and is pestering me to sign up for) two-factor authentication – they too use these old-fashioned codes. I've been hit with more two-factor-related things in the past month than in probably the past 5 years.

Sync your phone with Duo Security by scanning a QR code (I obscured the QR code a bit just in case it contains sensitive info)

By contrast, self-enrollment in Duo is simple and requires no interaction on my part; users can enroll whenever they want. They can even register multiple devices on their own, and add or delete devices if they wish.

One time during testing I did have an issue scanning the QR code, which normally takes about 2 seconds on my phone. I was struggling with it for a minute or two until I realized my mouse cursor was on top of it, which was blocking the scan from working. Maybe they could improve that by hiding the mouse cursor with JavaScript or something when it goes over the code, I don't know.

Don't have the mobile app? Duo can use those same old-fashioned codes too (from their own or a 3rd-party keyfob, or the mobile app), or it can send you an SMS message or make a voice call (the prompt basically says to press any button on the touch-tone phone to acknowledge the 2nd factor – of course that phone number has to be registered with them).

Simply press a button to acknowledge 2nd factor

The other easy part is that there are of course no codes to transcribe from a device to the computer. If you are using the mobile app, upon login you get a push notification from the app (in my experience this arrives within 2 seconds of the login attempt more often than not). The app doesn't have to be running (it runs in the background, even after you reboot your phone). I get a notification in Android (in my case) that looks like this:

Duo integrated nicely into Android

I obscured the IP address and the company name just to keep this from being associated with the company I work for. If you have the app running in the foreground you see a full-screen login request similar to the smaller one above. If for some reason you are not getting the push notification, you can tell the app to poll the Duo service for any pending notifications (I have only had to do that once so far).

The mobile app also has one of those number-generator things, so you can use that in the event you don't have a data connection on the phone. In the event the Duo service itself is offline, you have the option of disabling the 2nd factor automatically (the default), so their being down doesn't stop you from getting access – or, if you prefer ultra security, you can tell the system to prevent any users from logging in when the 2nd factor is not available.

Normally I am not one for SaaS-type stuff – really the only exception is when the SaaS provides something that I can't provide myself. In this case the simple two-factor experience, the self-enrollment, and the ability to support SMS and voice calls (which about a half dozen of my users have opted to use) is not anything I could have set up in a short time frame anyway (our PCI consultants were not aware of any comparable solution – and they had not worked with Duo before).

Duo claims you can be set up in just a few minutes – for me the reality was a little different. The instructions they had covered only half of what I needed for our main SSL VPN; I had to resort to instructions from our VPN appliance maker to make up the difference (and even then I was really confused until support explained it to me – their instructions were specifically for two factor on IOS devices, though they applied to my scenario as well). For us the requirement is that the VPN device talk to BOTH LDAP and RADIUS. LDAP stores the groups that users belong to, and those groups determine what level of network access they get. RADIUS is the 2nd factor (or, in the case of our IPsec VPN, the first factor too – more on that in a moment). In the end it took me probably 2-3 hours to figure out, and about half of that was wondering why I couldn't log in (because I hadn't set up the VPN->LDAP link, so authentication wasn't getting my group info and I was not getting any network permissions).

So for our main SSL VPN I had to configure a primary and a secondary authentication source, and initially I just kept the Duo proxy in pass-through mode (talking only to Duo and not to any other authentication source), because the SSL VPN was doing the password auth itself via LDAP.

When I went to hook up our IPsec VPN, that was a different configuration: it did not support dual auth against both LDAP and RADIUS, though it could do LDAP group lookups and password auth with RADIUS. So I put the Duo proxy in a more normal configuration, which meant I needed another RADIUS server integrated with our LDAP (it runs on the same VM as the Duo proxy, on a different port) that the Duo proxy could talk to (over localhost) in order to authenticate passwords. So the IPsec VPN sends a RADIUS request to the Duo proxy, which passes the password check to the other RADIUS server (the LDAP-integrated one) and the 2nd-factor check to their SaaS platform, and gives a single allow-or-deny response back for the user.
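
To make that chain a little more concrete, here is a rough sketch of what the Duo Authentication Proxy config (authproxy.cfg) can look like for this kind of setup. I'm deliberately not posting my real config – the hostnames, ports, keys, and secrets below are all placeholders, and your sections will vary by VPN – so treat this as an illustration of the proxy-in-the-middle pattern rather than something to copy verbatim:

    ; authproxy.cfg - illustrative sketch only, all values are placeholders
    ; Where the proxy verifies passwords: the LDAP-integrated RADIUS
    ; server running on this same VM, on a different port.
    [radius_client]
    host=127.0.0.1
    port=18120
    secret=placeholder-local-secret

    ; What the IPsec VPN talks to. The proxy checks the password against
    ; the radius_client above, then does the 2nd factor via Duo's service.
    [radius_server_auto]
    ikey=DIXXXXXXXXXXXXXXXXXX
    skey=placeholder-secret-key
    api_host=api-XXXXXXXX.duosecurity.com
    radius_ip_1=10.0.0.5                ; the VPN appliance (placeholder)
    radius_secret_1=placeholder-vpn-secret
    client=radius_client
    port=1812
    failmode=safe                       ; fail open if Duo is unreachable

    ; (For the SSL VPN case above, where the VPN does the password auth
    ; itself via LDAP, the proxy can use a [duo_only_client] instead.)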

At the end of the day the SSL VPN ends up authenticating the user's password twice (once via LDAP, once via RADIUS), but other than being redundant there is no harm in that.

Here is what the basic architecture looks like. This graphic is uglier than my official one, since I wanted to hide some of the details, but you can get the gist of it:

Two factor authentication for SSL, IPSec and SSH with redundancy

The SSL VPN supports redundant authentication schemes, so if one Duo proxy was down it would fail over to the other one; the problem was that the timeout was too long – it could take upwards of 3 minutes to log in (and you were in danger of the login timing out). So I set up a pair of Duo proxies and am load balancing between them with a layer 7 health check. If a failure occurs there is no delay in login, and it just works better.
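
My load balancer's actual check is specific to that appliance, so here is just a hypothetical sketch in Python of what a layer 7 check means here: instead of merely probing whether the UDP port is open, it sends a genuine RFC 2865 Access-Request for a dedicated monitoring account and treats any well-formed RADIUS reply as healthy (even an Access-Reject proves the proxy is alive and processing packets end to end):

    import hashlib
    import os
    import socket
    import struct

    def radius_alive(host, secret, user, password, port=1812, timeout=3):
        # Build an RFC 2865 Access-Request: code 1, identifier 1,
        # a 16-byte random authenticator, then the attributes.
        authenticator = os.urandom(16)
        uname = user.encode()
        attrs = bytes([1, 2 + len(uname)]) + uname          # 1 = User-Name

        # Hide the password per RFC 2865 section 5.2 (MD5 XOR chaining).
        pwd = password.encode()
        if len(pwd) % 16:
            pwd += b"\x00" * (16 - len(pwd) % 16)
        hidden, prev = b"", authenticator
        for i in range(0, len(pwd), 16):
            digest = hashlib.md5(secret + prev).digest()
            block = bytes(a ^ b for a, b in zip(pwd[i:i + 16], digest))
            hidden += block
            prev = block
        attrs += bytes([2, 2 + len(hidden)]) + hidden       # 2 = User-Password

        header = struct.pack("!BBH", 1, 1, 20 + len(attrs)) # code 1 = Access-Request
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            sock.sendto(header + authenticator + attrs, (host, port))
            reply, _ = sock.recvfrom(4096)
            return len(reply) >= 20   # Accept or Reject both prove liveness
        except socket.timeout:
            return False
        finally:
            sock.close()

    # e.g.: radius_alive("10.0.0.21", b"placeholder-secret", "healthcheck", "x")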

As the image shows, I have integrated SSH logins with Duo as well in a couple of cases. There is no pretty inline self-enrollment there, but if you happen to not be enrolled, the two-factor process will spit out a URL to put into your browser upon first login to the SSH host so you can enroll.
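
Duo's stock SSH integration goes through PAM (pam_duo) or an sshd ForceCommand wrapper (login_duo). I won't claim this is exactly my setup, but a typical pam_duo hookup on a Linux box looks roughly like this – module stacks and paths vary by distro and version, and the keys/host are placeholders:

    # /etc/pam.d/sshd (fragment) - illustrative only
    # The usual module checks the password first, then pam_duo
    # performs the Duo push/phone/SMS second factor.
    auth  required  pam_unix.so
    auth  required  pam_duo.so

    ; /etc/duo/pam_duo.conf - placeholder values
    [duo]
    ikey = DIXXXXXXXXXXXXXXXXXX
    skey = placeholder-secret-key
    host = api-XXXXXXXX.duosecurity.com
    failmode = safe   ; fail open if Duo is unreachable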

I deployed the setup to roughly 120 users a few weeks ago, and within a few days roughly 50-60 users had signed up. Internal IT said there were zero – count 'em, zero – help desk tickets related to the new system; it was that easy and functional to use. My biggest concern going into this whole project was the tight timeline, with really no time for any sort of training. Duo Security made that possible (even without that timeline I still would have preferred this solution – or at least this type of solution; assuming there is something else similar on the market, I am not aware of it).

My only support tickets to date with them were for two users who needed to re-register their devices (because they got new devices). Currently we are on the cheaper of the two plans, which does not allow self-management of devices, so I just log in to the Duo admin portal and delete their phone, and they can re-enroll at their leisure.

Duo’s plans start as low as $1/user/month. They have a $3/user/month enterprise package which gives more features. They also have an API package for service providers and stuff which I think is $3/user/year (with a minimum number of users).

I am not affiliated with Duo in any way – not compensated by them, not bribed, not given any fancy discounts. But given that I have written brief emails to the two companies that recently deployed two factor, I thought I would write this up so I could point them and others to my story here, for more insight on a better way to do two-factor authentication.

January 13, 2014

BigCo Security: Fighting a war you cannot win

Filed under: Security — Nate @ 10:28 am

It has been somewhat interesting to watch how security vulnerabilities have evolved over the past twenty years or so that I've been working with computers. For the most part, in the early days security exploits were pretty harmless – maybe your company got hacked so someone could leverage its bandwidth and disk space for pirated software, or something like that.

The past several years, though, the rise of organized cyber crime and highly sophisticated attacks (even attacks from folks some may consider friendly) has been rather alarming. I do feel sorry for those in the security field, especially at bigger organizations, which by nature are bigger targets. They are (for the most part) fighting a war they simply cannot win. Sooner or later they will be breached, and one interesting stat I heard last year at a presentation given by the CTO of Trend Micro was that the average attacker has access to a network for 210 days before being detected.

Companies can spend millions to billions of dollars on equipment, training, and staffing to protect themselves, but it'll never be enough. Look no further than the NSA and Snowden – how much did he get away with again? The NSA admits they don't even know.

I wish the company that sponsored the event had published a video of this CTO presentation, as I thought it was the most interesting I had seen or heard in years. Here is another video, from another event he presented at – also quite good, though not as long as the presentation I saw.

Some details on a highly sophisticated successful attack executed against Korean banks targeting multiple platforms

The slide above shows a very large scale attack which had more than seventy custom malware packages built for it!

The recent highly sophisticated attacks against Target and Neiman Marcus are of course just the latest high-profile examples.

The security of SCADA systems has long been a problem as well.

Over 60,000 exposed control systems found online.

Researchers have found vulnerabilities in industrial control systems that they say grant full control of systems running energy, chemical and transportation systems.

Speaking of industrial control systems: going back to the Trend Micro presentation, they mentioned that they had purchased some similar equipment to do some testing with. Their first test involved a water pressure control station connected to the internet, where they just watched to see who tried to attack it. This was a real system (not connected to any water source or supporting anybody).

Trend Micro tests who attacks their water pressure control system

One of the interesting bits he noted was that although there were a large number of attacks from China, most of them were simply probing for information; they were not destructive. I don't remember who had the destructive attacks – I want to say Laos and the U.S., but I could be wrong. He said that since this test was so successful, they were planning (or perhaps already had started) to purchase several more of these and place them around the world for monitoring.

I've never been too deep into security; I can count on one hand the number of times I've had to deal with a compromised system over the past 15 years (the most recent was a couple of months ago). Taking really basic security precautions protects you against a very large number of threats by default – with that most recent attack I noted at least three best practices, any one of which would have prevented the attack from occurring, none of which would have had any impact on the system or application, and none of which were in place. At the end of the day, though, your best defense against a targeted attack is to not be a target to begin with. Obviously that is impossible for big organizations.

The recent DDoS attacks against gaming companies impacted the company I work for, I believe – not because we are a gaming company, but because we share the same ISP. The ISP responded quite well to the attacks in my opinion, and later wrote a letter to all customers describing them: an NTP amplification attack that exceeded 100Gbps in volume, the largest attack they had ever seen. It's the first DoS attack that has impacted things I operate, to my knowledge.

May 21, 2013

SHOCKER! Power grid vulnerable to cyberattack!

Filed under: Security — Nate @ 9:57 pm

Yeah, it shouldn't be news.. but I guess I am sort of glad it is making some sort of headline. I have written in the past about how I think the concept of a smart grid is terrible due to security concerns. I just have no confidence in today's security technology to properly secure such a system. If we can't properly secure our bank transactions (my main credit card was compromised for at least the 2nd or 3rd time this year, and I am careful), how can anyone expect to secure the grid?

Just saw a new post on Slashdot which points to a new report being released that covers how vulnerable our grid is to attack.

The report, released ahead of a House hearing on cybersecurity by Congressmen Edward Markey (D-Mass.) and Henry Waxman (D-Calif.), finds that cyberattacks are a daily occurrence, with one power company claiming it fights off 10,000 attempted intrusions each month.

[..]

Such attacks could cut power to large sections of the country and take months to repair.

Oh, how I miss the days of early cyber security, when the threat was little more than kids poking around and learning. These days there is really little defense against the organized military of the likes of China. Sigh.

If they want to get you, most likely they are going to get you.

I've had a discussion or two with a friend who works with industrial control systems, and the security on those is generally even worse than what I had heard about from the various breaches around the world.

I don't see any real value in the so-called smart grid – nothing remotely resembling gains that would offset the massive growth in the number of network access points connected to the grid.

It's probably already too late. All security is some form of obscurity at the end of the day, whether it is a password, encryption, or physical isolation. Obscuring the grid by reducing the number of network connections to it has got to provide some level of benefit…

December 2, 2011

New record holder for inefficient storage – VMware VSA

Filed under: Security — Nate @ 11:15 am

I came across this article last night and was honestly pretty shocked. It talks about the limitations of the new VMware Virtual Storage Appliance that was released alongside vSphere 5. I think it is the second VSA to receive full VMware certification, after the HP/Lefthand P4000.

The article states

[..]
Plus, this capacity will be limited by a 75% storage overhead requirement for RAID data protection. Thus, a VSA consisting of eight 2 TBs would have a raw capacity of 16 TB, but the 75% redundancy overhead would result in a maximum usable capacity of 4 TB.

VMware documentation cites high availability as the reason behind VSA’s capacity limitations: “The VSA cluster requires RAID10 virtual disks created from the physical disks, and the vSphere Storage Appliance uses RAID1 to maintain the VSA datastores’ replicas,” resulting in effective capacity of just 25% of the total physical hard disk capacity.
[..]
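
To spell out the arithmetic in that quote: eight 2 TB disks give 16 TB raw. RAID10 within the node mirrors every write, cutting usable capacity in half to 8 TB, and the VSA then keeps a RAID1 replica of each datastore on another node, halving it again to 4 TB usable – i.e. 25% of raw, which is where the 75% overhead figure comes from.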

That's pretty pathetic! Some folks bang on NetApp for being inefficient with space, and I've ragged on a couple of other vendors for the same, but this VSA sets a new standard. Well, there is this NEC system at 6%, though in NEC's case that was by choice. The current VSA architecture forces the low utilization on you whether you want it or not.

I don't doubt that VMware released the VSA “because they could”. I'm sure they designed it primarily for their field reps to show off the shared storage abilities of vSphere from laptops and the like (that was their main use of the Lefthand VSA when it first came out, at least). Given how crippled the VSA is (it doesn't stop at low utilization – see the article for more), I can't imagine anyone wanting to use it – at any price.

The HP Lefthand VSA seems like a much better approach – it's more flexible, has more fault-tolerance options, and appears to have an entry-level price about half that of the VMware VSA.

The only thing less efficient that I have come across is utilization in Amazon EC2, where disk utilization rates in the low single digits are very common due to the broken cookie-cutter design of the system.

September 2, 2011

EMC’s Server strategy: use our arrays?

Filed under: Security — Nate @ 8:13 am

I just read this from our friends at The Register, and I have one question after reading it:

Why?

Why would anyone want to use extremely premium CPU/memory resources on a high-end enterprise storage system to run virtual servers? What's the advantage? You could probably buy a mostly populated blade enclosure from almost anyone for the cost of a VMAX controller.

If EMC wants in on the server-based flash market they should just release some products of their own or go buy one of the suppliers out there.

If EMC wants to get in on the server business they should just do it; don't waste people's time on this kinda stuff. Stupid.

May 5, 2011

Sony Compromised by Apache bug?

Filed under: General,Security — Nate @ 10:26 am

Came across an article from a friend that talks about how Sony thinks they were compromised.

According to Spafford, security experts monitoring open Internet forums learned months ago that Sony was using outdated versions of the Apache Web server software, which “was unpatched and had no firewall installed.”

The firewall part is what gets me. Assuming of course these web servers were meant to be public, no firewall is going to protect you against this sort of thing, since firewalls protecting public web servers have holes opened explicitly for the web server – all web traffic is passed right through.

And I highly doubt those Apache web servers held confidential data, as the article implies; obviously that data was on back-end systems running databases of some sort.

Then there are people out there spouting stuff about PCI, saying the automated external scans should have detected that they were running outdated versions of software. In my experience such scans are really not worth much with Linux, primarily because they have no way to take into account patches that are backported to the operating system. I've had a few arguments with security-scanning vendors, trying to explain that a system is patched because the fix was backported, with them not being able to comprehend that because the major/minor version reported by the server had not changed.
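
To illustrate what I mean (the version numbers here are placeholders, not from a real audit): on a Red Hat-style system the version banner a remote scanner sees never changes when a fix is backported – the evidence lives in the package changelog, which the scanner never sees.

    # The version string alone looks "outdated" to a remote scanner:
    $ rpm -q httpd
    httpd-2.2.3-91.el5

    # But the backported security fixes show up in the package changelog:
    $ rpm -q --changelog httpd | grep -i CVE | head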

Then there was the company I worked for that had a web app that returned an HTTP 200 for pretty much everything, including things like 404s. This tripped every single alarm the scanners had, and they went nuts. And once again we had to explain that those Windows exploits aren't going to work against our Apache Tomcat systems running Linux.

IDS and IPS are overrated as well, unless you really have the staff to watch and manage them full time. In all the years I have worked at companies that deployed some sort of IDS (never IPS), I have seen it work exactly once, back in (I want to say) 2002: I saw a dramatic upsurge in some type of traffic on our Snort IDS from one particular host, and it turned out it had a virus. I worked at one company that was compromised at LEAST twice while I was there (on systems that weren't being properly managed), and of course the IDS never detected a thing. Then that company deployed (after I left) a higher-end hardware-based IPS, and when they put it inline on the network (in passive, not enforcing, mode) the IPS started dropping all SSL traffic for no discernible reason.

They aren't completely useless, though; they can help detect, and sometimes protect against, the more obvious types of attacks (SQL injection, etc.). But in the grand scheme of things, especially when dealing with customized applications (not off-the-shelf products like Exchange or Oracle), IDS/IPS and even firewalls provide only a tiny layer of additional security on top of good application design, good deployment practices (e.g. don't run as root; disable or remove subsystems that are not used, such as the management app in Tomcat; use encryption where possible), and a good authentication system for system-level access (e.g. SSH keys). With regard to web applications, a good load balancer is more than adequate to protect the vast majority of applications out there. It is “firewall-like” in that it only passes certain ports to the back-end systems, but (and for higher-traffic sites this is important) it vastly outperforms firewalls, which can be a massive bottleneck for front-end systems.

With regard to the company that was compromised at least twice: the intrusion was minor and limited to a single system. The compromise occurred because the engineer who installed the system put it outside the load balancers – it was an FTP server, or maybe a monitoring server, I forget. Because it needed to be accessed externally, the engineer figured, hey, let's just put it on the internet. Well, it sat there for a good year or two (never being patched in the meantime) before I joined the company, got compromised in some fashion, and had its ssh replaced with a trojaned copy (it was pretty obvious; I assume it was some sort of worm exploiting ssh). It had all sorts of services running on it. I removed the trojaned ssh and asked the engineer if he thought there might be an issue; he said he didn't believe so. So I left it – until a few weeks later, when the trojaned ssh came back. At that point I shut off the ethernet interfaces on the box until it could be retired. There was no technical reason it could not run behind the load balancer.

If you really need a front-end firewall, consider a load balancer that has such functionality built in, because at least then you have the ability to decrypt incoming SSL traffic and examine it – something very few firewall or IDS/IPS systems can do. (Another approach some people use is to decrypt at the load balancer and then mirror the decrypted traffic to the IDS/IPS, but that is less secure, of course.)

It really does kind of scare me, though, that people seem to blindly associate a firewall with security, especially when it's a web server that is running. Now, if those web servers had been running RPC services and were hacked that way, a firewall very likely could have helped.

At one company I worked for, my boss insisted we have firewalls in front of our load balancers. I couldn't convince him otherwise, so we deployed them, and they worked fine (for the most part). But the configuration wasn't really useful at all: basically we had a hole open in the firewall that pointed to the load balancer, which then pointed to the back-end systems. So the firewall wasn't protecting anything the load balancer wasn't handling already – a needless layer of complexity that didn't benefit anyone.

Myself, I'm not convinced they were compromised via an Apache web server exploit. Maybe they were compromised via an application running on top of Apache, but these days it's really rare to break into any web server directly via the web server software (whether it's Apache, IIS or whatever). I suspect they still don't really know how they were compromised, and some manager at Sony pointed to that outdated software as the cause just so they could complete their internal root-cause process and move on. Find something to tell Congress – anything that sounds reasonable!!

December 12, 2010

OpenBSD installer: party like it’s 2000

Filed under: linux,Random Thought,Security — Nate @ 12:07 am

[Random Thought] The original title was going to be “OpenBSD: only trivial changes in the installer in one heck of a long time”, a takeoff on the blurb on their site about remote exploits in the default install.

I like OpenBSD – well, I like it as a firewall; I love pf. I've used ipchains, iptables, ipfwadm, ipf (which I think pf was originally based on, having been spawned by a licensing dispute with the ipf author(s)), ipfw, Cisco PIX, and probably one or two more firewall interfaces, and pf is far and away the best I've come across. I absolutely detest Linux's firewall interfaces by contrast, going back almost 15 years now.

I do hate the OpenBSD userland tools, though – probably as much as the *BSD folks hate the Linux userland tools. I mean, how hard is it to include an init script of sorts to start and stop a service? But I do love pf, so in situations where I need a firewall I tend to opt for OpenBSD wherever possible (and when it's not possible, I don't resort to Linux; I'd rather resort to a commercial solution, perhaps a Juniper Netscreen or something).
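
For anyone who hasn't used it, part of the appeal of pf is how readable the ruleset is. This is just a minimal illustrative pf.conf for a small gateway (interface names and addresses are placeholders – the net5501s happen to use vr(4) NICs), not my actual ruleset:

    # pf.conf - minimal illustrative example, not a production ruleset
    ext_if = "vr0"                  # external interface (placeholder)
    int_if = "vr1"                  # internal interface (placeholder)
    lan    = "192.168.1.0/24"

    set skip on lo                  # don't filter loopback

    # NAT the LAN out the external interface (OpenBSD 4.7+ syntax)
    match out on $ext_if from $lan nat-to ($ext_if)

    block log all                   # default deny, log the drops
    pass in  on $int_if from $lan                   # LAN can reach us/the world
    pass out on $ext_if keep state                  # stateful outbound
    pass in  on $ext_if proto tcp to any port 22    # allow SSH from outside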

But this isn't about pf, or userland. This is about the OpenBSD installer. I swear it has had only the most trivial changes and improvements in at least the past 10 years, going back to when I first decided to try it out. To me that is sad, and the worst part is of course the disk partitioning interface. It's just horrible.

I picked up my 2nd Soekris net5501 system and installed OpenBSD 4.8 on it this afternoon, and was kind of saddened, though not surprised, that the installer still hasn't changed. My other Soekris runs OpenBSD 4.4 and has been going for a couple of years now. I first used pf back in about 2004, so I have been running it quite a while – nothing too complicated; it's really simple to understand and manage. My first experience with OpenBSD was back in 2000, I believe; I'm not sure but I want to say it was something like v2.8. I didn't get very far with it – for some reason it would kernel panic on our hardware after about a day of very light activity, so I went back to Linux.

I know pf has been ported to FreeBSD, and there is soon to be a fully supported Debian kFreeBSD distribution with the next major release of Debian (whenever that is), so perhaps that will be worth switching to for my pf needs – I don't know. Debian is another system that has been criticized over the years for having a rough installer, though I've got to say that in the past 4-5 years it has gotten to be a good installer, in my opinion. As a Debian user of more than 12 years, it hasn't given me a reason to switch away, but I still prefer Red Hat-based distros for “work” stuff.

First impressions are important, and the installer is that first impression. While I am not holding out hope that they will improve their installer, it would be nice if they did.

July 28, 2010

Vulnerable Smart grid (again)

Filed under: Security — Nate @ 8:09 am

A while back I wrote an entry about the vulnerable smart grid. Nothing has changed, of course, but there is a new article from The Register touting a new report that once again warns about security issues with the smart grid.

[..]

However, Ross Anderson, professor in security engineering at the University of Cambridge Computer Laboratory, warns that the move to smart metering introduces a “strategic vulnerability” that hackers might conceivably exploit to remotely switch off elements on the gas or electricity supply grid.

[..]

The rollout of an estimated 47 million smart meters to each of the UK’s 26 million homes by 2020 is estimated at costing around £8bn.

The only issue I have with that statement is the word might. Given the maturity of the organized computer criminals out there – whether individuals, organizations, or government-backed groups – you know they will exploit this stuff; it's only a matter of time, and I think the time required is not much more than it will take to deploy the smart grid itself. The only question is how much damage they can do. Could they go so far as to disable the power grid and brick the smart grid devices themselves, forcing a wholesale replacement? That is probably the worst case.

This is what happens when people who don’t know much of anything about technology are put in charge of using it. It’s a pretty scary thought, given the scale of these smart grid deployments and the amount of hype surrounding them.

March 28, 2010

Vulnerable Smart Grid

Filed under: News,Security — Nate @ 9:27 am

As some of you who know me may know, I have been against the whole concept of a “smart grid” for a few years now. The main reason is security. The more intelligence you put into something, especially with regard to computer technology, the more complex it becomes; and the more complex it becomes, the harder it is to protect.

Well, it seems the mainstream media has picked up on this, with an article from the AP:

SAN FRANCISCO – Computer-security researchers say new “smart” meters that are designed to help deliver electricity more efficiently also have flaws that could let hackers tamper with the power grid in previously impossible ways.

Kind of reminds me of the RFID-based identification schemes that have been coming online in the past few years – just as prone to security issues. In the case of the smart grid, my understanding is that the goal is to improve energy efficiency by allowing the power company to intelligently inform downstream customers of power conditions, so that things like heavy appliances can be proactively turned off in the event of a surge in usage, to prevent brownouts and blackouts.

Sounds nice in theory, like many things, but as someone who has worked with technology for about 20 years now, I see the quality of the stuff that comes out of companies, and I just have no confidence that such technology can be made “secure” and “cost effective” at the same time. At least not at our current level of technological sophistication – from an evolutionary standpoint, “technology” is still a baby; we're still figuring this stuff out; it's brand new. I don't mean to knock any company or organization in particular; they are not directly at fault. I just don't believe technology in general is ready for such a role, not in a society such as ours.

Today in many cases you can’t get a proper education in modern technology because the industries are moving too fast for the schools to keep up. Don’t get me started on organizations like OLPC and others trying to pitch laptop computers to schools in an attempt to make education better.

If you want to be green, in my opinion, get rid of the coal-fired power plants. I mean, it's the 21st century and we still have coal generating roughly half (or more) of our electricity? Hasn't anyone played Sim City?

Of course this concept doesn't just apply to the smart grid; it applies to everything as our civilization tries to put technology to work to improve our lives. Whether it's wifi, RFID, or online banking, all of these (and many others) expose us to significant security threats when not deployed properly, and in my experience the implementations that are not secure outnumber the ones that are by probably 1000:1. So we have a very real, significant trend of technology being deployed and then actively exploited. I'm sure you agree that our power grid is a fairly important resource – it was declared the most important engineering achievement of the 20th century.

While I don't believe it is possible yet, we are moving down the road to where scenes like those portrayed in the movie Eagle Eye (saw it recently, had it on my mind) will be achievable, especially now that many nations have spun up formal hacker teams to fight future cyber wars – and you have to admit, we are a pretty tempting target.

There will be a very real cost to this continued penetration of technology into our lives. In the end I think the cost will be too high, but time will tell I guess.

You could say I long for the earlier days of technology, when for the most part security “threats” were just people who wanted to poke around in systems, or compromise a host to “share” its bandwidth and disk space for hosting pirated software. Rarely was there any real malice behind any of it. Not true anymore.

And for those who are wondering – the answer is no. I have never, ever had a wireless access point hooked to my home network, and I do my online banking from Linux.
