To date I have not been too excited about HP's Moonshot system; I've been far more interested in AMD's SeaMicro. However, HP has now launched a Moonshot-based solution that does look very innovative and interesting.
HP Moonshot: VDI Edition (otherwise known as HP ConvergedSystem 100 for Hosted Desktops) takes advantage of (semi ironically enough) AMD APUs, which combine CPU and GPU in a single chip, and allows you to host up to 180 users in a 4.3U system, each with their own dedicated CPU and GPU! The GPU is the key thing here; that is an area where most VDI has fallen far short.
Everything, as you might imagine, is self contained within the Moonshot chassis; there is no external storage.
EACH USER gets:
- Quad core 1.5GHz CPU
- 128 GPU Cores (Radeon of course)
- 8GB RAM (shared with GPU)
- 32GB SSD
- Windows 7 OS
That sounds luxurious! I'd be really interested to see how this solution stacks up against competing VDI solutions.
They claim you can be up and going in as little as two hours -- with no hypervisor. This is bare metal.
You probably get superior availability as well given there are 45 different cartridges in the chassis; if one fails you lose only four desktops. The operational advantages (on paper at least) for something like this seem quite compelling.
I swear it seems a day doesn't go by without an SSD storage vendor touting their VDI cost savings (and they never seem to mention things like, you know, servers, LICENSING, GPUs, etc. - really annoys me).
VDI is not an area I have expertise in but I found this solution very interesting, and it seems like it is VERY dense at the same time.
HP doesn't seem to get specific on power usage other than saying you save a lot versus desktop systems. The APUs themselves seem to be rated at 15W each on the specs, which across 180 of them implies a minimum power usage of 2,700W. Each power supply in the Moonshot seems to have a rated steady-state output of 653W; with four of those that is roughly 2,600W for the whole chassis, though HP says the Moonshot supports only 1200W PSUs, so it's sort of confusing. The HP Power Advisor has no data for this module.
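Just to make the back-of-the-envelope math explicit (my own arithmetic against the published figures, nothing official from HP):

# Back-of-the-envelope power math for the Moonshot VDI chassis.
# Figures are taken from the spec sheets as I read them; treat them
# as assumptions rather than official HP numbers.
desktops = 45 * 4          # 45 cartridges, 4 APUs/desktops per cartridge
apu_tdp_watts = 15         # rated power per APU
psus = 4
psu_output_watts = 653     # rated steady-state output per PSU

print(desktops * apu_tdp_watts)   # 2700 W minimum just for the APUs
print(psus * psu_output_watts)    # 2612 W, roughly 2,600 W total PSU output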
It wouldn't surprise me if the power usage was higher than a typical VDI system, but given the target workload (and the capabilities offered) it still sounds like a very compelling solution.
Obviously the question is: might AMD one-up HP at their own game, given that AMD owns both these APUs and SeaMicro, and if so, might that upset HP?
First off, sorry about the lack of posts, there just hasn't been very much in tech that has inspired me recently. I'm sure part of the reason is my job has been fairly boring for a long time now, so I'm not being exposed to a whole lot of stuff. I don't mind that trade off for the moment - still a good change of pace compared to past companies. Hopefully 2014 will be a more interesting year.
3PAR still manages to create some exciting news every now and then, and they seem to be on a 6-month release cycle now, far more aggressive than they were pre-acquisition. Of course now they have far more resources. Their ability to execute really continues to amaze me, whether it is on the sales or on the technology side. I think technical support still needs some work though. In theory that aspect of things should be pretty easy to fix; it's just a matter of spending the $$ to get more good people. All in all though they've done a pretty amazing job at scaling 3PAR up; basically they are doing more than 10X the revenue they had before the acquisition in just a matter of a few short years.
This all comes from HP Discover - there is a bit more to write about, but as usual 3PAR is the main point of interest for me.
Turbo-charging the 3PAR 7000
Roughly six months ago 3PAR released their all-flash array, the 7450, which was basically a souped-up 7400 with faster CPUs, double the memory, software optimized for SSDs, and a self-imposed restriction that they would only sell it with flash (no spinning rust).
At the time they said they were still CPU bound and that their in-house ASIC was nowhere near being taxed to the limit. At the same time they could not put more (or more powerful) CPUs in the chassis due to cooling constraints in the relatively tiny 2U package that a pair of controllers comes in.
Given the fine grained software improvements they released earlier this year I (along with probably most everyone else) was not expecting that much more could be done. You can read in depth details, but highlights included:
- Adaptive read caching - mostly disabling read caching for SSDs, while also disabling prefetching of adjacent blocks. SSDs are so fast that there is little benefit to doing either. Not caching reads from SSDs has the benefit of dedicating more of the cache to writes.
- Adaptive write caching - with spinning disks 3PAR would write an entire 16kB block to disk because there is no extra penalty for doing so. With SSDs they are much more selective, writing only the small blocks that changed; they will not write 16kB if only 4kB has changed, since SSDs don't have the seek penalty that makes full-block writes on disk effectively free.
- Autonomic cache offload - More sophisticated cache management algorithms
- Multi tenant improvements - Multi threaded cache flushing, breaking up large sequential I/O requests into smaller chunks for the SSDs to ingest at a faster rate. 3PAR has always been about multi tenancy.
The net effect of all of these is more effective IOPS and throughput, and better efficiency as well.
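To make the adaptive write-caching idea a bit more concrete, here's a toy sketch (my own illustration, nothing to do with 3PAR's actual code) of flushing a dirty cache page to disk versus to SSD:

# Toy illustration (not 3PAR's implementation) of the adaptive write cache:
# flush the whole 16kB page to a spinning disk, where the full write is
# effectively free once the head is positioned, but only the dirty 4kB
# sub-blocks to an SSD.
PAGE_SIZE = 16 * 1024
SUB_BLOCK = 4 * 1024

def blocks_to_flush(dirty_sub_blocks, device_is_ssd):
    """Return (offset, length) byte ranges to write for one cache page."""
    if not device_is_ssd:
        return [(0, PAGE_SIZE)]                 # disk: write the whole page
    return [(i * SUB_BLOCK, SUB_BLOCK)          # SSD: write only what changed
            for i in sorted(dirty_sub_blocks)]

print(blocks_to_flush({1}, device_is_ssd=False))   # [(0, 16384)]
print(blocks_to_flush({1}, device_is_ssd=True))    # [(4096, 4096)]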
With these optimizations, the 7450 was rated at roughly 540,000 IOPS @ 0.6ms read latency (100% read). I guesstimated based on the SPC-1 results from the 7400 that a 7450 could perhaps reach around 410,000 IOPS. Just a guess though..
So imagine my surprise when they come out and say the same system with the same CPUs, memory etc is now performing at a level of 900,000 IOPS with a mere 0.7 milliseconds of latency.
The difference? Better software.
Mid range I/O scalability
[Table: mid-range 3PAR I/O scalability by model, including the older F200]
Stop interrupting me
What allowed 3PAR to reach this level of performance is leveraging a PCI Express feature called Message Signaled Interrupts - specifically MSI-X, which Wikipedia describes as:
MSI-X (first defined in PCI 3.0) permits a device to allocate up to 2048 interrupts. The single address used by original MSI was found to be restrictive for some architectures. In particular, it made it difficult to target individual interrupts to different processors, which is helpful in some high-speed networking applications. MSI-X allows a larger number of interrupts and gives each one a separate target address and data word. Devices with MSI-X do not necessarily support 2048 interrupts but at least 64 which is double the maximum MSI interrupts.
I'm not a hardware guy to this depth for sure. But I did immediately recognize MSI-X from a really complicated troubleshooting process I went through several years ago with some Broadcom network chips on Dell R610 servers (though the issue wasn't Dell specific). It ended up being a bug with how the Broadcom driver was handling (or not handling) MSI-X (Redhat bug here). It took practically a year of (off and on) troubleshooting before I came across that bug report. The solution was to disable MSI-X via a driver option (which the Dell-supplied drivers apparently did by default; the OS-supplied drivers did not).
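If you're curious whether your own devices are using MSI/MSI-X on a Linux box, /proc/interrupts is the quickest place to look. A rough sketch (the file's format varies a bit between kernel versions, so treat this as illustrative only):

# Rough sketch: count MSI/MSI-X interrupt vectors per device on Linux by
# parsing /proc/interrupts. The exact column layout varies between kernel
# versions, so this is illustrative rather than authoritative.
from collections import Counter

msi_vectors = Counter()
with open("/proc/interrupts") as f:
    for line in f:
        fields = line.split()
        if len(fields) < 2 or not fields[0].rstrip(":").isdigit():
            continue                       # skip the CPU header and named IRQs
        if any("MSI" in field for field in fields):
            msi_vectors[fields[-1]] += 1   # last column is the device name

for device, count in msi_vectors.most_common():
    print(device, count, "MSI/MSI-X vector(s)")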
So some fine-grained kernel work improving interrupt handling gave them a 1.6-fold improvement in performance.
This performance enhancement applies to the SAS-based 3PAR 7000-series only; the 10000-series had equivalent functionality already in place, and the previous generations (F/T platforms) are PCI-X based (and I believe all in their end-of-life phases), while this is a PCI Express-specific optimization. I think this level of optimization might really only help SSD workloads, as they push the controllers to the limit, unlike spinning rust.
This optimization also reduces latency on the system by 25%; because the CPU is no longer being interrupted nearly as often, it can not only do more work but do that work faster too.
Give me more!
There are several capacity improvements here as well.
There are new 480GB and 920GB SSDs available, which take the 4-node 3PAR 7400/7450 to a maximum raw capacity of 220TB (up from 96TB) across up to 240 SSDs.
Bigger entry level
The 3PAR 7200's spindle count is being increased by roughly two thirds - from 144 drives to 240 drives. The 7200 is equipped with only 8GB of data cache (4GB per controller - it is, I believe, the first/only 3PAR system with more control cache than data cache), but it still makes a good low-cost bulk data platform with support for up to 400TB of raw storage behind two controllers (which is basically the capacity of the previous generation's 4-node T400, which had 48GB of data cache, 16GB of control cache, 24 CPU cores and 4 ASICs -- obviously the T400 had a much higher price point!).
Not a big shocker here, just bigger drives - 4TB Nearline SAS is now supported across the 7k and 10k product lines, bringing the high-end 10800 array up to 3.2PB of raw capacity and the 7400 up to 1.1PB. These drives are obviously 3.5", so on the 7000 series you'll need the 3.5" drive cages to use them - the 10k line uses 3PAR's custom enclosures which support both 2.5" and 3.5" (though for 2.5" drives those enclosures are not compact like they are on the 7k).
I was told at some point that the 3PAR OS would start requiring RAID 6 for volumes on nearline drives - perhaps that point is now (I am not sure). I was also told you would be able to override this at an OS level if you wish; the parallel chunklet architecture recovers from failures far faster than competing architectures. Obviously, with the distributed architecture on 3PAR you are not losing any spindles to dedicated spares or dedicated parity drives.
If you are really paranoid about disk failures you can, on a per-volume basis, use quadruple mirroring on a 3PAR system - which means you can lose up to 75% of the disks in the system and still be OK on those volume(s).
3PAR also uses dynamic sparing -- if the default spare reserve space runs out, and you have additional unwritten capacity (3PAR views capacity as portions of drives, not whole drives), the system can sustain even more disk failures without data loss or the additional overhead of re-configuration, etc.
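To make the chunklet sparing idea a little more concrete, here's a toy model (mine, not 3PAR's): drives are carved into chunklets, there are no dedicated spare drives, and when a drive dies its chunklets are rebuilt into whatever free chunklets remain across the surviving drives - which is why rebuilds go wide and fast.

# Toy model (not 3PAR's code) of chunklet-level sparing: every drive is
# carved into fixed-size chunklets, and a failed drive's chunklets are
# rebuilt into free chunklets spread across all surviving drives.
class Drive:
    def __init__(self, name, used_chunklets, free_chunklets):
        self.name = name
        self.used = list(used_chunklets)   # chunklets holding data
        self.free = free_chunklets         # unwritten chunklet slots

def rebuild_after_failure(failed, survivors):
    """Redistribute the failed drive's chunklets across survivors' free space."""
    for chunklet in failed.used:
        target = max(survivors, key=lambda d: d.free)   # simple balancing
        if target.free == 0:
            raise RuntimeError("out of spare capacity")
        target.used.append(chunklet)
        target.free -= 1

drives = [Drive("pd0", ["ck%d" % i for i in range(6)], 4)]
drives += [Drive("pd%d" % i, [], 10) for i in range(1, 8)]
rebuild_after_failure(drives[0], drives[1:])
print({d.name: len(d.used) for d in drives[1:]})   # rebuilt chunklets spread out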
Like almost all things on 3PAR the settings can be changed on the fly without application impact and without up front planning or significant effort on the part of the customer.
The 3PAR 10400 has received a memory boost, doubling its memory from the original configuration. Basically it seems like they decided it was a better idea to unify the 10800 and 10400 controller configurations, though the data sheet seems to have some typos in it (pending clarification). I believe the numbers are 96GB of cache per controller (64GB data, 32GB control), giving a 4-node system 384GB of memory.
Compare this to the 7400 which has 16GB of cache per controller (8GB data, 8GB control) giving a 4-node system 64GB of memory. The 10400 has six times the cache, and still supports 3rd party cabinets.
Now if they would just double the 7200 and 7400's memory, that would be nice.
Keeps getting better
Multi tenant improvements
Six months ago 3PAR released their storage quality of service software offering called Priority Optimization. As mentioned before 3PAR has always been about multi tenancy, and due to their architecture they have managed to do a better job at it than pretty much anyone else. But it still wasn't perfect obviously - there was a need for real array based QoS. They delivered on that earlier this year and now have announced some significant improvements on that initial offering.
A brief recap of what their initial release was about: you were able to define both IOPS and bandwidth thresholds for a particular volume (or group of volumes), and the system would respond basically in real time to throttle the workload if it exceeded those levels. 3PAR has tons of customers that run multi-tenant configurations, so they went further, letting you define limits for both a customer and an application.
So as you can see from the picture above, the initial release allowed you to specify say 20,000 IOPS for a customer, and be able to over provision IOPS for individual applications that customer uses, allowing for maximum flexibility, efficiency and control at the same time.
So the initial release was all about basically rate limiting workloads on a multi-tenant system. I suppose you could argue that there wasn't a lot of QoS to it; it was more rate limiting.
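As a rough illustration of that hierarchical rate limiting (my own sketch, not anything resembling 3PAR's implementation): a customer gets a 20,000 IOPS cap, its applications each get their own caps that can add up to more than that, and an I/O is admitted only if both have headroom in the current accounting window.

# Rough illustration (not 3PAR's implementation) of hierarchical IOPS caps:
# an I/O is admitted only when both the application's own limit and its
# customer's overall limit still have headroom in the current window.
customer_limits = {"acme": 20_000}
app_limits = {"acme-web": 15_000, "acme-db": 10_000}  # oversubscribed on purpose
app_owner = {"acme-web": "acme", "acme-db": "acme"}

used_by_customer = {"acme": 0}
used_by_app = {"acme-web": 0, "acme-db": 0}

def admit(app, iops=1):
    """Account for the I/O and return True if both caps have headroom."""
    customer = app_owner[app]
    if used_by_app[app] + iops > app_limits[app]:
        return False
    if used_by_customer[customer] + iops > customer_limits[customer]:
        return False
    used_by_app[app] += iops
    used_by_customer[customer] += iops
    return True

# Each app can burst toward its own cap, but together they never exceed 20,000.
print(sum(admit("acme-web") for _ in range(16_000)))   # 15000 (app cap)
print(sum(admit("acme-db") for _ in range(10_000)))    # 5000 (customer cap hit)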
The new software is more QoS oriented - going beyond rate limiting they now have three new capabilities:
- Allows you to specify a performance minimum threshold for a given application/customer
- Allows you to specify a latency target for a given application
- Using 3PAR's virtual domains feature (which basically carves a 3PAR up into many different virtual arrays for service providers) you can now assign QoS to a given virtual domain! That is really cool.
Like almost everything 3PAR - configuring this is quite simple and does not require professional services.
3PAR Replication: M to N
With the latest software release 3PAR now supports M-to-N topologies for replication. Before this they supported 1-to-1 replication, as well as synchronous long distance replication.
All configurable via point and click interface no less, no professional services required.
New though is M to N.
Need Bigger? How about nine arrays all in some sort of replication party? That's a lot of arrows.
More scalable replication
On top of the new replication topology they've also tripled (or more) the various limits on the maximum number of volumes that can be replicated in the various modes. A four-node 3PAR can now replicate up to a maximum of 6,000 volumes in asynchronous mode and 2,400 volumes in synchronous mode.
You can also run up to 32 remote copy fibre channel links per system and up to 8 remote copy over IP links per system (RCIP links are dedicated 1GbE ports on each controller).
Peer motion enhancements
Peer motion is 3PAR's data mobility package which allows you to transparently move volumes between arrays. It came out a few years ago primarily as a means to provide ease of migration/upgrade between 3PAR systems, and was later extended to support EVA->3PAR migrations. HP's StoreVirtual platform also does peer motion, though as far as I know it is not yet directly inter-operable with 3PAR. Not sure if it ever will be.
Anyway, like most sophisticated things there are always caveats - the most glaring of which in Peer Motion was that it did not support SCSI reservations, which basically meant you couldn't use Peer Motion with VMware or other clustering software. With the latest software that limitation has been removed! VMware, Microsoft and Red Hat clustering are all supported now.
Persistent port enhancements
Persistent Ports is an availability feature 3PAR introduced about a year ago which basically leverages NPIV at the array level - it allows a controller to assume the Fibre Channel WWNs of its peer in the event the peer goes offline. This means fail over is much faster, and it removes the dependency on multipathing software for fault tolerance. That's not to say you should not use MPIO software - you still should, if for nothing else than better distribution of I/O across multiple HBAs, ports and controllers. But the improved recovery times are a welcome plus.
So what's new here?
- Added support for FCoE and iSCSI connections
- Laser loss detection - in the event a port is disconnected, persistent ports kick in (you don't need a full controller failure)
- The speed at which the fail over kicks in has been improved
Combine Persistent Ports with 3PAR Persistent cache on a 4-8 controller system and you have some pretty graceful fail over capabilities.
3PAR Persistent Cache was released back in 2010 I believe, no updates here, just put the reference here for people that may not know what it is since it is a fairly unique ability to have especially in the mid range.
Also being announced is a new set of FIPS 140-2 validated self encrypting drives with sizes ranging from 450GB 10k to 4TB nearline.
3PAR also has a 400GB SSD encrypting drive as well though I don't see any mention of FIPS validation on that unit.
3PAR arrays can either be encrypted or not encrypted - they do not allow you to mix/match. Also once you enable encryption on a 3PAR array it cannot be disabled.
I imagine you probably aren't allowed to use Peer Motion to move data from an encrypted to a non-encrypted system? Same goes for replication? I am not sure; I don't see any obvious clarification in the docs.
SSDs, like hard drives, all come with a chunk of hidden storage set aside for when blocks wear out or go bad; the drive transparently re-maps from this spare pool. I think SSDs take it to a new level with their wear leveling algorithms.
Anyway, 3PAR's Adaptive Sparing basically allows them to utilize some of the storage from this otherwise hidden pool on the SSDs. The argument is that 3PAR is already doing sparing at the sub-disk (chunklet) level; if a chunklet fails it is reconstructed on the fly - much like an SSD would do to itself if a segment of flash went bad. If too many chunklets fail over time on a disk/SSD, the system will proactively fail the device.
At the end of the day the customer gets more usable capacity out of the system without sacrificing any availability. Given the chunklet architecture I think this approach is probably going to be a fairly unique capability.
Lower cost SSDs
Take Adaptive Sparing and combine it with the new SSDs that are being released and you get SSD list pricing (on a per-GB basis) that is reduced by 50%. I'd really love to see an updated SPC-1 for the 7450 with these new lower-cost devices (plus MSI-X enhancements of course!); I'd be surprised if they weren't working on one already.
3PAR came out with their first web services API a year ago. They've since improved upon that, as well as adding enhancements for Openstack Havana (3PAR was the reference implementation for Fibre Channel in Openstack).
3PAR is continuing to kick butt in the marketplace with their 7000-series, with El Reg reporting that their mid-range products have had 300% year-over-year increases in sales and that they have overtaken IBM and NetApp in market share to be #2 behind EMC (23% vs 17%).
This might upset the ethernet vendors, but they also report that fibre channel is the largest and fastest growing storage protocol in the mid-range space (at least year over year), I'm sure again largely driven by 3PAR, which historically has been a fibre channel system. Fibre channel has 50% market share with 49% year-over-year growth.
Well, the elephant in the room that is still missing is some sort of SSD-based caching. HP went so far as to announce something roughly a year ago with their SmartCache technology for Gen8 systems, though they opted not to mention much about it this time around. It's something I have hounded 3PAR about for the past four years; I'm sure they are working on something......
Also I would like to see them support, or at least explain why they might not support, the Seagate Enterprise Turbo SSHD - which is a hybrid drive providing 32GB of eMLC flash cache in front of what I believe is an otherwise 10k RPM 300-600GB disk with self proclaimed upwards of 3X improvement in random I/O over 15k disks. There's even a FIPS 140-2 model available. I don't know what the price point of this drive is but find it hard to believe that it's not a cost effective alternative to flash tiering when you do not have a flash-based cache to work off of.
Lastly I would like to see some sort of automatic workload load balancing with Peer motion - as far as I know that does not yet exist. Though moving TBs of data around between arrays is not something to be taken lightly anyway!
Sorry for slackin off recently, there just hasn't been a whole lot out there that has gotten me fired up.
Not too long ago I ranted a bit about outages, basically saying if your site is down for a few hours, big whoop. It happens to everyone. The world is not going to end; you're not going to go out of business.
Now if your website is down for a week or multiple weeks the situation is a bit different. I saw on a news broadcast that experts had warned the White House that the new $600M+ healthcare.gov web site was not ready. But the people leading the project, as seems so typical, probably figured the claims were overblown (are they ever? in my experience they have not been - though I've never been involved in a $600M project before, or anywhere close to it) and decided to press onwards regardless.
So they had some architecture issues, some load issues, capacity problems, etc. I just thought to myself - this problem really sounds easy to solve from a technical standpoint. They apparently tried to do this to some extent (and failed) with various waiting screens. There are some recent reports that longer-term fixes may take weeks to months.
I've been on the receiving end of some pretty poorly written/designed applications where it didn't really matter how much hardware you had, it flat out wouldn't scale. I remember one situation in particular during an outage of some kind when the VP of Engineering interrupted us on the conference call and said: Guys - is there anything I can buy that would make this problem go away? The answer back to him was No. At this same company we had Oracle - obviously a big company in the database space - come to our company and tell us they had no other customers in the world doing what we were doing, and they could not guarantee results. Storage companies were telling us the same thing. Our OLTP database at the time was roughly 8 times the next largest Oracle OLTP database in the world (which was Amazon's). That was, by far, the most over-designed application I've ever supported. It was an interesting experience, and I learned a lot. Most other applications that I have supported suffered pretty serious design issues, though none were quite as bad as that one company's in particular.
My solution is simple - go old school, take a number and notify people when they can use the website.
Write a little basic app, point healthcare.gov to it, and allow people to register with really basic info like name and email address (or phone # if they prefer to use SMS). This would be an entirely separate application, not part of the regular web site. It's a really lightweight application; perhaps even store the data in some NoSQL solution (for speed), because worst case, if you lose the data people just have to come back and register again.
As part of the registration the site would say: we'll send you an email or SMS when your turn is up, with a code, and you'll have a 24-hour window in which to use the site (past that and you have to register for a new number). If they can get the infrastructure done perhaps they could even have an automated phone system call people as well.
Then simply allow only a fraction of the number of people onto the website that the system can handle; if they built it for 50,000 people at a time I would probably start with 20,000 the first day or two and see how it goes (20,000 people per day, not 20,000 simultaneous). Then ramp it up if the application is scaling OK. As users register successfully the other application sees this and the next wave of notifications is sent. Recently I heard that officials were recommending people sign up through the call center(s), which I suppose is an OK stop gap, but I can't imagine the throughput is very high there either.
I figure it may take a team of developers a few days to come up with such an app.
Shift the load of people trying to hit an expensive application over and over again to a really basic high performance registration application, and put the expensive application behind a barrier requiring an authentication code.
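To make the idea concrete, here's a bare-bones sketch of the take-a-number service I have in mind (all names and numbers are made up, and the actual email/SMS sending is hand-waved): registrations go into a simple queue, and a periodic job releases the next batch of access codes whenever the main site has headroom.

# Bare-bones sketch of the "take a number" idea (names and numbers made up):
# registrations go into a queue, a periodic job hands out time-limited access
# codes to the next batch, and the real site checks the code at its front door.
import secrets
import time
from collections import deque

DAILY_BATCH = 20_000            # start conservative, ramp up if the site holds
CODE_VALID_SECONDS = 24 * 3600  # the 24-hour window to actually sign up

waiting = deque()               # (contact, registered_at)
active_codes = {}               # code -> (contact, expires_at)

def register(contact):
    """Called by the lightweight front end when someone signs up for a slot."""
    waiting.append((contact, time.time()))
    return len(waiting)         # their rough place in line

def release_next_batch(batch_size=DAILY_BATCH):
    """Periodic job: issue codes to the next batch and hand them off for notification."""
    issued = []
    while waiting and len(issued) < batch_size:
        contact, _ = waiting.popleft()
        code = secrets.token_urlsafe(8)
        active_codes[code] = (contact, time.time() + CODE_VALID_SECONDS)
        issued.append((contact, code))   # an email/SMS sender would pick these up
    return issued

def code_is_valid(code):
    """The real site checks this before letting a visitor past the gate."""
    entry = active_codes.get(code)
    return entry is not None and entry[1] > time.time()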
IMO they should have done this from the beginning, perhaps even generating times in advance based on social security numbers or something.
All of this is really designed to manage the flood of initial registrations, once the tidal wave is handled then open the web site up w/o authentication anymore.
There should be a separate, static, high-speed site (on many CDNs) that has all of the information people would need to know when signing up, again something that is not directly connected to the transactional system. People can review this info in advance and that would make sign ups faster.
Last week Verizon made big news in the cloud industry that they were shifting gears significantly and were not going to have their clouds built on top of traditional enterprise equipment from the likes of HP, Cisco, EMC etc.
I can't find an article on it, but I recall hearing on CNBC that AT&T announced something similar - that it was going to result in them saving $2 billion over some period of time that I can't remember.
Today our friends at The Register reveal that this design win actually comes from AMD's Seamicro unit. AMD says they have been working closely with Verizon for two years on designs for a highly flexible and efficient platform to scale with.
Seamicro has a web page dedicated to this announcement.
Some of the capabilities include:
- Fine-grained server configuration options that match real life requirements, not just small, medium, large sizing, including processor speed (500 MHz to 2,000 MHz) and DRAM (.5 GB increments) options
- Shared disks across multiple server instances versus requiring each virtual machine to have its own dedicated drive
- Defined Storage quality of service by specifying performance up to 5,000 IOPS to meet the demands of the application being deployed, compared to best-effort performance
- Strict traffic isolation, data encryption, and data inspection with full featured firewalls that achieve Department of Defense and PCI compliance levels
- Reserved network performance for every virtual machine up to 500 Mbps
I don't see much more info than that. Questions that remain with me are what level of SMP they will support, and what processor(s) they are using (specifically whether they are AMD or Intel procs, since SeaMicro can use both; Intel has obviously been dominating the cloud landscape, so it would be nice to see a new large-scale deployment of AMD).
I have written about SeaMicro a couple times in the past, most recently comparing HP's Moonshot to the AMD platform. In those posts I mentioned how I felt that Moonshot fell far short of what SeaMicro seems to be capable of offering. Given Verizon's long history as a customer of HP, I can't help but assume that HP tried hard to get them to consider Moonshot but fell short on the technology (or timing, or both).
SeaMicro, to my knowledge (I don't follow micro servers too closely), is the only micro server platform that offers fully virtualized storage, both inside the chassis as well as more than 300TB of external storage. One of the unique abilities that sounds nice for larger-scale deployments is the ability to export essentially read-only snapshots of base operating systems to many micro servers for easier management (and, you could argue, better security given they are read only), without needing fancy SAN storage. It's also fairly mature (relative to the competition) given it's been on the market for several years now.
Verizon/Terremark obviously had some trouble competing with the more commodity players with their enterprise platform, both on cost and on capabilities. I was a vCloudExpress user for about a year, and worked through an RFP with them at one of my former companies for a disaster recovery project. Their cost model, like that of most cloud providers, was pretty insane. The assumption we had at the time was that we were a small company without much purchasing leverage, so we expected their cost to be pretty decent given the volumes a cloud provider can command. Reality set in quick when their cost was at least 5-6 fold what our cost was for the same capabilities from similar enterprise vendors.
Other providers had similar pricing models, and I continue to hear stories to this day about various providers costing too much relative to doing things in house (there really is no exception), with ROIs really never exceeding 12 months. I think I've said many times but I'll say it again - I'll be the first one to be willing to pay a premium for something that gives premium abilities. None of them come close to meeting that though. Not even in the same solar system at this point.
This new platform will certainly make Verizon's cloud offering more competitive, they are having to build an entirely new control platform for it though - not much off the shelf software here, simply because none of it is built to that level of scale. Such problems are difficult to address, and until you encounter them you probably won't anticipate what is required to solve them.
I am mainly curious whether these custom things that AMD built for Verizon will be available to other cloud players. I assume they will..
You can certainly count me as in the camp of folks that believed RIM/Blackberry had a chance to come back. However more recently I no longer feel this is possible.
The news today of Blackberry possibly cutting upwards of 40% of their staff before the end of the year is not the reason I no longer think a comeback is possible; it just gave me an excuse to write about something..
The problem stems mainly from the incredibly fast paced maturation (can't believe I just used that word) of the smart phone industry especially in the past three years. There was an opportunity for the likes of Blackberry, WebOS, and even Windows Phone to participate but they were not in the right place at the right time.
I can speak most accurately about WebOS so I'll cover a bit on that. WebOS had tons of cool concepts and ideas, but they lacked the resources to put together a fully solid product - it was always a work in progress (fix coming next version). I felt even before HP bought them (and the feeling never went away, even in the days of HP's big product announcements) that every day that went by WebOS fell further and further behind (obviously some of WebOS' key technologies took years for the competition to copy; go outside that narrow niche of cool stuff and it's pretty deserted). As much as I wanted to believe they had a chance in hell of catching up again (throw enough money at anything and you can do it) - there just wasn't (and isn't) anyone willing to commit to that level. It makes sense too - really the last major player left willing to commit to that level is Microsoft; their business is software and operating systems.
Though even before WebOS was released Palm was obviously a mess, what with their various spin offs, splitting the company divisions up, licensing things around, etc. They floundered without a workable (new) operating system for many years. I did not become a customer of Palm until I purchased a Pre back in 2009, so don't look at me as some Palm die hard because I was not. I did own a few Handspring Visors a long time ago, and the PalmOS compatibility layer that was available as an app on the Pre is what drove me to the Pre to begin with.
So onto a bit of RIM. I briefly used a Blackberry back in 2006-2008 - I forget the model, it was a strange sort of color device, I want to say monochrome-like color (I think this was it). It was great for email. I used it for a bit of basic web browsing but that was it - I never used it as a phone. I don't have personal experience supporting BIS/BES or whatever it's called, but I have read/heard almost universal hatred for those systems over the years. RIM obviously sat on their hands too long and the market got away from them. They tried to come up with something great with QNX and BB10, but the market has spoken - it's not great enough to stem the tide of switchers, or to bring (enough) customers back to make a difference.
Windows Phone.. or is it Windows Mobile.. Pocket PC anyone? Microsoft has been in the mobile game for a really long time obviously (it annoys me that press reporters often don't realize exactly how long Microsoft has been doing mobile -- and tablets, for that matter - not that they were good products, but they have been in the market). They kept re-inventing themselves and breaking backwards compatibility every time. Even after all that effort - what do they have to show for themselves? ~3.5% global market share? Isn't that about what the Apple Mac has? (maybe the Mac is a bit higher).
The mobile problem is compounded further though. At least with PCs there are (and have been for a long time) standards. Things were open & compatible. You can take a computer from HP or from Dell or from some local whitebox company and they'll all be able to run pretty much the same stuff, and even have a lot of similar components.
Mobile is different though: ARM SoCs, while having a common ancestor in the ARM instruction set, really seem to be different enough from one another that compatibility is a real issue between platforms. Add on top of that the disaster of the lack of a stable Linux driver ABI, which complicates things for developers even more (this is in large part why, I believe I read, FirefoxOS and/or Ubuntu phone run on top of Android's kernel/drivers).
All of that just means the barrier to entry is really high even at the most basic level of a handset. This obviously wasn't the case with the standardized form factor components(and software) of the PC era.
So with regards to the maturation of the market the signs are clear now - with Apple and Samsung having absolutely dominated the revenues and profits in the mobile handset space for years now - both players have shown for probably the past year to 18 months that growth is really levelling out.
With no other players showing even the slightest hint of competition against these behemoths, and with that levelling-off of growth, that tells me - sadly enough - that the opportunity is for the most part gone now. The market is becoming a commodity faster than I thought would happen, and I think many others feel the same way.
I don't believe Blackberry - or Nokia for that matter - would have been very successful as Android OEMs. Certainly not at the scale that they were at - perhaps with drastically reduced workforces they could have gotten by with a very small market share - but they would have been a shadow of their former selves regardless. Both companies made big bets going it alone and I admire them for trying - though neither worked out in the end.
Samsung may even go out as well, with the likes of Xiaomi (never heard of them till last week), or perhaps Huawei or Lenovo, coming in and butchering margins below where anyone can make money on the hardware front.
What really prompted this line of thinking though was re-watching the movie Pirates of Silicon Valley a couple of weeks ago following the release of that movie about Steve Jobs. I watched Pirates a long time ago but hadn't seen it since, this quote from the end of the movie really sticks with me when it comes to the whole mobile space:
Jobs, fresh from the launch of the Macintosh, is pitching a fit after realizing that Microsoft’s new Windows software utilizes his stolen interface and ideas. As Gates retreats from Jobs’ tantrum, Jobs screeches, “We have better stuff!”
Gates, turning, simply responds, “You don’t get it. That doesn’t matter.”
(the whole concept really gives me the chills to think about)
Android is the Windows of the mobile generation (just look at the rash of security-related news events reported about Android..). Ironically enough the more successful Android is the more licensing revenue Microsoft gets from it.
I suppose in part I should feel happy being that it is based on top of Linux - but for some reason I am not.
I suppose I should feel happy that Microsoft is stuck at 3-4% market share despite all of the efforts of the world's largest software company. But for some reason I am not.
I don't know if it's because of Google and their data gathering stuff, or if it's because I didn't want to see any one platform dominate as much as Android (and previously IOS) was.
I suppose there is a glimmer of hope in the incorporation of the Cyanogen folks to become a more formalized alternative to the Android that comes out of Google.
All that said I do plan to buy a Samsung Galaxy Note 3 soon as mentioned before. I've severed the attachment I had to WebOS and am ready to move on.
So obviously the big news of the day is Microsoft buying Nokia's handset division for a big chunk of change. Both seem to be spinning it as a good thing, a logical next step in their partnership. For Nokia it probably is a good thing, as it gives them an exit strategy from a business which hasn't been doing so hot. For Microsoft the deal is less attractive, with investors obviously agreeing, sending the stock down ~5% on the day.
Some folks are saying a big reason for this was perhaps Nokia's patents, which Microsoft apparently gets a ten-year license to rather than acquiring them outright (I can only wonder what that would have done for their war on Android). Many folks speculate that the CEO of Nokia may be the successor to Ballmer, who recently announced his retirement.
I'm going to go out on a limb here as I have nothing to lose and say this is because Nokia was seriously looking at throwing in the towel on the Windows Phone platform.
I think that because there really was no reason for Microsoft to buy Nokia (YET). Nokia was doing Microsoft's bidding, taking all the risk and reaping none of the rewards. They were sacrificing themselves slowly on the sword of Microsoft, and the investors were getting upset. I fully believe(d) that they would be acquired by Microsoft but not until the viability of Nokia was called into question or perhaps if Nokia was going to give up. I suppose the optimistic point of view would be Windows Phone is about to catapult and the acquisition cost is cheap relative to where it would be in the future. I'm not an optimist like that though! Microsoft obviously has a ton of money and has a strong track record of paying a large premium for companies. So I don't think value played a key role here.
Someone on CNBC this morning asked why Ballmer didn't leave an acquisition of this magnitude to his successor (this being at least the 2nd largest in the company's history) - someone who will be driving the future of the company. Though if Ballmer seriously thinks this Elop fella is the one to take the reins, I think that would probably be a mistake, given Elop's recent track record of basically burning the company to the ground to make a bet on a new platform. Microsoft has a ton of businesses, and they need to not burn them to the ground in an effort to chase after the next shiny thing. Elop sounds like a great leader for devices. I don't know who would make a good MS CEO - that's not an area I claim any level of expertise in!
So I think Nokia was at least talking seriously about a major shift in strategy internally -- perhaps just calling Microsoft's bluff - in order to get Microsoft to finally move and acquire them while their share price is where it's at now.
In the end it doesn't matter to me of course, I'm not an investor regardless, I'm not vested for or against the platform. I do admire Microsoft a bit for not giving up though. They have had some major adoption issues with their new platform forcing Nokia to make major price cuts. They've also been able to capitalize on the chaos at Blackberry and wrestle the #3 spot from them. Though globally that #3 spot as it stands today, is still a rounding error in the grand scheme of things.
I just hope for the sake of their users they don't do to Windows Phone 8 what they did to 7, and 6.x, and perhaps prior versions in basically abandoning them and making the newer versions completely incompatible. Windows on desktops has been able to sustain such a large presence in a big part due to such massive amounts of compatibility. I'm honestly still shocked I can run a game that came out in 1995 on a modern 64-bit Windows 7 system without any modifications. To even propose such an idea for the Linux platform just makes me laugh, or cry, or maybe a little bit of both.
Paraphrasing from CNBC yesterday:
OMG!! CARL ICHAN IS TWEETERING ABOUT APPLE -- NASDAQ IS DOWN -- PEOPLE CAN'T TRADE ON THIS NEWS!!
Let me preface this a bit further and say in the line of work that I am in I have been on the receiving end of so many outages of various types ... some of them really nasty lasting hours, even down for multiple days, some involving some big data losses, many had me up for 20-30+ hours straight. Some of the most fun times I've had have been during big outages. Finally, some excitement!
My favorite outage that I can recall was one at AT&T about nine years ago. They were doing a massive migration to a new platform to support number portability, among other things. So they asked us to hold transactions in our queues while they were down for ~6-8 hours (the company I was at handled most of the mobile e-commerce for them at the time). So we did. 8 hours passed.. 10... 12... 16.. still down. No ETA. It wasn't a huge deal for us for the first day; it became somewhat troublesome by the 3rd day as these queues were in memory and we had hard limits on memory (32-bit). But the folks in the AT&T stores were really hurting as they could not provision any new phones; all new orders had to be done on paper, then input into the computer system later. I forget how long the outage was in total, I think around 4 days. I looked at the whole situation and couldn't help but laugh. Lots of laughter. 8 hours to 4 days.. thousands of orders being placed via paper, by one of, if not the, largest telcos in the world.
So I think I have a better perspective on this sort of thing than those less technical folk who freak out about stuff like the NASDAQ outage yesterday.
Taking NASDAQ specifically it was pretty absurd to see the whole situation unfold yesterday (I worked from home so saw the full thing end to end on CNBC). People coming on air and saying how they were frustrated that NASDAQ wasn't giving any information as to what was going on, speculation about complications being a public company and an exchange at the same time (and disclosure requirements etc). Then a bigwig comes on, Harvey Pitt, a former SEC chairman and just seems to ream NASDAQ, saying how it's totally unacceptable that they are down, there should be heavy fines and zero tolerance.
Come on folks - get a grip. It's a stock exchange. It's not a 911 system. People aren't going to die. If your system is so fragile that it can't survive a few hours of downtime on an exchange and can't tolerate a little volatility then it's your system that needs to be fixed.
You don't have control over the exchange, or the internet peering points between you and it (or between your broker and it); there are so many points of failure that you should have a more robust system. The exchanges, I have no doubt, are incredibly complex, convoluted and obscure things that are constantly under assault by people trying to get trades through as quickly as possible, like those folks that manipulate the market.
Even the experts seem to be moving too fast, just barely a year ago Knight Capital lost $400 million in a matter of minutes due to a software bug. They later were forced to sell the company. More recently the almighty Goldman Sachs did something similar, last I saw they were hoping they would only lose $100 million as a result of that error.
Slow down, take a break. Things are moving too fast. I see people on CNBC constantly argue that the markets are really important because so many people have 401ks, IRAs etc. But reality doesn't agree with them. I don't recall the specific stat but I've heard it tossed around a few times something along the lines of 85% of stocks are owned by 5% of the population.
Another stat I've heard tossed around is ~80% of the transactions these exchanges get today are from high frequency traders. So if HFT somehow goes away then these exchanges are in trouble revenue wise.
Those two stats alone tell me a lot about the state of the markets. I'm no financial expert obviously, but I have watched CNBC a lot for many years now (going back to at least 2007) on a daily basis (RIP Mark Haines). I am often fascinated by the commentary, and the general absurdities of the market structure (I find it's more comedy than anything else). There's been very little investing going on for a very long time. Really the stock markets in general are outright gambling. Stocks rarely move on fundamentals anymore (not sure when they last did); it's all buzz and emotions.
It's no wonder so many startups aren't interested in going IPO, and some other big established brands are wanting desperately to go private - to get away from activist investors and the overall pressure to run your company in a market-pleasing fashion rather than doing what's best for the long-term health of the company itself (and thus long-term shareholders).
If you're that dependent on market liquidity (e.g. the schemes folks like Lehman Brothers were running, rolling over their financing every night) - you're doing something very wrong, and you deserve to get burned by it.
These exchanges are closed for upwards of 16 hours a day (and closed weekends and holidays!), there is some limited after hours trading, and some stocks trade on other exchanges as well, but the relative liquidity there is small.
This goes beyond NASDAQ of course, to outages in general - whether it was the recent Google, Amazon or Microsoft outages, or others (I suffered through two yesterday myself that were the result of a 3rd party, one of which was literally minutes apart from when NASDAQ went down..).
So chill out. Fix the problem, don't rush or you might make a mistake and make things worse. Get it right, try not to let that particular scenario happen in the future.
It's just a website, it's just a stock exchange. It's not a nuclear reactor that is on the verge of melt down.
Breathe. The world isn't going to end because your site/business happens to be offline for a few hours.
Internet hippies at it again!
I put the original comments in italics, and the non italic stuff is the IPv6 person responding. I mean honestly I can't help but laugh.
I was a part of the internet when it started and was the wild wild west. Everyone had nearly unlimited ip addresses and NOBODY used them for several reasons. First nobody put everything on the internet.
That was then. Now is now. The billion people on Facebook, Twitter, Flickr don't put anything online? Sure, it's all crap, but it sure is not nothing.
It's just Dumb to put workstations on the internet... Sally in accounting does not need a public IP and all it does is make her computer easier to target and attack. Hiding behind that router on a separate private network is far more secure. Plus it is easier to defend a single point of entry than it is to defend a 255.255.0.0 address space from the world.
Bullsh*t. If in IPv4 your internal network would be 192.168.10.0/24, you can define an IPv6 range for that as well, e.g. 2001:db8:1234:10::/72. And then you put in your firewall:
2001:db8:1234:10::/72 Inbound: DENY ALL
Done. Hard? No. Harder than IPv4? No. Easier? Yes. Sally needs direct connection to Tom in the other branch (for file transfer, video conference, etc):
2001:db8:1234:10::5411/128 Inbound: ALLOW ALL FROM 2001:db8:1234:11::703/128
Good luck telling your IPv4 CGN ISP you need a port forwarded.
Second I have yet to have someone give me a real need for having everything on the internet with a direct address. you have zero need to have your toaster accessible from the internet.
Oh yeah? Sally might need that 30 GB Powerpoint presentation of her coworker in the other branch. Or that 100 MB customer database. Well, you know, this [xkcd.com]. How much easier would that be with a very simple app that even you could hack together that sends a file from one IP address to the other. Simple and fast, with IPv6. Try it with IPv4.
It's amazing to me how folks like this think that everything should just be directly connected to the internet. Apparently this IPv6 person hasn't heard of a file server before, or a site-to-site VPN. Even with direct accessibility I would want to enforce VPN between the sites, if nothing else so I don't have to worry about any communications being unencrypted (or, in some cases, not WAN optimized). Same goes for remote workers - if you're at a remote location and want to talk to a computer on the corporate LAN or data center, get on the VPN. I don't care if you have a direct route to it or not (in fact I would ensure you did not, so you have no choice).
The problems this person cites have been solved for over a decade.
I'm sorry, but anyone who argues that 2001:db8:1234:10::5411/128 is simpler than 192.168.10.0/24 is just... not all there.
The solutions perhaps may not be as clean as something more native, though moving 30GB of data over anyone's office internet connection would be a very bad thing to do without arranging something with IT first (do it off hours, throttle it, something).
The point is the solutions exist, and they work. The fact remains that if you go native IPv6 you're going to have MUCH MORE PAIN than with any of the hacks you may have to do with IPv4 today. IPv6 fans fail to acknowledge that up front. They attack IPv4/NAT/etc and just want the world to turn off IPv4 and flip everyone over.
I have said for years I don't look forward to IPv6 myself (mainly for the numbering scheme, it sucks hard). If the time comes where I need IPv6 for myself or the organization I work for there are other means to get it (e.g. NAT - at the load balancer level in my case) that will work for years to come (until perhaps there is some sort of mission critical mass of outbound IPv6 connectivity that I need - I don't see that in the next 5-8 years - beyond that who knows maybe I won't be doing networking anymore so won't care).
I'm sure people like me are the kind of folks IPv6 people hate. I don't blame 'em I suppose.
There is nothing - absolutely nothing that bugs me about IPv4 today. Not a damn thing hinders me or the organizations I have worked for. At one point SSL virtual hosting was an issue, but even that is solved with SNI (which I just started using fairly recently actually).
The only possibility of having an issue I think is perhaps if my organization merged with another and there was some overlapping IP space. Haven't personally encountered that problem though in a very long time (9 years - and even then we just setup a bunch of 1:1 NATs I think - I wasn't the network engineer at the time so wasn't my problem).
I remember one company I worked for 13 years ago - they registered their own /24 network back in the early 90s, because the people at the time believed they had to in order to run an internal network. The IP space never got used (to my knowledge) and it was just lingering around - the contact info was out of date and we didn't have any access to it (not that we needed it, was more a funny story to tell).
When I set this server up at Hurricane Electric, one of the things they asked me was if I wanted IPv6 connectivity, since they do it natively I believe (one of the biggest IPv6 providers out there I think globally ?). I thought about it for a few seconds and declined, don't need it.
IPv6 fans need to come up with better justification for the world to switch other than "the internet is peer to peer and everyone needs a unique address" (because that reason doesn't cut it for folks like me, and given the world's glacial pace of migration I think my view is the norm rather than the exception). I've never really cared about peer to peer anything. The internet in general has been client-server and will likely remain so for some time (especially given the average gap between download and upload bandwidth on your typical broadband connection)
Given I have a server with ~3.6TB of usable space on a 100Mbps unlimited-bandwidth connection less than 25 milliseconds from my home, I'd trade download bandwidth for upload bandwidth in a HEARTBEAT. I'd love to be able to get something like 25/25Mbps; unfortunately the best upload I can get is 5Mbps - while I can get 150Mbps down - and my current plan is more like 2Mbps up and 16Mbps down.
ANYWAY........ I had a good laugh at least.
Back to your regularly scheduled programming..
The big 2-0. Debian was the 2nd Linux I cut my teeth on, the first being Slackware 3.x. I switched to Debian 2.0 (hamm) in 1998 when it first came out. This was before apt existed (I think that came with Debian 2.2, but I'm not sure). I still remember the torture that was dselect, and much to my own horror dselect apparently still lives (though I had to apt-get install it). It was torture because I literally spent 4-6 hours going through the packages, selecting them one at a time. There may have been an easier way to do it back then, I'm not sure; I was still new to the system.
I have been with Debian ever since; hard to believe it's been about 15 years since I first installed it. I have, with only one exception, stuck to stable the entire time. The exception I think was in between 2.2 and 3.0 - that delay was quite large, so I spent some time on the testing distribution. Unlike my early days running Linux I no longer care about the bleeding edge. Perhaps because the bleeding edge isn't as important as it once was (to get basic functionality out of the system, for example).
Debian has never failed me during a software update, or even major software upgrade. Some of the upgrades were painful (not Debian's fault - for example going from Cyrus IMAP 1.x to 2.x was really painful). I do not have any systems that have lasted long enough to traverse more than one or two major system upgrades, hardware always gets retired. But unlike some other distributions major upgrades were fully supported and worked quite well.
I intentionally avoided Red Hat in my early days specifically because it was deemed easier to use. I started with Slackware, and then Debian. I spent hours compiling things, whether it was X11, KDE 0.x, QT, GTK, Gnome, GIMP.. I built my own kernels from source, even with some custom patches (haven't seriously done this since Linux 2.2). I learned a lot - the hard way, I guess you could say. Which is in part why I do struggle when advising people who want to learn Linux what the best way is (books, training, etc.). I don't know, since I did it another way, a way that takes many years. Most people don't have that kind of patience. At the time of course I really didn't realize those skills would become so valuable later in life; it was more of a personal challenge for myself I suppose.
I have used a few variants/forks of Debian over the years, most recently of course being Ubuntu. I have used Ubuntu exclusively on my laptops going back several years(perhaps even to 2006 I don't remember). I have supported Ubuntu in server environments for the past roughly three years. I mainly chose Ubuntu for the laptops and desktops for the obvious reason - hardware compatibility. Debian (stable) of course tends to lag behind hardware support. Though these days I'm still happy running Ubuntu 10.04 LTS desktop .. which is EOL now. Haven't decided what my next move is, not really thinking about it since what I have works fine still. Probably think more whenever I get my next hardware refresh.
I also briefly used Corel Linux, of which I still have the inflatable Corel penguin sitting on my desk at work; it has followed me to every job for the past 13 years and still keeps its air. I don't know why I have kept it for so long. Corel Linux was interesting in that they ported some of their own Windows apps over to Linux with Wine - their office suite and some graphics programs. They also made a custom KDE file manager, if I recall right, with built-in CIFS/SMB support. Other than that it wasn't much to write home about. Like most things on Linux the desktop apps were very fragile, and being closed source they did not last long (compatibility wise you could not run them on other systems) after Corel Linux folded. My early Debian systems that I used as desktops at least got butchered by me installing custom stuff on top of them. Linux works best when you stick with the OS packages, and that's something I did not do in the early days. These days I go to semi-extreme lengths to make sure everything (within my abilities) is packaged in a Debian package before installation.
I used to participate a lot in the debian-user mailing list eons ago, though I haven't since due to lack of time. At the time at least that list had massive volume - it was just insane the amount of email I got from it. Looking now: August 2013 had roughly 1,300 messages, vs almost 6,000 in August 2001! Even more insane was the spam I got long after I unsubscribed; it persisted for years until I terminated the email address associated with that list. I credit one job offer, a bit over ten years ago now, to my participation on that (and other) mailing lists at the time, as I specifically called them out in my references.
That being said, despite my devotion to Debian on my home systems (servers at least - this blog runs on Debian 7), I still prefer Red Hat for commercial/larger scale stuff. Even after the past three years supporting Ubuntu, where the experience has been ok, I still like RH more. At the same time I do not like RH for my own personal use. It basically comes down to how the system is managed. I was going to go into reasons why I like RH more for this or that, but decided not to since it is off topic for this post.
I've never seen Toy Story - the movie whose characters Debian has named its releases after since at least 2.0, perhaps longer. Not really my kind of flick; I have no intention of ever seeing it.
Here's a really old screenshot from my system back in the day. I don't remember if this is Slackware or Debian; the kernel being compiled, 2.1.121, came out in September 1998, so right about the time I made the switch. Looks like I am compiling GIMP 1.01 and some version of XFree86, and downloading a KDE snapshot (I think all of that was pre-1.0 KDE). And look, xfishtank in the background! I miss that. These days GNOME and KDE take over the root window, making things like xfishtank not visible when using them (last I tried at least). xpenguins is another cool one that does still work with GNOME.
So, happy 20th birthday Debian. It has been interesting to watch you grow up, and it's nice to see you're still going strong.
The only thing technical related to this topic is the fact that Yahoo! yanked the guy's site. I suppose I can understand why, but I am very glad that the site (at least for the moment) lives on at a mirror.
I felt this guy took the time to think through and write down his thoughts, and did the world a favor by showing his state of mind. So the least I could do is read it - and perhaps comment on it a bit (with whatever respect I can give).
From what I can tell he committed suicide because he felt his mind was going, he was no longer as productive to society as he wanted to be, and he had a very negative outlook on near-term civilization as we know it.
He took his life two days ago, on his 60th birthday, in a police parking lot with a self-inflicted gunshot wound to the head.
I have no idea who Martin Manley was but it was very interesting to see his line of thinking.
Some good quotes:
I began seeing the problems that come with aging some time ago. I was sick of leaving the garage door open overnight. I was sick of forgetting to zip up when I put on my pants. I was sick of forgetting the names of my best friends. I was sick of going downstairs and having no idea why. I was sick of watching a movie, going to my account on IMDB to type up a review and realizing I've already seen it and, worse, already written a review! I was sick of having to dig through the trash to find an envelope that was sent to me so I could remember my own address - especially since I lived in the same place for the last nine years!
I didn’t want to die alone. I didn’t want to die of old age. I didn’t want to die after years of unproductivity. I didn’t want to die having my chin and my butt wiped by someone who might forget which cloth they used for which. I didn’t want to die of a stroke or cancer or heart attack or Alzheimer’s. I decided I was gettin’ out while the gettin’ was good and while I could still produce this website!
He does mention a life insurance policy that expires next year, which he wouldn't have been able to afford to renew, so that money can go to the folks he cares about. Though I thought most, if not all, such policies excluded suicide. I am not sure though, I never looked into it. He seems intelligent enough that he would have known the details of the policy he had.
I felt pretty good about being prepared for economic collapse – the primary reason being all the gold and silver I owned. But, then one day I realized that all the gold and silver and guns and ammo and dried food and toilet paper in the world wouldn’t prevent me from seeing the calamity with my own eyes - either ignoring other's plight or succumbing to it. And, that’s something I decided I simply was not willing to live through.
Right with him there, except for the part about being prepared. I acknowledged a long time ago that there's no point in trying to prepare for such an event; the resources required would be pretty enormous. My best friend (who is reading this, HI!!!) has told me on a couple of occasions to go live with him in a cabin in the woods and live off the land (in the event of total collapse)... Not feasible for various reasons I don't want to get into here.
But, if you plan to stick around, then you better plan to watch an economic collapse that will be worse than anything you can imagine.
It's frustrating to me to see all of our leaders, whether in corporations or government, show such - I'm not sure what the words are other than to call it something like false confidence. Hiding the truth because sentiment is such an important factor in the economy. It's everywhere: the more I see folks talk, the more I see that in most cases they really have no idea what they are doing, they are just hoping it works out.
The more I learn the more I realize how young of a civilization we really are and how little we actually know.
What pisses me off more than anything is the system in place that tries to educate us so we think we know. So we have faith in those that are making the decisions.
I'm the first one to admit I don't know what the answers are (macro global economic/political type things) -- but I'm also (one of) the first to admit that uncertainty in the first place, which would probably make me a bad leader. I can't portray confidence because I don't have confidence (in that stuff anyway - I believe I do portray confidence when it comes to the tech things I work with), but I do have honesty, which is what this Martin fellow seems to have a lot of as well. Most people don't want the truth, they just want to feel good.
One of the only videos I ever uploaded to YouTube was this, which is a good illustration from the corporate side of things. There is another bit (I haven't been able to find it, last I checked) which showed the same sort of thing from our previous president, walking through his various descriptions of the impending economic downfall - from storm clouds to whatever it was in the end.
I'm in complete agreement with this Martin guy though: what we experienced in 2008-2010 or whatever is nothing compared to what is coming. When that will be exactly, I'm not sure; folks like the Fed seem to be able to pull rabbits out of their hats to drive the Ponzi scheme along just a bit further. My general expectation is within the next two decades, and I think that is probably a conservative estimate.
It is unfortunate that suicide is among those topics considered taboo. People don't talk about it. The common theme is: mention the word and you're deemed crazy, and they want to lock you up in a padded room and fill you with meds until you conform, or die in the process.
There was a news report on NBC that I saw last year about a facility (hospital) where they assist people and their families in preparing for when that day will come. People perhaps like Martin, who don't want to live out their lives as a burden to others, unable to mentally and/or physically perform things. I thought it was pretty amazing to see. They go through tons of questions with the patient about scenarios and what to do in those scenarios, so when the time comes there is no doubt. There won't be some random family member saying KEEP THEM ALIVE I DON'T CARE IF IT COSTS A MILLION $.
As with many topics, there was (in retrospect - it really didn't hit me until a few years ago) an excellent Star Trek: The Next Generation episode on this very subject. It was called Half a Life, from Season 4 (1991). At the time when I saw it, I suppose you could say I didn't understand it, as I found the episode quite boring (well, the special effects in the beginning were pretty neat). But as the health care debate picked up a few years ago I realized that episode told an interesting story that I never bothered to appreciate until then.
Martin lists reasons he considered not to commit suicide (paraphrasing(?) them briefly - see the site for more details):
- Loved ones - obvious, people will miss you. More important perhaps is if you are in a situation where you are supporting someone else and they are dependent upon you. Martin was not in this situation.
- Want to see the future. Live out retirement, travel the world perhaps, read books, play backgammon, look at granny porn (my grandfather did this a lot in the years before he died; honestly I did not know such porn existed until my sister and mother told me).
- People want to accomplish as much as they can in their lives and they don’t want to run out of time before they do it. Of course, for people who think that way, they never fulfill all those accomplishments anyway and they never will. So, the only thing to do is keep chasing them until you die.
- The last major reason I thought of for why people want to live indefinitely is the whole notion of leaving a legacy.
I once had a quasi bucket list when I was about 22 – things to accomplish by the time I was 30. When 30 came around and I hadn’t accomplish them, I decided the bucket list idea was stupid.
There will always be reasons to want to stay alive another year or five years or 10 years. It wouldn’t have mattered how long I lived, there would have been hundreds or thousands of itches to scratch!
I could take pride in the fact that I wasn’t going to be sucking on the nipple of the federal debt by taking social security and medicare. When the US economy collapses, it won't have been me that contributed to taking it down.
Here he touches on life insurance again - I guess they do pay out for suicide:
Another reason why age 60 is ideal is that my life insurance expires next year and I would not be able to afford to get new insurance without paying a ton. And, it requires two years of waiting - once you get insurance - before you can commit suicide and still have the beneficiaries receive the death benefit.
Holy sh*t, he even brings up the aforementioned Half a Life episode of TNG. He devotes several paragraphs to it!
Then he goes into how he did it - a gun.
One of the problems with shooting oneself is the obvious mess. I thought about that a lot. I didn’t want anyone I knew discovering my body and I didn’t want to make a mess in the house – something my sister or my landlord would have to deal with. No way.
I finally decided the best way to do it would be at 5AM on August 15, 2013 at the far southeast end of the parking lot at the Overland Park Police Station. If everything worked out right – and I’m sure it did, I called 911 at 5AM. I told them “I want to report a suicide at the south end of the parking lot of the Overland Park Police Station at 123rd and Metcalf. Bang.”
He left a note on himself which read in part:
“I committed suicide of my own free will. I am not under the influence of any drugs. I am sorry for your inconvenience! You will be contacted within a matter of hours by my sister. She will find out about this by an overnight letter and/or email I sent to her which she will get this morning. In it, I explain the exact location where I shot myself and gave her your phone number. At that time, she will tell you who I am. If you discover who I am prior to her call, please do not contact her. I do not want her (or anyone else I sent letters to overnight) to find out about it from you. I want them to find out about it from me. Thank you!
The act of suicide can be horrible for those left behind. I couldn’t control the fact of the matter, but I could control the circumstances. I believe the way I did it, coupled with the overnight letters/emails and this web site, is the best I can do to mitigate the hurt.
He wrote a bunch more, but the most interesting stuff I suspect was in the first few pages that I read (up until the "Gun Control" topic, which is only semi-related).
It is fascinating to me to see the level of thought and analysis that went into his decision. Whether you agree with the decision or not is up to you, but to me the point is that it was his decision. Call him greedy, or selfish, or crazy, or whatever, but I firmly believe that having the "freedom" (though I think it is illegal - hence the quotes - which is crazy) to make this decision and follow through on it is an important right to have.
Now if you cause major harm as a result of your action (say, driving down the wrong direction on a freeway) then that's a different scenario. Martin was incredibly thoughtful in how he handled the whole situation, and for that I... well, I don't really know how to put that into words.
If you are someone who would want to rob someone else of this right, then I'd say it is you who are greedy, selfish or crazy. The only exception, again, would be if you were somehow economically dependent upon them.
While I understand Yahoo!'s decision to yank the site, as it probably was against their TOS, I only wish Martin had chosen a better place to host it! Fortunately there is a mirror; I'll probably snag a copy of it myself just in case.
I don't consider myself a very emotional person, and reading his writing really did not evoke any emotional response. I went to my first (thus far, only) funeral almost two years ago now, for a cousin of mine who also committed suicide (also via a self-inflicted gunshot). I hadn't seen him since the late 80s, and I had no emotional response to that either. I did feel bad for not "feeling" bad though, as strange as that may sound to write. I sat through the funeral as his loved ones and friends told stories about him and such for a few hours. It was an interesting experience. I'm sorry he is gone, but from what I knew of the broader situations the family had experienced over the past few decades I can totally understand his decision. Anyway, that is sort of off topic.
I hope the folks who were closest to Martin understand, accept, and most importantly support his decision.
I suppose the obvious thing to say here is may he rest in peace.
Any further discussion on the topic I am more than happy to have offline.