Based on an estimate that HTC has shipped 30 million Android devices, Asymco calculates that Microsoft has seen $150 million in revenue from Android. With Microsoft selling 2 million Windows Phone licenses, its Windows Phone revenue comes in at $30 million.
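The back-of-the-envelope math implied by Asymco's figures is simple enough to sketch (the per-unit amounts below are just what those totals work out to, not official numbers):

```python
# Implied per-unit economics from Asymco's estimates.
android_devices = 30_000_000   # HTC Android devices shipped (estimate)
android_revenue = 150_000_000  # Microsoft's revenue from Android (estimate)
wp_licenses = 2_000_000        # Windows Phone licenses sold
wp_revenue = 30_000_000        # Windows Phone license revenue (estimate)

print(android_revenue / android_devices)  # implied royalty per HTC device: $5
print(wp_revenue / wp_licenses)           # implied fee per WP license: $15
print(android_revenue / wp_revenue)       # Android brings in 5x the revenue
```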
Microsoft is making more money off of Android than it is off its own cutting-edge mobile platform.
Not tech related, but a sad day for me: Mark Haines, a CNBC anchor for decades, died recently at the age of 65. I have been watching his show for at least the past five years, and he really was my favorite guy: always honest, never afraid to confront someone on a topic, and never afraid to speak his own mind. He had been with CNBC since the day it launched in 1989.
Mark Haines correctly called the top of the Nasdaq in 2000, and also correctly called the bottom of the markets (known as the Haines Bottom at the time) in 2009. With that in mind, he called the recent tech IPOs (especially LinkedIn) a bubble, regardless of whether or not the environment is different from the dot-com days.
This sums him up pretty well in my eyes, in the words of Bob Pisani: "How do I feel about you as a person? Do you make sense to me? Does your argument make sense to me? If it doesn't make sense to me, I'm not going to have that much respect for you - I don't care what your title is, I don't care what your position is. I don't care if you're a famous economist, I don't care if you're a world leader or not. If it makes sense to me, and I think you have a point to make, I'm going to give you the time and respect your opinion - if it doesn't, I'm going to come back at you."
There's a tribute show for him today at 4PM PDT on CNBC.
He was an awesome person, and will be greatly missed by me.
Just a quick post: I came across this on the MySQL Performance Blog and thought it was a really well written paper. It covers vertical scaling in the most current versions of MySQL, what the major bottlenecks are when scaling to more CPU cores, and how to extract the most I/O out of today's modern server hardware.
What I'd like to see, just for comparison purposes, is someone running the latest & greatest MySQL, vertically scaling it to 48 cores, and comparing it against Oracle Standard Edition on the same 48 cores. As far as I know the Oracle license agreement forbids publishing performance numbers, so I'll probably never see this, but it is a curiosity of mine, because sharding a database can make application development significantly more complex.
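To illustrate why sharding complicates application development, here is a minimal sketch (all names hypothetical): once a table is split across shards, the application has to route every query itself, and cross-shard operations like aggregates no longer come for free.

```python
# Hypothetical shard-routing sketch. With a single server, the app just
# issues SQL; with shards, it must pick a backend per query and fan out
# (then merge) anything that spans shards.

NUM_SHARDS = 4
SHARD_DSNS = [f"mysql-shard-{i}.internal" for i in range(NUM_SHARDS)]

def shard_for(user_id: int) -> str:
    """Pick a shard by hashing the sharding key (here, simple modulo)."""
    return SHARD_DSNS[user_id % NUM_SHARDS]

def fetch_user_sql(user_id: int) -> tuple[str, str]:
    # Single-shard lookup: the easy case.
    return shard_for(user_id), f"SELECT * FROM users WHERE id = {user_id}"

def count_all_users_sql() -> list[tuple[str, str]]:
    # Cross-shard aggregate: the app must query every shard and sum
    # the results itself -- logic a single unsharded database gives you
    # in one statement.
    return [(dsn, "SELECT COUNT(*) FROM users") for dsn in SHARD_DSNS]
```

Real setups also have to deal with resharding, cross-shard joins, and distributed transactions, which is where most of the pain lives.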
It is nice though that the latest versions of MySQL can scale beyond four cores.
The best word I can come up with when I saw this was
What I'm talking about is the announcement of the Black Diamond X-Series from my favorite switching company Extreme Networks. I have been hearing a lot about other switching companies coming out with new next gen 10 GbE and 40GbE switches, more than one using Broadcom chips (which Extreme uses as well), so have been patiently awaiting their announcements.
I don't have a lot to say, so I'll let the specs do the talking:
- 14.5 U
- 20 Tbps switching fabric (up ~4x from previous models)
- 1.2 Tbps fabric per line slot (up ~10x from previous models)
- 2,304 line rate 10GbE ports per rack (5 watts per port) (768 line rate per chassis)
- 576 line rate 40GbE ports per rack (192 line rate per chassis)
- Built in support to switch up to 128,000 virtual machines using their VEPA/ Direct Attach system
This was fascinating to me:
Ultra high scalability is enabled by an industry-leading fabric design with an orthogonal direct mating system between I/O modules and fabric modules, which eliminates the performance bottleneck of pure backplane or midplane designs.
I was expecting their next gen platform to be a midplane design like that of the Black Diamond 20808; their previous high density 10GbE enterprise switch, the Black Diamond 8800 (originally released about six years ago), was by contrast a backplane design. The physical resemblance to Arista Networks' chassis switches is remarkable. I would like to see this direct mating system in a diagram of some kind to get a better idea of what this new design is.
To put that port density into some perspective: their older system (the Black Diamond 8800) has an option to use Mini RJ21 adapters to achieve 768 1GbE ports in a chassis (14U), so an extra inch of space gets you the same number of ports running at ten times the speed, and at line rate (the 768x1GbE is not quite line rate, but still damn fast). It's the only way to fit so many copper ports in such a small space.
It seems they have phased out the Black Diamond 10808 (I deployed a pair of these several years ago; it was first released in 2003), the Black Diamond 12804C (first released around 2007), the Black Diamond 12804R (also released around 2007), and the Black Diamond 20808. That last one is kind of surprising given how recent it was (I think it was released around 2009), though of course it didn't have anything approaching this level of performance. They also finally seem to have dropped the really ancient Alpine series (10+ year old technology) as well.
Also, they seem to have announced a new high density stackable 10GbE switch, the Summit X670, the successor to the X650, which was already an outstanding product offering several features that until recently nobody else in the market was providing.
- 1.28 Tbps switching fabric (roughly double that of the X650)
- 48 x 10Gbps line rate standard (64 x 10Gbps max)
- 4 x 40Gbps line rate (or 16 x 10Gbps)
- Long distance stacking support (up to 40 kilometers)
From purely a port configuration standpoint, the X670 looks similar to some other recently announced products from other companies, like Arista and Force10, both of whom are using the Broadcom Trident+ chipset; I assume Extreme is using the same. These days, with so many manufacturers using the same underlying hardware, you have to differentiate yourself in software, which is really what drives me to Extreme more than anything else: their Linux-based, easy-to-use Extremeware XOS operating system.
Neither of these products appears to be shipping yet; I'm not sure when they might ship, maybe sometime in Q3.
40GbE has taken longer than I expected to finalize, they were one of the first to demonstrate 40GbE at Interop Las Vegas last year, but the parts have yet to ship (or if they have the web site is not updated).
For the most part, the number of companies that are able to drive even 10% of the performance of these new lines of networking products is really tiny. But the peace of mind that comes with everything being line rate, really is worth something !
x86 or ASIC? I'm sure performance boosts like the ones offered here pretty much guarantee that x86 (or any general purpose CPU for that matter) will not be driving high speed networking for a very long time to come.
Myself, I am not yet sold on this emerging trend in the networking industry of trying to drive everything toward massive layer 2 domains. I still love me some ESRP! I think part of it has to do with selling the public on getting rid of STP. I haven't used STP in 7+ years, so not using any form of STP is nothing new for me!
Came across an article from a friend that talks about how Sony thinks they were compromised.
According to Spafford, security experts monitoring open Internet forums learned months ago that Sony was using outdated versions of the Apache Web server software, which "was unpatched and had no firewall installed."
The firewall part is what gets me. Assuming of course these web servers were meant to be public, no firewall is going to protect you against this sort of thing, since firewalls protecting public web servers have holes opened explicitly for the web server, so all web traffic is passed right through.
Then there are people out there spouting stuff about PCI, saying the automated external scans should have detected that they were running outdated versions of software. In my experience such scans are really not worth much with Linux, primarily because they have no way to take into account patches that are backported into the operating system. I've had a few arguments with security scanning folks, trying to explain that a system is patched because the fix was backported, but they couldn't comprehend that, because the major/minor version being reported by the server had not changed.
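A toy illustration of why that argument keeps happening (CVE number, versions, and changelog entry are all hypothetical): the scanner only sees the advertised version banner, while the distro ships the fix as a backport, so the banner never changes.

```python
# Hypothetical example: banner-based scanning vs. backported fixes.
FIXED_IN = "2.2.15"  # upstream version the scanner knows contains the fix

def naive_scanner(banner_version: str) -> bool:
    """Flags a server as vulnerable purely from the version it advertises."""
    def as_tuple(v: str):
        return tuple(int(x) for x in v.split("."))
    return as_tuple(banner_version) < as_tuple(FIXED_IN)

# What the distro actually did: backported the patch without bumping
# the version, recording it only in the package changelog.
package_changelog = ["fix CVE-2009-XXXX (backported from 2.2.15)"]

def actually_vulnerable(changelog: list[str]) -> bool:
    return not any("CVE-2009-XXXX" in entry for entry in changelog)

print(naive_scanner("2.2.3"))                  # True: scanner raises an alert
print(actually_vulnerable(package_changelog))  # False: system is patched
```

Same server, two opposite answers, which is exactly the argument you end up having with the scanning vendor.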
Then there was the company I worked for that had a web app that returned an HTTP 200 for pretty much everything, including things like 404s. This tripped every single alarm the scanners had, and they went nuts. And once again we had to explain that those Windows exploits weren't going to work against our Apache Tomcat systems running on Linux.
IDS and IPS are overrated as well, unless you really have the staff to watch and manage them full time. In all of the years I have worked at companies that deployed some sort of IDS (never IPS), I have seen it work exactly once, back in, I want to say, 2002: I saw a dramatic upsurge in some type of traffic on our Snort IDS from one particular host, and it turned out to have a virus on it. I worked at one company that was compromised at LEAST twice while I was there (on systems that weren't being properly managed), and of course the IDS never detected a thing. Then that company deployed (after I left) a higher end hardware-based IPS, and when they put it inline to the network (in passive, not enforcing, mode) the IPS started dropping all SSL traffic for no reason.
They aren't completely useless though; they can help detect, and sometimes protect against, the more obvious types of attacks (SQL injection etc.). But in the grand scheme of things, especially when dealing with customized applications (not off the shelf like Exchange, Oracle or whatever), IDS/IPS and even firewalls provide only a tiny layer of additional security on top of good application design, good deployment practices (e.g. don't run as root; disable or remove subsystems that are not used, such as the management app in Tomcat; use encryption where possible), and a good authentication system for system level access (e.g. ssh keys). With regards to web applications, a good load balancer is more than adequate to protect the vast majority of applications out there. It is "firewall like" in that it only passes certain ports to the back end systems, but (and for higher traffic sites this is important) it vastly outperforms firewalls, which can be a massive bottleneck for front end systems.
With regards to the company that was compromised at least twice: the intrusion was minor and limited to a single system. The compromise occurred because the engineer who installed the system put it outside of the load balancers. It was an FTP server, or maybe a monitoring server, I forget. Because it needed to be accessed externally, the engineer thought, hey, let's just put it on the internet. Well, it sat there for a good year or two (never being patched in the meantime) before I joined the company, was compromised in some fashion, and ssh was replaced with a trojaned copy (it was pretty obvious; I am assuming it was some sort of worm exploiting ssh). It had all sorts of services running on it. I removed the trojaned ssh and asked the engineer if he thought there might be an issue; he said he didn't believe so. So I left it, until a few weeks later the trojaned ssh came back. At that point I shut the ethernet interfaces on the box off until it could be retired. There was no technical reason that it could not run behind the load balancer.
If you really need a front end firewall, consider a load balancer that has such functionality built in, because at least then you have the ability to decrypt incoming SSL traffic and examine it, something very few firewall or IDS/IPS systems can do. (Another approach some people use is to decrypt at the load balancer and then mirror the decrypted traffic to the IDS/IPS, but that is less secure of course.)
It really does kind of scare me though that people seem to blindly associate a firewall with security, especially when what's being protected is a public web server. Now, if those web servers were running RPC services and were hacked that way, a firewall very likely could have helped.
At one company I worked at, my boss insisted we have firewalls in front of our load balancers; I couldn't convince him otherwise, so we deployed them. And they worked fine (for the most part). But the configuration wasn't really useful at all: basically we had a hole open in the firewall that pointed to the load balancer, which then pointed to the back end systems. So the firewall wasn't protecting anything that the load balancer wasn't protecting already, a needless layer of complexity that didn't benefit anyone.
Myself, I'm not convinced they were compromised via an Apache web server exploit. Maybe they were compromised via an application running on top of Apache, but these days it's really rare to break into any web server directly through the web server software (whether it's Apache, IIS or whatever). I suspect they still don't really know how they were compromised, and some manager at Sony pointed to that outdated software as the cause just so they could complete their internal root cause process and move on. Find something to tell Congress, anything that sounds reasonable!!
I was out of town for most of last week so didn't happen to catch this bit of news that came out.
It seems shortly after Facebook released their server/data center designs Microsoft has done the same.
I have to admit, when I first heard of the Facebook design I was interested, but once I saw it I felt let down. I mean, is that the best they could come up with? There are market-based solutions that seem vastly superior to what Facebook designed themselves. Facebook did good by releasing in-depth technical information, but the reality is only a tiny number of organizations would ever think about attempting to replicate this kind of setup. So it's more for the press/geek factor than being something practical.
I attended a Datacenter Dynamics conference about a year ago, where the most interesting thing I saw was a talk by a Microsoft guy about their data center designs, focused heavily on their new(ish) "IT PAC". I was really blown away. Not much Microsoft does blows me away, but consider me blown away by this. It was (and still is) by far the most innovative data center design I have ever seen, myself at least. Assuming it works, of course; at the time the guy said there were still some kinks they were working out, and it wasn't deployed on any wide scale at that point. I've heard through the grapevine that Microsoft has been deploying them here and there in a couple of facilities in the Seattle area. No idea how many, though.
Anyways, back to the Microsoft server design, I commented last year on the concept of using rack level batteries and DC power distribution as another approach to server power requirements, rather than the approach that Google and some others have taken which involve server-based UPSs and server based power supplies (which seem much less efficient).
Add to that rack-based cooling (or in Microsoft's case, container-based cooling), a la the SGI CloudRack C2/X2, and Microsoft's extremely innovative IT PAC containers, and you've got yourself a really bad ass data center. Microsoft seems to borrow heavily from the CloudRack design, enhancing it even further; the biggest update would be the power system, with the rack level UPS and 480V distribution. I don't know of any commercial co-location data centers that offer 480V to the cabinets, but when you're building your own facilities you can go to the ends of the earth to improve efficiency.
Microsoft's design permits up to 96 dual socket servers(2 per rack unit) each with 8 memory slots in a single 57U rack (the super tall rack is due to the height of the container). This compares to the CloudRack C2 which fits 76 dual socket servers in a 42U rack (38U of it used for servers).
My only question on Microsoft's design is their mention of "top of rack switches". I've never been a fan of top of rack switches myself; I have always preferred to have switches in the middle of the rack, which is better for cable management (half of the cables go up, the other half go down), especially when we are talking about 96 servers in one rack. Maybe it's just a term they are using to describe the class of switch, though there is a diagram that shows the switches positioned at the top of the rack.
I am also curious about their power usage: they say they aim for 40-60 watts per server, which seems impossibly low for a dual socket system, so they have likely done some work to figure out optimal performance based on system load, and probably never run the systems anywhere near peak capacity.
Having 96 servers consume only 16kW of power is incredibly impressive though.
I have to give mad, mad, absolutely insanely mad props to Microsoft. Something I've never done before.
Facebook - 180 servers in 7 racks (6 server racks + 1 UPS rack)
Microsoft - 630 servers in 7 racks
Density is critical to any large scale deployment, there are limits to how dense you can practically go before the costs are too high to justify it. Microsoft has gone about as far as is achievable given current technology to accomplish this.
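The per-rack math behind those two figures works out like this:

```python
# Density comparison from the published figures.
facebook_servers, facebook_racks = 180, 7   # 6 server racks + 1 UPS rack
microsoft_servers, microsoft_racks = 630, 7

print(facebook_servers / facebook_racks)    # roughly 25.7 servers per rack
print(microsoft_servers / microsoft_racks)  # 90 servers per rack
print(microsoft_servers / facebook_servers) # 3.5x the servers, same footprint
```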
Here is another link where Microsoft provides a couple of interesting PDFs, the first one I believe is written by the same guy that gave the Microsoft briefing at the conference I was at last year.
(As a side note I have removed Scott from the blog since he doesn't have time to contribute any more)