Tag Archives: standards

The all-enveloping category already known as the Internet of Things is fast becoming labelled one of the fastest-growing threats to emerge on our communal radar. It presents a disorganised chaos of hard-to-own, hard-to-manage assets, delivering potentially the biggest and most arduous-to-secure threat fabric ever seen in information technology.

Come with me on a trip as we turn the hands of time back to 1934, and visualise a classification system still in use today, based on decisions made over eighty years ago when technology and communications were in their infancy. Let's explore why failing to act in the near future will destroy the promise of IoT before it has a chance to get started.

Attacks such as those seen on OVH, Brian Krebs and now DynDNS are just the very start. Without standardisation, collaboration and concerted effort, we may be judged in years to come as having squandered the opportunity to deliver flexible, safe computing for the next generation.

In the 1920s and 1930s, communications firms, telegraph companies and emerging radio broadcasters globally were all finding their communal feet. Existing commerce legislation dating back to the late 1880s, in the form of the ICC (the Interstate Commerce Commission), was originally conceived to check the ambitions of companies across America seeking to monopolise the railroads, and to provide regulations on their running and operation.

By the 1930s the ICC did not scale to take into account a technological revolution that, even in its embryonic state, showed the promise of making the world smaller overnight. The Post Office with its telegraph rights, the ICC, and the Federal Radio Commission were all wrapped up into one body: so in 1934 the Federal Communications Commission, the FCC, was born. North America suddenly had an all-encompassing standards body, one that would blaze a trail for countries all over the world to emulate.

Turn over any piece of computer equipment sold in the US or built for export (so, effectively, globally) and you will see an FCC rating; the FCC even today remains the standards body responsible for standards implementation within its public safety and enforcement role, eighty-two years after its inception.

Since then we’ve seen other standards bodies and marks: CE from the European Commission, signifying its “New Approach” methodology for device manufacture and testing; UL from Underwriters Laboratories; the EMC standards for electromagnetic devices; and the EN standards around electrical manufacture and quality assurance. Other countries such as Australia and Canada have their own systems too. The FCC, though, has stood the test of time for everything from radio broadcast and spectrum management to the emergence of telecommunications standards. Yet when it comes to the Internet of Things, its legs fall off.

The majority of the devices thought of as making up the IoT global estate are cheaply manufactured: mass-produced, dime-a-dozen, fast-to-market appliances, many mimicking more expensive products and playing catch-up with cloned or copied designs. Many use commodity small-footprint operating systems that may once have found their basis in Linux and open source communities, but have been sufficiently bastardised and forked into an unsupportable image to fit the commodity hardware or storage footprint allowed for device operation.

Many of these devices are not connected to always-on networks and have no way to auto-update (even where the supplier provides that capability). Many ship with default passwords that users then fail to change. Many that are connected to IPv4 networks have never had updates, or the updates fail to address the serious underlying security weaknesses in the versions of SSH, or the promiscuous daemons and services, running on them. These core issues are what we have seen exploited over the last month: first the attack that took Brian Krebs's Akamai-hosted site offline, then round two hitting OVH, Freenode and others, and then last Friday, when DynDNS was all but taken offline in the latest iteration of this now almost predictable rush to launch DDoS attacks with huge impact on internet real estate.

And nowhere is there an applicable standards body or international body that polices devices prior to shipping, or at the point of manufacture, to ensure device hardening. Failures to prove out security concepts and issues with the developer or manufacturer are ever present. If this concerned the manufacture of vehicle airbags or child seats, the issue would have been addressed long ago.

What we have are companies rushing to get devices to market for consumer consumption. Google with its acquired Nest and Home products, and Amazon with Echo, are probably at the higher end of the marketplace, appreciating that their users are often not that conversant with securing anything they buy. Certainly devices from manufacturers such as Eurotech, Evrythng and others embrace the use of PKI and TLS and have got authentication right at the point of design and provisioning. It isn't those we need to address, but the plethora of companies from China to Malaysia, from Hungary to Thailand, shipping devices with dated, aged, insecure OS platforms. These are almost always given less thought than the packaging they ship in or the plastics used in manufacture: aged versions of BusyBox, twelve-year-old SSH vulnerabilities, and services turned on that should never be listening.

The IPSO Alliance, the Industrial Internet Consortium, FIWARE, OpenDaylight IoTDM, Hypercat, the AllSeen Alliance: I could go on, but I won't. The list of “standards bodies” and consortia grows almost by the month. The one thing that is for sure is that they have one thing in common: none of them is relevant to, or making any impact on, the root problem.

For anything to have real impact, it has to happen at the import/export level, with government- and industry-backed assistance to make it happen. Turning the clock back to 1934 and working with the relevant governments and agencies will be the only way of enforcing change on commodity hardware vendors who care about the units shipped, not the units hacked.

We're blessed at Red Hat to have had the intelligence to capture Jason Brooks from eWeek. Jason is a stalwart of technology, a bedrock of intelligently and astutely written technology critique since 1999, so he's seen technology grow, change our abilities and stretch our ambitions over the last decade or more. You've probably all read articles he's written, or discussions he's kick-started, without even knowing it.

I've wanted to do a podcast with him for an age, and last week, before I disappeared off to remote areas of the world where technology simply hasn't delivered stable internet, I recorded this. Apologies for some of the dropouts: there were gremlins at work at Google as we recorded. It's still perfectly audible.

Enjoy - it's insightful and might be beneficial to you.

Download the podcast here in MP3 and OGG formats

I was at GigaOM in Holland last week and got involved in a heated discussion over ITIL as a standard in cloud. I tried to point out that ITIL is a framework, not a standard. I took mental notes while I was there, and the result is this short podcast where I can rant and let off steam. The power of the microphone is sometimes awesome, and it lets me educate as well as show my enthusiasm for what we're doing here in Cloud.

I also take time out to talk about the London Developer Day we're hosting at London South Bank University on the 1st November (that's next week!), so if you haven't registered you need to do so right now.

Download the podcast here in MP3 and OGG formats


For those of you who've known me or my work for the last decade or more, you'll appreciate that one of my main calls to arms is security, and in particular the enforcement of security-enforcing technologies at the gateway and application level. My little hobby (developing, publishing and supporting a firewall technology which, with variants based on the code, reached millions of homes, offices and enterprises across the globe) allowed me to make a career out of security.

So it's a question I often get asked at conferences when speaking about security enforcement and responsibility in the Cloud and virtualisation arena. Fortunately, at Red Hat we take security incredibly seriously: we have contributed technologies such as SELinux and sVirt into our architectures and the supported versions of our releases, and we employ the mainstays of the SELinux world to ensure continuity and to see those folk rewarded for their efforts.

However, to put it bluntly, most architects and network guys turn SELinux off when building out platforms and virtualised instances, which is quite short-sighted. When I pose the question why, a lot of the responses come down to configuration issues and past experiences where stuff broke and was hard to diagnose, so it was easier to just turn it off.

Let's be blunt: it's there to help you. It's a free, secure, template-based technology, so turning it off when you haven't got a full toolkit of other security hardening in your build schema or your platform is at best short-sighted. Did I say it was free? In this current credit-crunch culture, can you justify not looking at using it?

If you're concerned, or you've struggled before, then enable it in permissive mode in the first instance, making sure you make the relevant modifications to /etc/sysconfig/selinux so the setting persists across reboots. Simple boolean logic is the best (and easiest) way to start experimenting with the functionality you want to add. Then, if you want to know more, search for audit2allow; and remember, if you're worried about restrictive AVC denials breaking stuff, a quick search through the auditd log in /var/log/audit/audit.log will show you what was denied, and aureport is your friend. There are loads of howtos available, and if you're thinking about large-scale SELinux use in anger, Red Hat even has a course to upgrade your RHCE and give you a complete comfort blanket in your own capabilities. It's part of the assurance and certification model we bring to the whole Linux piece. Belt and braces, if you will.
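The permissive-then-audit workflow above can be sketched in shell. This is a hedged sketch assuming a Red Hat style box: the AVC line below is an illustrative sample in the standard audit.log format, not output from a real machine, and the sed parsing is just a toy stand-in for what audit2allow and aureport do properly.

```shell
# Step 1 (illustrative only, run as root): switch to permissive mode now,
# and persist it across reboots in /etc/sysconfig/selinux:
#   setenforce 0
#   sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/sysconfig/selinux

# Step 2: in permissive mode nothing is blocked, but auditd still records
# AVC denials in /var/log/audit/audit.log as lines like this (sample data):
avc_line='type=AVC msg=audit(1318853409.123:101): avc:  denied  { read } for  pid=1234 comm="httpd" name="index.html" scontext=system_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file'

# Step 3: pull out what was denied and which domain tried it; this is the
# information audit2allow and aureport summarise for you:
perm=$(printf '%s\n' "$avc_line" | sed -n 's/.*denied[ ]*{[ ]*\([a-z_]*\)[ ]*}.*/\1/p')
domain=$(printf '%s\n' "$avc_line" | sed -n 's/.*scontext=[^:]*:[^:]*:\([a-z_]*\):.*/\1/p')
echo "$domain was denied $perm"
```

In practice you wouldn't parse lines by hand: feed the real log straight to the tools, e.g. `audit2allow -a` to generate a candidate policy module, or `aureport --avc` for a summary of denials.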

Now, this article really isn't a security masterclass or an SELinux howto. I'm actually more interested in getting to grips with culture change, and in passing on my thoughts on how we gain traction in influencing how protecting your assets, your data and your reputation in Cloud can take shape.

Over the last three years I've been using what I would describe as an almost military approach to building out legacy platforms, be they physical or virtual. In days of old, people might remember Jay Beale and his Bastille Linux hardening script, which was a great starting point when building simple Linux stacks. I remember vividly when he posted it to the newsgroups and Slashdot picked up on it. It represented, really for the first time in the Linux open source community, someone taking a simple exercise and making security mainstream as a standard rather than a retrofit. It enabled many of us not only to run it but to get under the hood and find out how it worked. What is it they say: "a little bit of knowledge is a dangerous thing"?

So as we move into provisioning our Cloud environments across one or multiple hypervisor types, or moving applications into hybrid or public Cloud, having that "accreditation" process or controls breakdown is invaluable. Mine runs over about five tabs of a spreadsheet and would make most auditors feel out of a job. Maybe my way of keeping a living spreadsheet of controls, built up over time from all the certifications and governance regimes I've had to deploy to (including NATO battlefield-accredited, above-classified environments), is going a bit far for standard run-of-the-mill server environments.

So it's fortunate that my friends and fellow members of the Cloud Security Alliance started, many moons ago, to put together an authoritative set of controls to let you get to work now, building out your platforms or engaging with a Cloud provider, regardless of how aggressive your certification or audit model is. The controls are designed to get you out of the blocks building Cloud platforms that need to meet the regulations around ISO 27001/27002, ISACA COBIT, PCI, NIST, Jericho Forum and NERC CIP. Let's not mention SAS 70: I still do not understand, and believe me I've tried, why an accounting standard has ANY place in Cloud service provision. The CCM will help you there, and you can also take a look at the CSA STAR programme while you're at it.

I've mentioned the Cloud Security Alliance here numerous times before (let's call them the CSA from now on). The CSA is one of the most critical building blocks of the Cloud community, and Jim Reavis and the steering members of the CSA have made the education and communication of security best practices to the community their ethos and commitment since they were founded. Red Hat supports the CSA, and if you've heard me talk you'll have heard me mention them, proudly and regularly.

Shortly I am recording an often re-arranged podcast with Jim Reavis of the CSA, and we'll get that out to you as fast as I can mix it, in the coming days and weeks.

Whether you're playing with Cloud in your dev/test sandpit or migrating to a hybrid cloud, understanding the part reputation plays in protecting your app-dev environment and your underlying transportation of data is critical. Reputations are lost in minutes, as are share prices, when a company is seen as damaged by data loss. Simple breaches of major household-name organisations are often met with lax fines and investigation by sovereign-territory governments and information commissioners, yet the risk factors involved are enormous. At the back end of the application architecture, in the trenches, are the technical guys who have to turn the dreams and aspirations of sales people and marketing types into the portals and customer-facing Cloud-hosted environments that will generate the revenue. If we arm you to do your job better, and to do it in a way that allows controlled, repeatable growth of your platforms and your Cloud aspirations, then that's a good thing, right?

Do visit the CCM matrices today and learn how they help you go to work in ways that will make your auditor despair. It's kinda cool, actually, because auditing Cloud, and especially follow-the-sun datacentre clouds, has always been a dark art. Follow this article and my advice and you'll actually have a retort to that argument. Cut a huge percentage out of your auditor's workload (and their resulting invoice) by owning the moral high ground, and in the process maybe think about turning SELinux back on. Blended use of SELinux, sVirt, supported certified Red Hat subscriptions and technology such as CloudForms gives you everything you need from an IaaS perspective today to go to work. If PaaS security is your thing, then listen out soon for another podcast I'm going to record with Tim Kramer of the OpenShift team (and in fact, if you haven't already, go and read Tim's great security post here).

I'm also promised a security podcast with Mark Cox at some point in the coming month, so if security is your thing you're going to be kept busy listening to me warble down your earbuds about everything related to CloudSec. If you think more people could benefit from a primer in Cloud security deployment and the need to think outside the box, then share this article; I appreciate every Twitter mention I get if it helps educate another Linux user on how to do things better.

Then get to the CSA website and join. It costs nothing, and you'll learn a lot if you're an active participant. Tell them I sent you.