Newbie Corner – OpenStack Contribution

UPDATE: As part of the celebrations for OpenStack's 5th Birthday, Angus Lees and I organised a 'Newbie OpenStack Contribution' workshop. Angus kindly shared the slides from the presentation – take a look at this excellent guide for new contributors. Check our Australian OpenStack User Group Meetup site for photos from the night.

I've been promising to run a newbie workshop for the Australian OpenStack User Group (AOSUG) on "Introduction to OpenStack Contribution" for a while now, along with my AOSUG colleague Angus Lees (previously a Google IP network developer, Rackspace and OpenStack Neutron contributor, and Docker and OpenStack container guru extraordinaire).

The best way to learn is not simply reading and trying things yourself, nor is it just asking questions online (although all of these are great!). Just like in school, the best way to learn something new is to get together with a bunch of people and help each other, follow a set of processes and work through some problems together. An OpenStack User Group is just the place for that!

As a way to kick off, I thought it best to gather my notes, links and thoughts here. Now, first caveat: I'm no programmer-superstar. My mate Angus, yes; me, I'm a hack! I'm one of those strange people that just likes to figure out how everything works. I've figured out some basics, but am still learning. So who better to guide you? Now, allow me to let you in on some of the basics that (I think) I've figured out about OpenStack Contribution…

This is not an OpenStack introduction or a beginners' "What is OpenStack?" If you're asking those questions, that is best addressed first off the mark! Secondly, why contribute? Well, apart from the warm fuzzy feeling of participating in a project that will save all of mankind, and many other altruistic reasons, some outcomes are directly beneficial to you: the ability to create code and documentation that benefit your own project or organisation, as well as technical self-development and career development. Of course these all go hand-in-hand with creating a better outcome for everyone involved in the OpenStack project too. Share and share alike!

The Lions peaks (British Columbia)

Basics To Begin With

For starters, we’re assuming that you have at least a concept of what most of these mean: git, gerrit, sed, vim, emacs, python, bugs, blueprints, review, workflow. If you don’t, click the links to find out more!

If you don't have a working knowledge of all of these, then you need to be at least willing to invest a little time to learn! You don't need to be an expert, but it pays to have a fairly solid knowledge of Linux bash, git and some sort of terminal dev environment such as vim or emacs, as well as the overall OpenStack development workflow. You can always put in more time later to learn more vim and git magic! You should also be comfortable having many of these components running on your Linux system. Whilst OpenStack can be contributed to from any system with the right setup, that configuration is out of scope for this guide. Most guides assume you are running Ubuntu Linux locally, or at least have it running in a VM or cloud resource that you do your development from. If you are using OS X or Windows, then please look elsewhere for guidance.

BC Mountains - Copyright Marten Hauville

One of the most complicated aspects for the newbie OpenStack developer is the setup. Sure, you may be keen to get cutting some Python code, adding your input to some document changes, perhaps weighing in on some Blueprints or commenting on some code… but "whoa, hold up there partner!" Let's get some things straight: first you need a bunch of setup, plus a basic understanding of the whole fairly complex, mind-bending OpenStack development workflow and the general aspects of open source code contribution.

Documentation Contribution

Even if you just want to participate in documentation contribution, you need to understand the development process for OpenStack. Why? Because even the OpenStack Documentation projects require a knowledge of the dev process, including git and gerrit, as well as some Documentation Project specific tools such as Oxygen and tox.
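
To make that concrete, here is a minimal sketch of grabbing a docs repository and running its checks. It assumes the openstack-manuals repo on the git.openstack.org mirror and the tox environment names that repo used at the time; check the Documentation HowTo and the repo's tox.ini, as environment names vary between repos:

# clone the docs repo and run its checks (repo URL and tox env names are assumptions)
git clone https://git.openstack.org/openstack/openstack-manuals
cd openstack-manuals
sudo pip install tox
# checkniceness was one of the typical envs; checksyntax and checkbuild were others
tox -e checkniceness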

Essentially, whether you want to contribute to the OpenStack Documentation or towards actual OpenStack code, bug triage or reviews, the best way to start is on the Documentation Project. Not only is the OpenStack Documentation Project tightly coupled with the OpenStack code development process, but the Documentation contributors understand it is a common place to start, so they can be a little more lenient and helpful towards the newbie OpenStack contributor. There is an excellent guide that jumps straight into specific OpenStack code contribution, but it assumes some knowledge of the workflow, testing and gating process, which we're assuming you're not experienced with just yet.

With that in mind, following are some links on both OpenStack Contribution as a Developer and contribution towards the Documentation Project. Read them all!

OpenStack Contribution Link Swarm

OpenStack Docs Project
OpenStack Documentation HowTo
How to Contribute to OpenStack
Developer’s Guide – Getting Started
Contribute to OpenStack Documentation – Video Walkthrough
OpenStack Development and Contribution Workflow – Video
How to Contribute If You’re a Developer – OpenStack Wiki
OpenStack Documentation Source and Target Locations
Editing DocBook with vim

Chief (RP)1

Getting Started with OpenStack Contribution

The following list is taken from the Developers Getting Started and Documentation HowTo for First Timers guides, both of which should be reviewed; they contain excellent, complementary introductory setup and basic first steps. In summary, the basic steps to begin contributing to OpenStack (the same setup applies to code and documentation contribution) are below, with a command-line sketch of steps 6 to 10 following the list:

  1. Register as a Foundation Member ("Individual Contributor" is the best way to start)
  2. Agree to “Contributor License Agreement”
  3. Setup your Launchpad account
  4. Create and configure your Private/Public Key Pair for Launchpad
  5. Setup and Configure your local dev environment (could be cloud or VM based dev environment, but we’ll just assume and call it “local” no matter what you’re using)
  6. Clone a repo
  7. Checkout a bug
  8. Contribute, make comments, include commit message
  9. Commit change
  10. Send to gerrit for review with git review
  11. Check, follow and action any further review requirements in gerrit/Launchpad
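
Here is that command-line sketch of steps 6 to 10. The repository, branch name and bug number are placeholders, and this assumes git-review is already installed and configured:

# step 6: clone a repo (openstack-manuals used as an example)
git clone https://git.openstack.org/openstack/openstack-manuals
cd openstack-manuals
# step 7: create a topic branch for the bug you picked up in Launchpad
git checkout -b bug/1234567
# steps 8-9: make your changes, then commit with a good message
# (reference the bug, e.g. with a "Closes-Bug: #1234567" footer)
git add -A
git commit
# step 10: push the change to gerrit for review
git review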

Trips 06 - Coliseum Mtn Hike - 01 - Seymour Lk (249339943)

SSH Key Tips

The biggest challenge can be troubleshooting a private/public key mismatch between your local setup and the SSH key configured in Launchpad. It is recommended to create a specific key pair for OpenStack development that you use with both Launchpad and gerrit. With this in mind, be sure to configure your .ssh/config file, replacing the host section with the correct host, based on the "Using Custom SSH Key" section in the Launchpad SSH Key Pair Guide.
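
As an illustration, a minimal ~/.ssh/config entry along those lines might look like the following; the username and key filename are placeholders, and the host shown is Launchpad's bazaar endpoint, so swap in whichever host the guide specifies:

# ~/.ssh/config: use the dedicated OpenStack key only for this host
Host bazaar.launchpad.net
    User your_launchpad_id
    IdentityFile ~/.ssh/openstack_rsa
    IdentitiesOnly yes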

Additionally, if you or your organisation are using OpenStack, at some stage you will need to leverage the OpenStack APIs. In particular, you ideally want your apps to leverage the cloud capabilities of OpenStack, rather than just scripting or orchestrating workloads on top of it. It is this approach that really unlocks the capabilities of cloud and OpenStack to their fullest potential. If you're a Python programmer, the Python command-line tools, API bindings and SDK are arguably much simpler and more powerful to work with than the raw REST API. It should be obvious that improving your Python skills, to not only debug or develop OpenStack code but also leverage the capabilities of the Python API bindings and SDK, enables you to apply your OpenStack skills in a far more powerful way.

In fact, the most powerful (and possibly least leveraged) component of OpenStack is the APIs. Not only are they a very powerful component, they offer the most versatility and capability. The Horizon user interface and the CLI do not in fact expose as much deep capability as the APIs themselves! Clearly, if you or your organisation are using OpenStack, leveraging the Python command-line client, or better still the API via the Python SDK, is the way to go.
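
As a quick taste of that command-line power, here is a minimal sketch assuming the unified python-openstackclient and a credentials file downloaded from Horizon (the openrc.sh filename is illustrative):

# install the unified OpenStack command-line client
sudo pip install python-openstackclient
# load your tenant credentials into the environment
source openrc.sh
# list your instances, driven by the same APIs the SDK exposes
openstack server list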

Another beautiful BC Mountain - Copyright Marten Hauville

Further OpenStack Developer Information

OpenStack Lists – for our purposes, the best ones to subscribe to are Community & OpenStack-docs
OpenStack Branch Model
Learn Gerrit Workflow in Sandbox
OpenStack Bugs – Wiki
OpenStack Contributor Documentation
OpenStack Python SDK

Addendum – Python Programming

If you want to take things further and get right into developing and contributing OpenStack code, start getting stuck into Python.

Python Beginner Programming Link Swarm

Python for Non Programmers – Books, Links & Tutorials
Python for Beginners – Getting Started
Best Way to Teach a Beginner to Program
Learning Python Guide
Best books/courses for learning Python – Quora
Learn Python
Learn to Program in Python – Codecademy
Learn Python The Hard Way by Zed Shaw
Dive Into Python 3 by Mark Pilgrim
Python also has a strong set of introspection features as discussed in this StackOverflow answer
Go through the Learn to Program Coursera classes Learn to Program: The Fundamentals and Learn to Program: Crafting Quality Code by Jennifer Campbell and Paul Gries
The Python Tutorial found in the Python Documentation
O’Reilly Python ebooks
Python Essential Reference by David Beazley


All images are courtesy of Wikimedia Commons, except where noted are Copyright Marten Hauville. They are all taken from the beautiful mountains surrounding Vancouver, in British Columbia; and are in recognition of the recent OpenStack Vancouver Summit held there.

Posted in OpenStack

Software Defined Data Centre

SDDC Stacks – HP, Red Hat, VMware, OpenStack, Avaya and Cisco are all in on it. They've all got one: the Software Defined Data Centre stack that is the talk of the town. The term "software defined" was coined a few years ago by VMware marketing and has since been adopted as the new "nom du jour" for the future of the Data Centre.

Tamme-Lauri tamm suvepäeval

But what is it? What does it mean? More importantly, the question on all CIOs' lips is: what about SDDC do I need to be aware of in relation to my current DC strategy (and existing infrastructure sunk costs)?


What is SDDC?

“The overarching vision of SDDC (and it is still very much a vision at this stage) is that control of the entire data centre is abstracted from the underlying hardware, with all physical and virtual resources made visible via software programs so that the provision of these resources may be done automatically or by programming according to the immediate needs of the applications and services running in the data centre.”

Essentially, SDDC applies to the Data Centre a similar abstraction to that which SDN applies to the network: the physical separation of the network control plane from the forwarding plane, where one control plane controls several devices. In fact, the control plane is the most important Software Defined Data Centre decision for the enterprise: how the enterprise CIO assesses their data centre strategy and cloud management today, and their future together, will dictate major financial outcomes, both write-offs and future costs. Arguably the control plane will define SDDC success, and this correlation between SDN and SDDC goes beyond definition alone. This article hopes to uncover why CIOs ought to pay careful attention to DC strategic decisions, since SDN and SDDC are more tightly linked than they may realise.

SDDC reality

Much of the required progress towards the future of SDDC has been achieved in the Data Centre through virtualisation of compute (hypervisor), storage and API automation. We will explore how SDN and deeper Data Centre software enablement will define the SDDC:

Historical Data Centre

Physical equipment, including servers, storage arrays, routers/switches and Core/Edge/Distribution separation, with management and control by network/security teams. Requires high touch and a vast array of technical skill-sets to maintain, creating intensive labour costs.

Future Software Defined Data Centre

Ubiquitous and instant IT as a service, with centralised policy based management across all facets (compute, storage, network, security, cost/billing), across all providers to seamlessly enable business and users with full self service and no roadblocks. Ubiquitous whether in own DC, CoLo, or Public provider. Essentially this is the SDDC dream.


Today's Data Centre

Confusion and concern around loss of control, with one foot in each of the historical/future DC camps (if you're lucky); typically the whole enterprise is in the past, with a small, dynamic R&D-based DevOps team trying to push into the future, but over-worked doing EVERYTHING! Oh, and what about all that sunk cost infrastructure? Aargh!

Bureau téléphonique parisien vers 1900

The question many CIOs are asking is how to utilise existing physical network infrastructure whilst leveraging hybrid cloud and software control of the DC network, not just intra DC but INTER DC as well.

This SDDC challenge inherently lies within the network. Arguably management, control and security are the biggest constraints in delivering the benefits of DevOps: automation, self-service and on-demand. Yet without these sensible constraints, CIOs could easily open up the continuous development/integration floodgates.

The key cloud specific challenges in the Data Centre are:

  • allowing tenants to create their own desired multi-featured and secure networks
  • enabling customers (business users, tenants) access to Enterprise multi-site, dynamic, complex and secure network environments
  • integrating private/public/hybrid cloud with existing enterprise network in secure and compliant manner

Why Hybrid Cloud

Essentially, the future of enterprise is Hybrid Cloud, where enterprise users will leverage a mix of AWS, Azure, OpenStack and other providers like Rackspace, Digital Ocean, etc. It is already evident in enterprises across the USA, Europe and even Australia, with examples such as a major tier 1 banking organisation (name withheld) entrenched in AWS but heavily delving into Azure and OpenStack. The key is that CIOs adopt this hybrid cloud future as a mantra, then implement and develop it as strategic outcomes. The penalties for a single public cloud provider or single vendor strategy are severe: exacerbating IT's reactive response to business demands, leading to further loss of control and security failure of IT.

The promise of the Hybrid Cloud enabled Data Centre is that the Enterprise will benefit from service provider economies of scale and scope, lower costs through improved utilisation, and improved application performance, resiliency and IT responsiveness. Some specific reasons enterprises are adopting, or should adopt, hybrid cloud strategies:

  1. Enterprise want to maintain competitive supplier power in their favour, if one provider fails to deliver (feature, capability, SLA) that offers competitive advantage, then the enterprise can utilise other cloud resources or providers. This gives more choice and range to the development teams.
  2. Simple price elasticity
  3. Audit, security & compliance; certain apps cannot go 100% to public cloud, due to security and audit compliance requirements, therefore must be maintained in a private cloud environment, but can leverage capabilities if public cloud offers required compliance or security components
  4. Intra provider bursting, locality, availability and bandwidth; cloud providers are all still juggling these and will continue to do so as they balance capacity with customer spend. This gives control of the app performance to the enterprise and not the cloud supplier
  5. Cloud Bursting; a key capability of hybrid cloud, enables enterprise customer to manage each cloud workload within the application, so it intelligently bursts the workload out to either the relevant public cloud supplier or private cloud or an appropriate combination according to predefined business rules

Cloud bursting is the future capability that all intelligent cloud vendors are focusing on, enabling deeper capability (across compute, storage, network and control plane) from within the application. This is the future as I see it. The focus and developing capability are evident, with management and policy control over these being key areas of development by vendors.

OpenStack is the enablement platform for Hybrid Cloud, due to its inherent openness and consequent public/private interoperability. It delivers on the key tenets set out by wise DC strategists on Cloud Management. Because it is an open platform AND has strong established vendor support, with vendor involvement in direction and plug-ins, intelligent enterprises recognise this and are heavily exploring and developing their cloud strategy along these lines.

A key network requirement of Hybrid Cloud is that, in order to deliver inter Data Centre and cloud bursting capabilities, Hybrid Cloud by definition interconnects within and between Data Centres. Therefore it is imperative that the standard protocols and capabilities of the Hybrid Cloud enabled Data Centre (VxLAN, OpenFlow) are supported across the Provider (MPLS) and across the WAN or Internet (BGP), through to the other Hybrid Cloud enabled Data Centres.

Business Challenge

© CoolIT Rack DCLC AHx Liquid Cooling Solution

The key business challenge is to allow the enterprise customer to use their sunk cost existing hardware infrastructure, and to provide an open SDN and policy framework (to manage security, users, groups, etc.) that works with the existing customer environment.


The key tenet of SDDC is “abstraction”. Abstraction of existing physical hardware. How is this done?

Compute? Yes, box ticked. Whether you're on VMware, KVM or Citrix, this is pretty much a done deal. Storage? OK, the separation of storage from compute was achieved with storage virtualisation some time ago. Network? That is a little more tricky. Security and control? Even more difficult, and not yet fully realised.

What about the existing network? Sure, you use a single vendor across core, edge and access (don't you?!). So how do you abstract this? Cisco want you to rip it all out and replace it with all-new gleaming Nexus and ACI. I don't think the CFO will sign off on that.

DC to DC Interoperability

How do the Data Centres interoperate with one another to successfully enable Hybrid Cloud? OpenFlow and VxLAN alone will not suffice. What about migrating workloads and redundancy? These are also particularly important when assessing inter-country or regional SDDC strategy. Redundancy, failover, workload mobility and in-country data compliance all become incredibly complex in a hybrid cloud scenario.

Herein lies the issue. How do you effectively plan a DC fabric across regions and globally? Do you double dip (paying for existing infrastructure replacement and future technology) and implement a full ACI or NSX solution into every DC you operate, including backup or redundant DCs? What about CoLos and shared environments where you have no control of the Provider Edge or Telco? How do these interoperate with public cloud providers, and how do you take advantage of key hybrid cloud benefits such as cloud bursting? By leveraging open standards, the enterprise can more easily achieve this. It is technically feasible, though financially impossible, to utilise a single vendor solution across a hybrid cloud SDDC. Do AWS and Azure offer full vSphere, NSX or ACI integration? If they did it certainly would be expensive.

Some key SDDC interoperation tenets:

  1. No matter what your DC fabric, you MUST interoperate with another DC or outside the DC, therefore it must be an IP Fabric; and
  2. Customer Edge and Provider Edge MUST reliably and efficiently manage encapsulated traffic (VxLAN, OpenFlow) over standard WAN Edge to Edge protocols (MPLS, BGP)

VMware NSX requires encrypted tunnels to remote sites, as the Data plane is NOT encrypted or secure. NSX controllers communicate across controllers and soft-routers on their Data plane via SSL-TCP. Managing the SSL certificates adds overhead, and the connection-oriented transport is inefficient with high network overhead. What if you move controllers, or have a DC or WAN link failure? These scenarios and their technical complexities are by nature a core component of your SDDC strategy.

VM Mobility and Multihoming

An in-depth understanding of MP-BGP, EVPN and how these interact on the Layer 2 planes is beyond the scope of this document (and reaching the limits of my expertise); suffice to say that VM Mobility (Live Migration) across the WAN and the consequent DC redundancies are very important to your business.


Layer 3 and IP are arguably the best DC to DC interconnect methodologies; MPLS/VPN is scalable but gains complexity. Still, DC to DC compute live migration (vMotion, or another hypervisor's equivalent) is an issue with traffic tromboning, due to the historical lack of intelligence between the DC interconnect and the DC IP Fabric. VxLAN with VM, MAC and ARP address-learning awareness solves this.

Interoperability, Security and Control

Martin Casado (ex-Nicira, now senior VP of VMware's network and security business) talks about "The Goldilocks Zone", an area in the DC which can:

“simultaneously provide context and isolation for security controls”

Of course, this is VMware steering the conversation towards their desired market position, which is the Context vs. Isolation, hypervisor-centric view of the DC. However, control and security of the SDDC are not so simple that they can be managed and leveraged from the hypervisor control plane alone. Networks and WANs are complex. As reality dictates, the DC is, and always will be, composed of existing hardware and hardware running the DC network. How do you establish context and isolation in this real sense? What of inter DC and Hybrid Cloud scenarios? There will always be an Edge, DC infrastructure and hosts. There will always be hardware and software in the DC.

The key contextual areas that both isolate and manage security and control are the Control and Data planes. Specifically, for enterprise SDDC interoperability and security, the SDDC Control and Data Planes must respectively address:

Control Plane: Open cloud platform and hypervisor, open directory interoperable, policy based – set once & apply many; for successful Hybrid Cloud, ideally implemented across OpenFlow

Data Plane: Reliable, fast, standards based i.e. VxLAN, BGP, MPLS, EVPN protocol support

The Data Plane becomes increasingly complex unless you abstract to Group Policy. It is clear that in an open world (cloud, software, network or whatever), choice and flexibility are key, yet simplicity and centralised management for security, control and audit are increasingly major requirements in modern enterprise technology. Two seemingly opposing philosophical approaches: tight management control versus a DevOps approach, if you will.


When it comes to the Data Centre and moving workloads inter DC, performance is key. Interoperability at wire speed with Telco grade WAN connectivity really is the ultimate.

“… an NSX Edge VM can translate from the VXLAN overlay to physical network, the performance isn’t great. Therefore, VXLAN Termination End Point (VTEP) features are needed in the physical hardware of the switch.”

VMware NSX REQUIRES that you leverage hardware in their SDDC network solution, despite their entire go-to-market highlighting that you can do it all "in the software", distributed across the hypervisors. Hence their recent vendor partnerships. So you need a hardware SDN AND VMware NSX as a solution. Do you really want to rely on two vendors for ONE SDN solution inside your DC?


The advantages of "openness" to the enterprise are vast, in particular where an enterprise has a number of vendor technologies already in the Data Centre. Typically an enterprise has one vendor for compute virtualisation, several vendors for OS platforms, another vendor for storage, yet another for core networking, and probably several for WAN links. Openness allows the enterprise organisation to ensure their SDDC strategy "just works" with this pre-existing patchwork of vendor technology. What are "open" technologies and protocols? OpenFlow, VxLAN, NVGRE, EVPN, MPLS, BGP and OVSDB are all open, or openly accepted and well-used, standards. OpFlex, FabricPath, vPath and NSH are not.

What are "open" vendors or platforms? OpenStack as the cloud platform. Neither Cisco ACI nor VMware NSX as the SDN, as they do not address the need to efficiently work across heterogeneous Providers and DCs. Neither is taking an open approach; they are both attempting to lock the customer into their expensive pricing models.

Network Aisle Front

“Many driver and outcome similarities between OpenStack and SDDC, particularly both abstract underlying infra (hypervisor, storage, network, orchestration, management, etc.) of DC; and release from restrictions of closed, proprietary infra stacks”


OpenFlow

OpenFlow is considered the first and most utilised software-defined networking (SDN) standard.

"OpenFlow is a Software-Defined Networking (SDN) protocol used for southbound communications from an SDN controller to and from a network device."

Open vSwitch

A software based switch and router, as well as a control stack for silicon-based switching. Open vSwitch supports OpenFlow, and OVSDB is a typical controller interconnect.


OpenDaylight

A Linux Foundation project which further develops SDN and promotes open standards for SDN and hardware integration through an open community and open source code.

Open Networking Foundation

An industry member group that promotes and ratifies the OpenFlow protocol standard for SDN.


In an ideal scenario: Red Hat OpenStack, your existing storage vendor PLUS Red Hat Storage (Ceph), and the Nuage Networks SDN solution. Easy, open and fully supported by large vendors with solid experience in their respective fields. These are product solutions at minimum in their second iterative release and, most importantly, in use by other enterprises. All offer drop-in interoperability with most major vendors and leverage open standards. OpenStack now has over 4 years of growth and maturation, and a large vendor ecosystem of over 121 contributing organisations and 2,130 project participants; it offers brilliant interoperability with other vendors across hypervisor, storage, SDN and support. In particular, Nuage Networks offers open SDN capability in the hypervisor, software and silicon, as well as interoperability back into sunk cost existing hardware infrastructure in the Data Centre and WAN. Their "open and unified approach" in combination with OpenStack is a winner in my opinion. All these approaches maximise openness and minimise the number of vendor SLAs needed to successfully deliver to your business.


With flexibility as a consideration, the two main players don’t look so crash hot:

Cisco ACI: Unnecessary proprietary hardware for what should be a software solution.

VMware NSX: Physical servers? Network control? Good luck… get ready to go create “new tenant VLAN on the physical switches and setup a new VNI on the physical VXLAN gateway and map that new VLAN segment to the VNI” each time.

Enterprises ought to seek out vendor technology providers who embrace openness over lock-in. The two SDN vendors who are regularly mentioned in the tech press, VMware and Cisco, both take a strong lock-in approach to their SDDC stacks, even in their OpenStack integration. This goes against the grain of openness and is certainly a far more expensive and difficult scenario for any enterprise who wants to remain open to taking advantage of technology developments whilst embarking on an SDDC implementation.

Posted in Uncategorized

Install guide for QNAP TS-420 NAS with Ubuntu 14.04 64 Bit

I thought I'd put this together when the automatic cloud install on my new QNAP TS-420 did not go according to plan. After hours searching the QNAP website, trying and failing to raise any QNAP technical support, searching Google and reading the product manual back to front… I found either nothing or only conflicting information.

Problem: the automatic cloud install hangs at myQNAPcloud.

The following method has been tested and works (provided from start to finish, just in case the auto cloud install works for you):

1. Install physical HDs in each tray, fully insert into NAS
2. If you have a myQNAPcloud Cloud Key, plug in and turn on the NAS unit, then go to the myQNAPcloud setup page in your web browser
3. Follow the steps to register and setup your device
4. You may well get to a screen where you see spinning wheels indefinitely, with the Status LED flashing green quickly, the LAN LED flashing orange, and all four HD lights steady green. Here your automatic cloud install journey ends (good luck if you get it working though)
5. Now we must do the following:
6. Turn off the NAS unit and remove ALL drive trays.
7. Turn on the NAS unit (with no drive trays inserted). Once the boot sequence completes and you get one short beep, wait TWO to FOUR minutes (yes, be patient and wait)
8. You should now have a fast-flashing green Status LED. Insert the disk trays one at a time from left to right, waiting for each drive LED to go steady green.
9. Download and install the Qfinder Linux app from the QNAP website
10. Untar the package QNAPQfinderLinux-[version].[date].tar.gz as follows:
$ tar -xvf QNAPQfinderLinux-[version].[date].tar.gz
11. Refer to the Qfinder README text file, which instructs to open Terminal and install as follows:
$ sudo apt-get install libjpeg8:i386 lib32stdc++6 libsm6:i386 libgtk2.0-0:i386
12. Run Qfinder as follows:
$ cd Qfinder
$ sh ./Qfinder
13. Your NAS device should show on the screen (with a default name), run the firmware download and setup
14. When prompted, select Quick Configuration (for RAID 5), then enter the required Admin info (desired name, complex password, etc.). You can select Manual if you require other RAID configuration types.
The install takes about 20+ minutes (depending on your HD size) and, providing all proceeds well, you should see a friendly progress bar, the Status LED flashing between red/green, and the drive lights flashing synchronously on/off in green.
15. Ta da! You now have a RAID 5 NAS, which you can manage from your Ubuntu 14.04 system.

Note: I'm experiencing a major issue with Qfinder.release taking almost 100% CPU, which I'll research and advise what the issue is here.

Update: A reboot fixed the 100% CPU utilisation. I noticed that the firmware version was only at a 4.0.7 build from April 2014, despite the latest available being 4.1.0, released June 2014, and the Qfinder setup process stating it checks for and installs the latest firmware version.

QNAP NAS Firmware

A manual install of the latest firmware is required. This can be completed by manually downloading and extracting the latest firmware, then selecting the img file from the NAS web interface under the Control Panel, Firmware Update settings.
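
For example, a sketch of the manual firmware steps, with a hypothetical filename (substitute the actual download for your model and the latest version):

# the firmware download is a zip archive containing a .img file
unzip TS-420_20140612-4.1.0.zip
# then in the NAS web interface: Control Panel, Firmware Update, browse to the extracted .img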

Posted in Uncategorized

Docker & Shipyard Ubuntu 14.04 Trusty Tahr install

Pack, ship and run any application as a lightweight container. Docker is an Open Source container based virtualisation framework which provides features over and above LXC. Docker is also tightly integrated into OpenStack Icehouse, enabling a Docker container to be used in place of a VM hypervisor (or additionally, inside a VM) in Nova, with orchestration via a Heat plug-in.

Shipyard Open Source Docker Management

Shipyard is an Open Source Docker graphical user interface created by Evan Hazlett using Python Django. It offers multi-host support, container metrics and a RESTful API.

The guides at docker.io and shipyard are not yet fully up to date for the Ubuntu 14.04 Trusty Tahr LTS release, so I thought I'd share some insights from my recent install and configuration.

The quickstart at shipyard and the blog at docker.io are my main sources, along with tried and tested troubleshooting.


Essentially, docker is now docker.io in most config files and commands. docker.io is the new package name in Ubuntu 14.04; don't use the old instructions referring to lxc-docker.

docker.io Install

A simple one-liner install now for Docker in Ubuntu 14.04:

sudo apt-get install docker.io


Add the relevant user to the docker group, which should already be created by default:

sudo usermod -a -G docker {{your_user}}

docker.io is also the new configuration file name at /etc/init/docker.io, which uses upstart, so we now need to make configuration changes in the /etc/default/docker.io Upstart configuration file:

# Docker Upstart and SysVinit configuration file

# Customize location of Docker binary (especially for development testing).

# Use DOCKER_OPTS to modify the daemon startup options.
#DOCKER_OPTS="-dns 8.8.8.8 -dns 8.8.4.4" ## note: config file is left verbatim as provided, but use --dns (double dashes) to configure dns successfully

# If you need Docker to use an HTTP proxy, it can also be specified here.
#export http_proxy="http://127.0.0.1:3128/"

# This is also a handy place to tweak where Docker's temporary files go.
#export TMPDIR="/mnt/bigdrive/docker-tmp"

** Note: The original config file example shows the dns option with a single dash "-", however Ubuntu 14.04 needs two dashes "--" to configure dns successfully. Refer to the articles on this by Docker and on Ask Ubuntu.


So to add the correct shipyard configuration, edit the DOCKER_OPTS section of the /etc/default/docker.io file as follows:

DOCKER_OPTS="-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock"

Note as per the source shipyard quickstart guide: "…this will only bind to the localhost address. If you would like to access the API tcp port externally, replace 127.0.0.1 with 0.0.0.0"



Restart the upstart service to enable the configuration file changes:

sudo service docker.io stop
sudo service docker.io start
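
As a sanity check that the daemon came back up with the new options (docker.io being the Ubuntu 14.04 binary name):

# daemon details; an error here means the daemon is down or the opts are wrong
sudo docker.io info
# list running containers (an empty list is fine at this point)
sudo docker.io ps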

Shipyard Deploy

Then deploy shipyard on the docker host as follows (refer to the source shipyard quickstart and Shipyard Deploy for what is happening here):

sudo docker.io run -i -t -v /var/run/docker.sock:/docker.sock shipyard/deploy setup

Docker will first look for the images locally; if none are found, it pulls down the latest shipyard repo builds and configs. In about 5 minutes (once the downloads and builds are complete) you can then run the shipyard UI.
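
Once the setup container completes, the UI listens on port 8000 of the Docker host. If memory serves, the shipyard quickstart of the time shipped default login credentials of admin/shipyard; verify that against the quickstart, and change the password straight away as described below:

# browse to the shipyard UI (substitute your Docker host's address if remote)
xdg-open http://localhost:8000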

By William Cho [CC BY-SA 2.0], via Wikimedia Commons


Shipyard as yet has no Docker host authentication security, so you must maintain a secure environment through a crafty combination of Docker host IP firewall (or OpenStack Security Group Rules) and user/group access control. To enable shipyard UI access, ensure you configure shipyard host security and enable TCP on port 8000 in your firewall (e.g. UFW) and in your OpenStack Security Group Rules (if you are running the Docker host in an OpenStack tenant).
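
With UFW, for example, the rule is a one-liner; better still, restrict the source to a trusted admin subnet (the 10.0.0.0/24 below is just an example):

sudo ufw allow 8000/tcp
# or, tighter: only a trusted subnet may reach the shipyard UI
sudo ufw allow from 10.0.0.0/24 to any port 8000 proto tcp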

Change your admin password to something complex after logging in: from Site Administration (accessed from the top-right "administration" drop-down menu), select "Change" next to Users, four items down the Site Administration page. Click on your user "admin", then select the "this form" hyperlink in the Password section. Be aware this is plain HTTP, so ensure you are making password changes only via a secure VPN connection.

Go forth and add other docker hosts to shipyard using the guide for configuring the Shipyard Agent. Docker can also be configured for IPv6, as opposed to the default IPv4.
Docker and Shipyard are both provided under Apache License, Version 2.0

Posted in Uncategorized

OpenStack Icehouse

RhB Ge 4-4 II Wiesener Viadukt

OpenStack Icehouse Release Due 17th April 2014

Here are the Icehouse Release Notes with Key New Features. Following is a summary of release highlights:

Neutron Networking: VPNaaS; FWaaS; NSX LBaaS; Cisco VPNaaS; OpenDaylight; One Convergence NVSD; NSX provided DHCP & metadata; NSX remote GW; Hyper-V Agent Security Groups; a multitude of vendor ML2 plugins: Ryu, Brocade, Juniper, NSX, IBM, Bigswitch & Mellanox

Horizon Admin/User Dashboard: New Dashboard (Wireframe demo only) with dropdown menus, scope selection and breadcrumbs; nova live-migration; Plugin Architecture for python packages added dynamically; AngularJS module; create & update Heat stacks; daily usage across metrics; Cinder grow/expand/resize up volume; VPNaaS options including VPN Service, IKE Policy, IPSec Policy, IPSec Site Connection

Heat Orchestration: Docker resources; systemd interaction with heat services

Nova Compute: VMware vSphere API diagnostic details for VirtualMachine; XenServer HVM for linux guest; Docker container name

Cinder Block Storage: Full Backup/Recovery API; plugins for EMC, Lefthand, IBM

Ceilometer Telemetry: network information collection from SDN; collect VMware vCenter Server data

As you can see, there is plenty of integration with VMware and other vendor products, proving that this is Open Source on a whole new level. It will also make your transformation and integration journey all the easier!

Posted in OpenStack

Enterprise Cloud

At a global scale, there are two major vendors that are well positioned in Enterprise I.T. within the virtualisation and OpenStack IaaS market. Of course HP and IBM are big players, but I believe their strategies are somewhat splintered and certainly not "full stack", as both VMware's and Red Hat's are. Comparing these two vendors only, there are vast differences, in particular their opposing OpenStack technology approaches: VMware's proprietary "open" integration and Red Hat's pure Open Hybrid Cloud. These differences are even highlighted by VMware CEO Pat Gelsinger, who recently stated "On-premise cloud is a $2 trillion market … 92 percent of cloud is on-premise". Gelsinger went on to state that, according to Gartner, this market would drop by 15% over the next 5 years. It is obvious that VMware have the lion's share of this "on-premise cloud" market, so they're clearly valiantly defending a potential market loss of about $300 billion. The new market entrants on the other side, representing the "open" approach, are quickly building momentum with an attractive proposition.

BalticServers data center

SDN & Automation Focus

VMware is focusing on a two-pronged technology approach with NSX (Nicira) and vSphere integration into OpenStack. The amount of development contribution by VMware towards the Icehouse release, particularly in Neutron, is evidence enough. However, what VMware and their partner network lack is breadth and depth of OpenStack integration, particularly in consulting and support. VMware remain a proprietary and costly cloud solution. Although they are a strong contributor to OpenStack, they are not offering an open cloud solution. Of course, VMware are about to release their vSAN solution, which takes an even more proprietary approach.

Tsunami by hokusai 19th century

Open & Full-Stack

In contrast, as both a strong Open Source and OpenStack contributor, Red Hat offers a full stack integrated proposition with key products: Red Hat Enterprise Linux (RHEL) OpenStack, CloudForms, Enterprise Virtualization, Storage and associated Services (training, consulting and support). Vital to their Open Hybrid Cloud strategy is a strong association with OpenStack and related OSS projects such as KVM, QEMU, libvirt, oVirt (a potential vSphere alternative) and Gluster (including GlusterFS, bareos, Distributed Volume and HadoopVOL). In my opinion, Red Hat are in an even stronger position than VMware, considering their Open Source stewardship and relationship with key OSS projects. This history and strength of cultural relationship with OpenStack project team developers will go a long way.

The key to an enterprise organisation's successful private and hybrid cloud strategy is in utilising an Open Source solution. Leveraging Open Source in the enterprise organisation will sustain choice, mitigate vendor lock-in and minimise the risk of high licensing and maintenance costs. This "open" choice will also critically enable the integration of developers and operations within an enterprise, to leverage automation in a truly collaborative way. It is this vital "Lean" and innovative concept that many enterprise organisations seek, and will potentially sacrifice if they maintain the status quo.


Enterprise Strategy

The majority of Enterprise organisations have to date invested in virtualisation (typically VMware), which for the enterprise has reached the end of its useful life with respect to realising key business benefits. We were typically having these virtualisation business benefits conversations over 9 years ago. Virtualisation alone will no longer offer any business a competitive advantage; all your competitors have already implemented at least the same, or gone deeper into their environments than you. Cloud IaaS is well into the maturity level where it is already delivering strong competitive advantage, particularly where fully integrated into product development and core business innovation. A recent study shows that "56 percent of enterprises consider cloud service adoption to be a strategic business differentiator", which shows that 44% are blind to the reality that it IS a business differentiator. Over the next 3 years, "enterprise spending on the cloud will reach $235.1 billion". That is solid investment in business innovation by all your competitors.


Napkin Sketch

Back of Napkin Assessment

There have been some shady TCO numbers quoted by a market analyst bouncing around the Twitternet recently; however, I'm not about to complete a head-to-head TCO analysis. I will offer a "Back of Napkin Assessment" that should highlight some salient points around costs and capabilities when it comes to cloud.

Key questions to ask:

  • What overall cost (from start of virtualisation to now)?
  • Benefits realised, opportunity/cost benefit?
  • Virtualisation business benefits now typically completed, therefore little more to gain on pure virtualisation.
  • What cost over next 5 years to maintain status quo?
  • How about cost to expand to full cloud (private/hybrid) leveraging existing vendor?
  • Not just orchestration or data centre management. Must examine the total ongoing cost.

Alternative A: maintain status quo, continue sinking more ongoing and additional cost into existing vendor = increasing licence and maintenance costs & only solves automation = locked in, expensive = choice only to be in-house virtualised, public limited, expensive and not dynamic/scalable

Alternative B: move as much as possible to public = locked in, loss control, expensive consultants, high risk

Alternative C: open hybrid cloud = leverage sunk costs (existing environment where sensible), leverage enterprise grade storage, compute, hypervisor with SLAs, no lock in, lower costs, ability to choose to be in-house virtualised, private cloud/co-lo, hybrid, public = good choice

Approximate Cost: (per socket pair) – Red Hat Enterprise Linux/OpenStack/Virtualization/CloudForms = $5K vs VMware vSphere, vCenter (no OpenStack SLA) = $10K

Recommended Enterprise approach: greenfield in R&D or other development area, particularly good at recognising benefits and cost savings. Once proven, enable strategic implementation to other significant areas of benefit.


Choice & Future Cost

The primary factor is choice and ongoing cost. With an open hybrid cloud, such as OpenStack you have a valuable commodity called choice. Choice of hypervisor, storage back-end, orchestration, or DevOps tools. Choice of vendor too, even the big players such as NetApp and Cisco are all over it. Service Providers are already investing. Choice is what IT decision makers want and an open cloud platform is central to this.

A pure VMware cloud solution is arguably higher in ongoing cost and limiting of future choice. I'm quite excited and look forward to even further integration into OpenStack of the CloudForms, oVirt and Gluster projects, as well as further development over and above the already strong contribution by Red Hat to the Heat, Marconi and Sahara OpenStack projects. All of this will open up vast opportunity for dynamically and elastically scalable, orchestrated, clustered databases for Big Data workloads.

The key here is choice: once you are operating a stable, enterprise-grade, fully supported open cloud, you then have the choice to use any other vendor or open source options in your cloud more easily. Of course, there are also an ever increasing number of providers, both global and local, offering public and managed private/hybrid OpenStack based cloud as a product. Red Hat are further integrating with other Public Cloud providers. All of this only increases your choice.

Posted in Uncategorized

Murky world of Privacy and Data Sovereignty

With the release of the NSW Government's Cloud Services Policy & Guidelines paper today, a number of issues are evident, particularly around Data Sovereignty and a total lack of any semblance of agency procurement or supplier guidance. I understand this is not intended to be an IT strategy document; however, it is meant to be a policy paper and procurement guide for both agencies and suppliers. Hopefully the following perspective will explain why it fails on both counts.

Incidentally, it amazes me with the release of this paper, that the head of the Australian Information Industry Association (AIIA) was quoted welcoming the paper as an:

” ‘as a Service’ Module to support procurement of cloud services”.

This sad 19 page government policy paper is described as an 'as a Service' Module? Does it plug directly into the Amazon and OpenStack APIs? Do I get a large serve of DevOps with that, to go?

Bill Lumbergh: Yeah, if you could just go ahead and put that in the cloud

The first issue is that the NSW Government's paper briefly mentions the basic NIST definitions of the Cloud Service Models: SaaS, PaaS and IaaS; however, it does not mention any specifics about how these could be leveraged, or what data security and related legal aspects need to be considered around them. The potential for better data security improves as you move along the Cloud Service Models from SaaS to PaaS to IaaS.


There is absolutely minimal reference in the paper to important cloud components such as Deployment Models, and no mention whatsoever of Essential Characteristics. Where is the assessment and statement on Public, Private or Hybrid, relating to underlying IT strategy, business drivers, technology strategy, risk appetite, legal and security requirements? Surely an IT policy paper should be based on an overarching IT strategy? Can I at least get some due diligence? It isn't as if data sovereignty in the cloud and data privacy are new; these concerns have been around for a while.

Even key related government papers, such as the Cloud Security documents from the Defence Signals Directorate (DSD), the Australian Federal Government Cloud Policy Guides and ACMA Chairman Chris Chapman, mention data security issues that are highly important in any cloud implementation. Why then does the NSW Government paper overlook these and other basic essential NIST defined cloud components? The only references within the Cloud Services and Policy Guidelines document are to outdated (in perspective and approach) IT documents originating from the NSW Government itself. Shouldn't a government policy document reference the basic Cloud Security requirements recommended by DSD, the Federal Government and industry bodies?


The second issue is that there are many excellent resources available that have not been referenced or utilised, such as the recently released "Data Sovereignty and the Cloud" paper from the University of NSW, which clearly outlines some major components that must be assessed in relation to Data Sovereignty and cloud. All of these are totally missing from the NSW Government's paper. Data Sovereignty, security and privacy of data are serious IT issues that have major impact on the privacy and rights of citizens. A basic requirement is identified in the UNSW Data Sovereignty document as a "clearly articulated policy for cloud data location or jurisdiction". Fail.

Security Camera Install Corner Of Building

"Security Camera Install Corner Of Building" by num_skyman

Third on the complaint list is the "legalese" and obscurity of the NSW Government's position. The paper is more focused on the use of legal language than on actually taking a clear position on cloud and the procurement model as such. As well as the lack of clarity, it is evident those involved in creating this document don't quite "get it" with the big picture of cloud technology. I know this first hand, from someone who was involved in the process and who actually knows quite a lot about cloud. That person's comments expressed frustration at many decisions made without clear understanding, and an often ill-informed perspective on cloud technology from government decision makers.


I'm sure there were many experts consulted, committees sat, solicitors paid and ministers stamped to get the document released. But I really don't think those responsible for this paper get the "big picture". This is confirmed by the confusion, during the preparation of this paper, over what is and is not private cloud. For example, in the following statements reported earlier in the year from the Executive Director of Strategic Policy at DFS, William Murphy:

“The cloud policy …ultimate cloud goal, which is to have agency ICT environments fully migrated to a private Government cloud by the end of 2015.”

Ironically the same article lists the five NSW Government cloud initiatives, which are nearly all multi-tenant, mostly shared PaaS or SaaS – certainly not private cloud:

  1. Messaging-as-a-service and desktop-as-a-service proof of concept trials to be run by ServiceFirst;
  2. Department-wide ERP consolidation into the cloud at the Department of Trade and Investment, Regional Infrastructure and Services;
  3. Email-as-a-Service implementation at NSW Fire and Rescue;
  4. Multi-tenanted email-as-a-service at NSW Businesslink; and
  5. Infrastructure-as-a-service at NSW WorkCover.

To clarify my view purely from an IaaS cloud perspective, Data Sovereignty relating to the Government’s paper and Private, Public or Hybrid cloud:

  • Private – you know where your data is, providing you don’t outsource storage
  • Public – you have no idea; even when selecting a so-called in-country Public Cloud, your data can get cached and stored outside of that country (such as with a CDN), so you have little control of data sovereignty
  • Hybrid – you can manage according to data sovereignty requirements and concerns, providing you manage data sensitivity through meta-tagging and maintain control of data storage

The NSW Government paper makes no reference whatsoever to any of the above situations, nor any explicit requirement for Data Sovereignty. There are some vague references to compliance with data legislation, but to "comply with regulations" in general means little in reality. The paper should express a clear and concise position and requirements relating to how data is managed in the cloud environment, as well as the specific responsibilities of the government and suppliers. In fact, the self-reported requirements brief for the policy paper, taken from the NSW Government ICT Board meeting notes, was for:

“The Policy and Guidelines provide a clear policy statement about NSW Government use of cloud solutions and taking advantage of the flexibility and agility that they provide,”

Clearly missed that goal then.


Specifically, the NSW Government paper makes vague allocations of responsible parties to:

  1. “Government Agencies”, and
  2. Supplier

Where then is the guidance, and where is responsibility realistically going to be held (assuming a standard government tender process)? With the supplier? Those with tender or bid experience know that the less specific the Tenderer is about the requirements, the more ability the potential Supplier has to dictate outcomes. Conversely, according to the wording in the NSW Government paper, it is understood that the NSW Government has pushed all data sovereignty requirements, compliance, auditing and management down to each agency or supplier, not centrally controlled or dictated from a central IT body. Cloud is a new way of using, procuring, providing and managing IT: from decision making through to managing, auditing and purchasing. Old models and methods usually will not work (or will be a huge waste of resources). This has not been considered at all, and it should have been settled prior to publishing a procurement policy paper.

The laws relating to technology and privacy are rapidly changing, with conflicting legislation between nation-states, and are even circumvented at the behest of government agencies across borders under the premise of "freedom". It is nearly impossible for any supplier or individual agency to keep abreast of multiple and conflicting legislation across multiple countries. But this is effectively what the NSW Government paper is asking of them.

In a world of conflicting regulations across the globe, a new frontier of information and power relationships, and the degradation of traditional nation-state power: that which controls the information has the power. Add to the mix a sprinkling of NSA/PRISM/WikiLeaks espionage, Syrian and Chinese targeted hacker warfare (cyberwarfare) and Big Data, and you have a major issue. It is not just about the data that governments collect, no matter what your perspective on that issue is. It is whether they are responsible and knowledgeable enough to maintain the security of that data and ensure it doesn't fall into the hands of some other entity that would misuse that information.


National Security Agency

"Capitol Building" by Damian Brandon

How is each agency, or even each supplier as the NSW Government paper insinuates, to effectively provide appropriate resources to successfully deliver the specified data sovereignty requirements, compliance, auditing and management? Successful data management and compliance is a hefty, highly skilled and labour-intensive role, let alone auditing and managing during and after the fact. How can anyone, including our government and legal system, ensure compliance with Privacy Legislation regarding our data that is held and managed by our government institutions in this situation?

What the NSW Government really should be doing is dictating that all sensitive data be contained within Australian borders, thereby complying with Australian Privacy Legislation. I actually think that the EU got it right when they enacted legislation that essentially ensures data sovereignty within the borders of each EU nation. The EU have taken an arguably more sensible and liberal-minded:

“…citizen-centric approach to data protection and privacy”

It is my opinion that if data is sensitive and needs to comply with the privacy laws of a particular country, then that data must remain in the country from where those privacy laws originate. This is the only way to ensure the level of control and auditing required to comply with the law. Of course the opposite is arguable: that these European in-country data sovereignty laws restrict the cloud market and are restrictive to business. The knock-on effect of this outcome would be that large global corporations are slightly disadvantaged and local niche cloud operators are slightly advantaged; additionally, commoditisation of cloud stifles innovation and competition. Supporting the local economy and innovation rather than large global corporations: there's a novel idea!

Of course, there is always the possibility to separate confidential private data that must comply with privacy regulations from other data that has no legal privacy requirement. That latter data can go wherever it likes. You can always just download this fantastic new app. Problem solved.

Disclaimer: I am not a solicitor and the opinions expressed here are my own. I am an independent IT professional and have written, worked with and negotiated on many large IT&T contracts. Comments, debate and fruitful discussion are welcome.
Posted in Uncategorized