The Road to Network Configuration Automation

The rapid growth of the telecom industry, not only in size and value but also in the types and volume of traffic carried and the complexity of the systems, has made network configuration ever more demanding for operators, because it holds the key to network quality and to how the network is governed and operated. This becomes even more critical as 5G increasingly becomes a reality. Much has been promised about 5G’s lead use cases, such as enhanced mobile broadband, massive IoT and mission-critical communication, but these also place unprecedented demands on network properties, including heavy reliance on edge computing, extremely high traffic volumes and extremely low latency. Furthermore, SDN/NFV inherently require a high degree of configuration automation and management, and demand the same from the underlying physical networks. This calls for consolidating physical network configuration so that it can support NFV orchestration and its automation requirements.

Long gone are the days when a group of engineers sat around a table, pen in hand, agreed on the configuration parameters and then wrote a few command lines. That approach worked in a static network, where configuration was done once and the network was then left to run by those rules for a long time. With the fast-moving, dynamic networks of today, and especially of tomorrow, the key shortcomings of manual configuration management are evidently unable to cope with network complexity:

• It is time-consuming: dynamic networks need issues addressed in minutes, not hours;
• It is prone to errors: the more complex the networks get, the more likely some aspects are overlooked, e.g. a system vulnerability that may be open to attack, or the suboptimal performance of some network components;
• It is a piecemeal approach, attempting to configure the network bit by bit, while dynamic networks need an end-to-end solution spanning device management, data collection and processing, trouble detection and problem solving;
• It cannot scale or be customised, as every new network setup, e.g. integrating a new vendor’s equipment, needs a new set of command scripts.

In contrast, automated configuration management solutions can:

• Save time and cost: this is especially true when configuring and managing mashed-up, cross-generation networks supplied by multiple vendors. That does not mean companies have nothing to invest, however: long-term cost savings, especially in OPEX, can only be realised if service providers are prepared to invest in leading automated configuration systems;
• Minimise human error: this will become even more apparent in the 5G era, when network slicing adds complexity and when, for most operators, 5G will coexist with legacy networks for many years;
• Detect and solve problems more quickly: an audit function can proactively detect abnormal network behaviour in near real time, raise an alarm and respond in the time it takes the system to run the automated commands;
• Customise easily: by definition, automated configuration is done in software and can be tailored to the use case and network context.

However, to implement automated configuration properly, a step-by-step approach needs to be adopted. To start with, the automation set-up needs an automated system backup mechanism in place, whereby provisions are defined and maintained to regularly back up all the critical parameters of system elements and to perform version control. The set-up should also be able to alert the management system when a backup fails to initiate or complete.
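
To make that first step concrete, the sketch below shows what a minimal backup routine could look like. It is illustrative only: the `fetch_running_config` and `alert_management_system` hooks, the backup path and the use of timestamped snapshots are assumptions for the example, not features of any particular product.

```python
# Minimal sketch of an automated, versioned backup job. The fetch/alert hooks
# below are hypothetical placeholders, not the API of a real management system.
import datetime
import pathlib

BACKUP_ROOT = pathlib.Path("/var/backups/network")

def fetch_running_config(element: str) -> str:
    """Placeholder: pull the current configuration from a network element."""
    raise NotImplementedError("replace with the element's real export interface")

def alert_management_system(message: str) -> None:
    """Placeholder: raise an alarm in the management system."""
    print(f"ALERT: {message}")

def backup_all(elements: list[str]) -> None:
    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    for element in elements:
        try:
            config = fetch_running_config(element)
            target = BACKUP_ROOT / element / f"{stamp}.cfg"
            target.parent.mkdir(parents=True, exist_ok=True)
            target.write_text(config)
            # Timestamped snapshots give simple version control; in practice
            # each snapshot would also be committed to a VCS for diffing.
        except Exception as exc:
            alert_management_system(f"backup failed for {element}: {exc}")
```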

The next step in implementing automated configuration is to identify network parameters and analyse how they affect security, performance, availability and other network service factors. All these parameters and their interactions with the network components should be documented as input to the configuration solution.
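
Purely as an illustration of what that documentation could look like in machine-readable form (the parameter names and impact categories below are invented for the example):

```python
# Hypothetical record of configuration parameters and the service factors
# they influence, intended as structured input to a configuration solution.
NETWORK_PARAMETERS = {
    "ntp_servers": {
        "affects": ["availability", "security"],
        "notes": "Clock drift breaks log correlation and certificate checks.",
    },
    "qos_profile": {
        "affects": ["performance"],
        "notes": "Determines per-traffic-class scheduling and drop behaviour.",
    },
    "snmp_community": {
        "affects": ["security"],
        "notes": "Weak community strings expose elements to reconnaissance.",
    },
}
```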

The most critical step in implementing network configuration automation is to automate network auditing (a minimal sketch follows the list below). The auditing system should be capable of:

• Regularly monitoring network elements;
• Estimating and warning about the impact of configuration changes on service parameters;
• Grouping audit rules into categories to cater for growing demands;
• Taking remedial action when a breach occurs, including rolling back to the last secure configuration;
• Generating detailed breach reports.
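
A heavily simplified audit loop along those lines might look like the sketch below; the rule structure, the polling call and the rollback hook are assumptions made for illustration, not a description of any real auditing product.

```python
# Minimal sketch of an automated audit cycle: poll elements, evaluate
# categorised rules, take remedial action on a breach and report.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AuditRule:
    name: str
    category: str                      # e.g. "security", "performance"
    check: Callable[[dict], bool]      # returns True when compliant

def poll_element(element: str) -> dict:
    """Placeholder: collect the element's current state and configuration."""
    return {}

def rollback_to_last_secure(element: str) -> None:
    """Placeholder: restore the last known-secure configuration snapshot."""

def run_audit(elements: list[str], rules: list[AuditRule]) -> list[dict]:
    breaches = []
    for element in elements:
        state = poll_element(element)
        for rule in rules:
            if not rule.check(state):
                breaches.append({"element": element,
                                 "rule": rule.name,
                                 "category": rule.category})
                rollback_to_last_secure(element)   # remedial action
    return breaches                                # feeds the breach report
```

In a real deployment the rules would be grouped and scheduled by category, and the returned breach records would feed both the alarm pipeline and the detailed breach reports.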

To complete the cycle of network configuration automation, the management system should also be capable of pushing automated configuration out to all the devices in the network. This is particularly meaningful when large numbers of devices deployed in a network all need to meet the same security and functional requirements, for example in professional mobile services or industrial IoT networks. Again, roll-back capability is critical so that, if configuration does not complete on certain devices, they can be automatically rolled back to the previous working version instead of being left in limbo.
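
As a rough sketch of that push-and-rollback flow (the `apply_config` and `restore_previous` hooks are assumed for the example, not part of any specific management system):

```python
# Minimal sketch of pushing one configuration to a fleet of devices and
# automatically rolling back any device where the push does not complete.
def apply_config(device: str, config: str) -> bool:
    """Placeholder: push the configuration and return True on success."""
    return True

def restore_previous(device: str) -> None:
    """Placeholder: roll the device back to its previous working version."""

def push_to_fleet(devices: list[str], config: str) -> dict[str, str]:
    results = {}
    for device in devices:
        try:
            ok = apply_config(device, config)
        except Exception:
            ok = False
        if not ok:
            restore_previous(device)   # never leave a device in limbo
        results[device] = "applied" if ok else "rolled back"
    return results
```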

The industry readiness of automated network configuration has gone well beyond blueprints on paper, and it is encouraging to see strong interest from both the supply and demand sides. On the supply side, leading technology vendors already have commercial solutions on offer, for instance the HOBS Connected Devices Management solution from TCS. The solution covers configuration management for telco as well as enterprise networks, and it has shown considerable network quality and performance improvements within months of deployment in leading telco networks. On the demand side, more and more service providers are embracing automated configuration management; RFIs and RFQs, for example, increasingly include automated configuration in their requirements. The growing enthusiasm is no surprise, though, as we have observed that automation enables faster deployment of new services, and therefore faster time to market and faster return on investment.

The US digital divide – does anyone have a clue what’s going on?

Depending on who you listen to, the severity of the digital divide varies greatly. But with so many different opinions, how do you actually know what is going on? And if you don’t have a clue, how can you possibly solve the problem?

This topic is one which carries a particularly heavy amount of political charge, for good reason might we add, and is not limited to the US. Digital inclusion is a buzzword and objective associated with almost every nation due to the increasingly complex and embedded role digital is having in our lives. Every society should be considering strategies to ensure everyone is taken forward into the digital utopia, but the success of such initiatives is questionable.

Here we are going to have a look at the US market, but not to question how successful the political administration and telcos have been at closing the gap; rather, to ask whether they have the right foundations in the first place. To tackle a problem you have to actually know what it is, and this is where we feel the industry is failing right now.

First of all, let’s start with the obvious issue. The telcos clearly favour the denser urban environments due to the economics of connectivity; providing customers with internet access is an expensive job at the outset. Not only do you have to buy the materials and the equipment, you have to process planning permission, deal with lawyers and do the dirty job of civil engineering. You also have to have the confidence that customers will buy services from you. When the residential population of a region is sparse, it can be difficult to make the equation add up.

This is the issue in the US, and perhaps why the digital divide is so much bigger than somewhere like the UK. The land mass is substantially bigger, there are a huge number of isolated communities and connectivity tariffs are much more expensive. The problem has been compounded every time connectivity infrastructure improves, creating today’s problem of a digital divide.

But here lies the issue: how do you solve a problem when you have no idea what its extent actually is?

An excellent way to illustrate this is with a road-trip. You know the final destination, as does everyone trying to conquer the digital divide, but if you don’t know the starting point how can you possibly plan the route? You don’t know what obstacles you might encounter on the way to Eden, or even how much money you will need for fuel (investment), how many packets of crisps you’ll need (raw materials such as fibre) or how many friends you’ll need to share time at the wheel (workforce).

The industry is trying to solve a problem it doesn’t actually understand.

The FCC doesn’t seem to be helping matters. During Tom Wheeler’s time in charge of the agency, the minimum requirement for universal broadband speeds was tabled at 25 Mbps, though this was then dropped to 10 Mbps by today’s Chairman, Ajit Pai. Rumour has it these requirements will once again be raised to 25 Mbps.

Not only does this distort the image of how many people have fallen into the digital divide, it messes around with the CAPEX and OPEX plans of the telcos. With higher requirements, more upgrades will be needed, or perhaps it would require a greenfield project. Once you drop the speeds, regions will once again be ignored because they have been deemed served. If you increase these speeds, will the telcos find a loophole to ignore them, or might they unintentionally slip through the net?

Under the 25 Mbps requirement it has been suggested that 24 million US customers, just over 7% of the population, fall into the digital divide, though this is an estimate. And of course, this 24 million figure is only meaningful if you judge the digitally served customers to be those who can theoretically access these products.
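
As a rough sanity check of that percentage (taking a US population of roughly 327 million, which is our assumption rather than a figure from the FCC):

```python
# Back-of-the-envelope check of the "just over 7%" figure.
us_population = 327_000_000   # assumed, approximate
underserved = 24_000_000
print(f"{underserved / us_population:.1%}")   # -> 7.3%
```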

A couple of weeks ago, Microsoft released research which suggested the digital divide could be as wide as 150 million people. We suspect Microsoft is massaging the figures, but there will certainly be a difference because of the way the digital divide has been measured.

In the research, Microsoft measured internet usage across the US, including those who have broadband but are not able to surf the web at acceptable speeds. Microsoft counts as part of the digital divide those who are under-served or have no internet at all, whereas the FCC seems to take the approach of theoretical accessibility. There might be numerous reasons why people fall into the digital divide but are not counted by the FCC, the price of broadband for example, and this variance shows the issue.

Another excellent example is Ookla’s speed test data for Q2-Q3, released this week. The Ookla data suggests a 35.8% increase in mean download speed during the last year, ranking the US 7th worldwide for broadband download speeds. According to this data, the average download speed across the US for Q2-Q3 was 96.25 Mbps. This research would suggest everything is rosy in the US and there is no digital divide at all.

As you can see, there is no consolidated approach to assessing the digital divide. Before we know it, campaigning for the next Presidential Election will begin and the digital divide will become another political tool. Republicans will massage the figures to make it seem like the four-year period has been a successful one, while Democrats will paint a post-apocalyptic picture.

And of course, it is not just the politicians who will play these political games. Light Reading’s Carol Wilson pointed out that Microsoft has a commercial stake in getting more bandwidth to more people: the more people who can access its cloud apps, the more money it makes. Should we trust this firm to be objective in contributing to the digital divide debate? Even if the divide is narrowing, Microsoft will want to paint a gloomy picture to encourage more investment, as this would boost its own commercial prospects.

The issue at the heart of the digital divide is investment and infrastructure. The telcos need to be incentivised to put networks in place, irrespective of the commercial rewards from the customer. Seeing as this bridge is being built at a snail’s pace, you would have to assume the current structure and depth of federal subsidies is simply not good enough.

The final complication to point out is the future. Ovum’s Kristin Paulin pointed out that those counted in the digital divide are only those not passed by fixed broadband, without taking into account that almost every US citizen has access to one of the four LTE networks. Fixed Wireless Access will certainly play a role in the future of broadband, but whether it is enough to satisfy the increasingly intensive data diets of users is unknown. 5G will certainly assist, but you have to wonder how long it will take to get 5G to the regions suffering in the divide today.

Paulin points to the affordability question as well. With the FCC only counting those US citizens who cannot access the internet as part of the digital divide, who knows how many citizens simply cannot afford broadband. A New York Times article from 2016 suggested the average broadband tariff was $55 a month, meaning 25% of the city’s residents, and 50% of those earning under $20,000, would not be able to afford broadband. The Lifeline broadband initiative is supposed to help here, but Paulin politely noted it is suffering some hiccups right now.

If citizens cannot afford broadband, is this even a solution? It’s like trying to sell a starving man, with $10 in his wallet, a sandwich for $25. What’s the point?

Mobile broadband might well be the answer; Nokia certainly believes in a fibre network with wireless wings, though progress is slow here. Congestion is increasingly becoming a problem, while video, multi-screen and IoT trends will only make matters more complicated.

As it stands, the digital divide is a political ping-pong ball being battered as it ducks and dives all over the landscape. But the US technology industry needs to ask itself a very honest question: how big is the digital divide? Right now, we’re none the wiser, and it will never be narrowed without understanding the problem in the first place.

Ursula Burns officially made Veon CEO, at last

Eight months after losing its last CEO, telecoms group Veon has decided to stick with Chairman Ursula Burns in the dual role.

When Jean-Yves Charlier suddenly had his security pass revoked back in March, there was a curious silence on the matter of his replacement. That silence continued for so long that, distracted by the passing of the seasons, everyone forgot Veon didn’t officially have a CEO.

Today, what vestigial speculation there may have been was finally put to rest when Veon anointed Ursula Burns as CEO. To call this a bolt from the blue would be an exaggeration as Burns was already Chairman and had been covering the CEO role since Charlier cleared off. But sometimes these things need to be rubber-stamped.

“I am honoured to be appointed Chairman and CEO of Veon,” said Burns. “The company operates in a diverse group of markets, with growing populations and rapidly increasing smartphone ownership. This clearly presents a host of growth opportunities for Veon as we seek to build on the positive momentum that we are seeing across the business. I look forward to continuing to lead Veon towards more success and increased shareholder value.”

“The Board has been impressed with Ursula’s performance and leadership of the company,” said Veon board Director Julian Horn-Smith. “The management team are clearly working well together and focused on delivering against strategic priorities. Ursula has led Veon through a major transaction in the sale of its Italy joint venture for $2.9 billion and overseen a period of solid quarterly operational performance. We are confident that with her as Chairman and CEO there will be further improvements across the business.”

There you have it. To be fair, actually getting someone to do the job for eight months is a fairly rigorous interview process, so Horn-Smith and his fellow board members can feel pretty confident of having made as informed a decision as possible.

Fiber to the X Fundamentals

A complete overview of principles, technologies, architectures and business models for future networks

At increasing speed, we are evolving into a global digital society. This is profoundly transforming the way we live, work, learn and thrive. Fiber to the X (FTTX) will be required to offer the bandwidth and low latency needed for gaming, the internet, IoT, smart cities, driverless cars and more.

In this eBook you will learn about:

  • The gigabit society: history and future of fiber to the X
  • Science fundamentals made simple: light, multiplexing, connectors, adding capacity
  • Network architectures: options, benefits and trade-off considerations
  • How to make an FTTX business plan work
  • Real-world case studies with global and regional insights

UK gov approves £2.4m of surveillance gear to Saudi Arabia

The UK government has approved the sale of telecoms surveillance equipment to Saudi Arabia despite its alleged state murder of journalist Jamal Khashoggi.

Not content with continuing to sell arms to a state with a – we’ll say, “problematic” – human rights record, British ministers granted five licenses to sell ‘telecommunications interception equipment’ to the kingdom.

Political news website Politics Home uncovered the deals following Freedom of Information requests.

The three deals, worth £2.4 million, were signed off by the Department for International Trade (DIT). The department is led by Liam Fox, a former Defence Secretary.

A DIT spokesman told Politics Home:

"Risks around human rights abuses are a key part of our licensing assessment and the government will not license the export of items where to do so would be inconsistent with any provision of the Consolidated EU and National Arms Export Licensing Criteria.

All export license applications are considered on a case-by-case basis against the Consolidated Criteria, based on the most up-to-date information and analysis available, including reports from Non-Government Organisations and our overseas networks."

Saudi Arabia is one of Britain’s top destinations for the export of British-made military equipment, including weapons and fighter jets. The deals have often come under scrutiny, but Western nations continue to maintain that the kingdom is an ally.

With the murder of dissident journalist Jamal Khashoggi inside the Saudi consulate in Istanbul alleged to have been ordered by the crown prince himself, any equipment which could be used to discover and silence critics should be of great concern.


Aussie watchdog sniffs around TPG and Vodafone merger

The Australian Competition and Consumer Commission (ACCC) is having a closer look at the AUS$15 billion TPG and Vodafone merger, with the signs looking rather ominous for the pair.

After initially being rumoured in August, the merger was confirmed with the pair targeting convergence trends to source fortunes down under. Neither telco has been tearing up trees in the market: TPG’s recent financials revealed 0.5% growth over 2018, while Vodafone posted a first-half net loss of AUS$92.3 million in July, though the merger could be viewed as a means to become more profitable.

However, the ACCC is citing competition concerns in both the mobile and the broadband business units. When the watchdog starts to get twitchy, it doesn’t necessarily bode well.

“Our preliminary view is that TPG is currently on track to become the fourth mobile network operator in Australia, and as such it’s likely to be an aggressive competitor,” ACCC Chair Rod Sims said.

“Although Vodafone is currently a relatively minor player in fixed broadband, we consider it may become an increasingly effective competitor because of its high level of brand recognition and existing retail mobile customer base.”

As separate companies, TPG has the broadband heritage with ambitions in the mobile game, while for Vodafone it is the opposite. Any regulator or competition authority which starts to see organic diversification will start to get excited, but this merger would effectively kill off the promise of additional players in the individual connectivity segments, as each side would lean on its new partner’s strengths. The promise of four separate mobile and broadband telcos is disappearing in front of the ACCC’s eyes.

The question which remains is whether Australia needs a fourth player in the mobile and broadband segments for the market to remain competitive. There are of course pros and cons to both sides of the argument, though risk-averse public sector bodies tend to believe that more providers mean a better outcome for the consumer thanks to competition.

This is certainly what appears to be happening in Australia, though this is a country which needs to operate by its own rules.

In markets like the UK, fewer providers might not mean less competition. The land mass which needs to be covered is comparatively small, so it is not out of the question to have genuine national providers offering 90% or greater coverage. This means choice for the consumer, and the providers have to scrap for attention and subscriptions.

However, Australia is massive, incredibly varied and contains some very hostile environments; not exactly the perfect playing field for telco expansion and greenfield investment. The risk of localised monopolies emerging is greater, due to the financial burden of increasing coverage or entering new segments. With this in mind you can see why the ACCC is getting a bit twitchy.

Of course, consolidation means a bigger subscriber base, greater revenues and therefore increased CAPEX budgets. Investors and management teams have more confidence in being able to upsell services to existing customers, therefore the risk in investing in new infrastructure or upgrades is decreased.

It is six of one and half a dozen of the other when you look at it, though that will come as little comfort to the TPG and Vodafone executives now facing the scrutiny of the ACCC.

Qualcomm has one last 2018 5G fest with TIM in Rome

Qualcomm’s events team has had a frantic end to the year, culminating in the demo of a 5G NR video call over millimeter wave in Italy.

The venue was TIM’s freshly unveiled 5G Innovation Hub in Rome. In attendance were Qualcomm and TIM, of course, but also a host of kit and device partners as well as the great and good of Roman public life, including its Mayor Virginia Raggi (pictured). Most of the presentations were in Italian, but they sounded pretty cool, and there were also a bunch of demos from the various partners.

The highlight of the day was what was claimed to be Europe’s first 5G NR video call, completed over the TIM network using millimeter wave. It made use of a Qualcomm modem and some Ericsson kit. The demo is being positioned as ‘a new milestone that will soon lead to the commercial use of 5G mmWave technology in Europe.’

“When we started to define the strategy and the development plans for 5G, we immediately realized that such a massive challenge could not be faced without the support of a wide range of partners committed to the same goal,” said Mario Di Mauro, Chief Strategy, Innovation & Customer Experience Officer at TIM.

“We therefore proposed Qualcomm Technologies set up a place where work on the new 5G services and every business idea could find a quick realization thanks to the support of leading international technology players, innovative partners and start-ups from the local and national ecosystem.”

“Qualcomm Technologies is very excited to be part of this initiative and we would like to congratulate TIM on the significant momentum they have achieved in a short time with the Hub,” said Enrico Salvatori, President Qualcomm EMEA. “A great example of innovation is today’s demo showing the first 5G mmWave mobile smartphone form-factor mobile test device powered by the Snapdragon X50 5G modem connecting to Ericsson 5G Radio Access Network.

“We are very pleased to be part of the team helping to bring 5G to commercial reality in Italy in 2019 and also to realizing the vision of the Hub. 5G is so much more than new devices and smartphones and it will provide significant growth opportunities in new sectors. The Hub provides TIM with a strong platform to leverage the benefits of 5G to a whole host of new customers and industries.”

We’ll leave it at that for now, but we shot a bunch of video interviews while we were there so keep an eye out for those in the coming days. We can also recommend the Farina Kitchen pizza restaurant, which features a proper wood fired oven and does a very naughty fried pizza starter. Here’s a shot of the 5G call taking place.

TIM video call

FCC sets the rules for third mmWave auction

The FCC has unveiled the rules for the next mmWave auction, set to take place in the second half of 2019, for airwaves in the 37 GHz, 39 GHz and 47 GHz spectrum bands.

This will be the third mmWave auction to take place in the US, with the scrap for 28 GHz band spectrum currently underway, and the 24 GHz band auction to follow. While there are numerous different rules which will inevitably lead to squabbling, this is also the second incentive-based auction from the FCC, as the agency looks to promote contiguous blocks of spectrum.

To ensure this is a smooth process the block size will be increased to 100 megahertz across all three spectrum bands, while existing license holders will be afforded the opportunity to ‘rationalise’ their existing holdings. Whether anyone actually chooses to relinquish their assets during this process remains to be seen, though budget has been made available for compensation.

As with most other auctions, this one will take place over two phases. The first will be the pay-to-play section, before moving onto the allocation of specific spectrum.

“Pushing more spectrum into the commercial marketplace is a key component of our 5G FAST plan to maintain American leadership in the next generation of wireless connectivity,” said FCC Chairman Ajit Pai.

“Currently, we’re conducting an auction of 28 GHz band spectrum, to be followed by a 24 GHz band auction. And today, we are taking a critical step towards holding an auction of the Upper 37, 39, and 47 GHz bands in 2019. These and other steps will help us stay ahead of the spectrum curve and allow wireless innovation to thrive on our shores.”

While mmWave has been a very consistent buzzword for the telco industry over the last couple of years, industry lobby group GSMA feels there is a very good reason for this.

In its latest report, the GSMA suggests unlocking the right spectrum to deliver innovative 5G services across different industry verticals could add $565 billion to global GDP and $152 billion in tax revenue from 2020 to 2034. For the GSMA, it’s not just about faster, bigger and better, but about delivering services which the telcos are not able to offer today. mmWave is of course crucial to ensuring the 5G jigsaw all fits together appropriately.

“The global mobile ecosystem knows how to make spectrum work to deliver a better future,” said Brett Tarnutzer, Head of Spectrum at the GSMA.

“Mobile operators have a history of maximising the impact of our spectrum resources and no one else has done more to transform spectrum allocations into services that are changing people’s lives. Planning spectrum is essential to enable the highest 5G performance and government backing for mmWave mobile spectrum at WRC-19 will unlock the greatest value from 5G deployments for their citizens.”

UK’s slowest street gets 0.14 Mbps, but whose fault is it actually?

New research from uSwitch suggests the slowest broadband in the UK is 0.14 Mbps, while 5% of the UK are not able to reach 5 Mbps, though this could be their own fault.

With the government aiming to have every premises able to get 10 Mbps broadband speeds before too long, news such as this will come as a worry. But, to be fair to the government and the telcos, there is only so much that can be done. Those who choose to have slow broadband cannot complain when they have slow broadband.

Broadband can be somewhat of a postcode lottery, though the research suggests that 35% of those individuals who are on the slowest streets do have superfast broadband services available to them. There will be a variety of reasons for not connecting their home to a faster broadband line, but there aren’t many people left to blame in this situation. You can lead a horse to water, but you can’t make it drink.

“Recent Ofcom research has found that the average household is doubling its data consumption every two years, be it watching online video or accessing government services, and so adequate broadband is swiftly becoming vital,” said Jeremy Chelot, CEO of Community Fibre.

The slowest street for broadband across the UK was Greenmeadows Park in Bamfurlong, Gloucestershire, though this is one of the streets which did not have access to superfast broadband. Poplar Avenue in Oldham and Chesham Road in Wilmslow collect second and third place at the slow end of the table, with respective speeds of 0.221 Mbps and 0.249 Mbps, but in both of these cases superfast broadband is available.

Looking at the prices, for Poplar Avenue customers could get a Onestream Fibre Broadband deal, offering speeds of 38 Mbps for £19.95 per month, while TalkTalk offers 36 Mbps for £22.50 per month. The same deals are available in Chesham Road along with a host of others. If these prices are too high, there were also several other, lower priced, options for 11 Mbps contracts.

The telcos and government are clearly not blameless in many situations where connectivity is poor, but in some cases you have to question what more can be done. The service is available and affordable, but the residents are not plugging in.

Europe is losing in the race to secure digital riches – DT CEO

Despite politicians around the world declaring the importance of technology and insisting their nation is one of the world leaders in digital, Deutsche Telekom CEO Tim Hottges does not believe Europe is competing with the US and Asia.

This might seem like somewhat of a bold statement, but it is entirely true. The US, led by the internet players of Silicon Valley, has dominated the consumer technology world, while China’s and Japan’s heavyweight industries have conquered the industrialised segments. Europe might have a few shining lights but is largely left to collect the scraps once the bigger boys have finished feasting on the bonanza.

“Europe lost the first half of the digitalisation battle,” said Hottges, speaking at Orange’s Show Hello. “The second half of the battle is about data, the cloud and the AI-based services.”

In all fairness to the continent, there has been the odd glimmer of hope. Spotify emerged from Sweden, Google’s DeepMind was spun out of Oxford University, while Nokia and Ericsson are reconfirming their place in the world. There is occasionally the odd suggestion that Europe has the potential to offer something to the global technology conversation.

What has been achieved so far cannot be undone. The US and Asia are dominant in the technology world and Europe will have to accept its place in the pecking order. That said, lessons must be learnt to ensure the next wave of opportunity does not pass the continent by. A new world order is being written as we speak, and it is being written in binary.

If Europe is to generate momentum through the AI-orientated economy, it will have to bolster the workforce, create the right regulatory landscape (a common moan from the DT boss), but also make sure the raw materials are available. If data is cash, Europeans are paupers.

As it stands, less than 4% of the world’s data is stored in the European market, according to Hottges. This is the raw material required to create and train complex, AI-driven algorithms and business models. If European data is constantly being exported to other continents, other companies and economies will feel the benefits. More of an effort needs to be made to ensure the right conditions are in place to succeed.

Conveniently, the data collected through Orange’s and DT’s new smart speaker ecosystem will be retained within the borders of the European Union. There need to be more examples like this, forcing partners to comply with data residency requirements, as opposed to taking the easy route and whisking information off to far away corners of the world.

Another interesting statistic to consider is the number of qualified developers in Europe. Recent research from Atomico claims there are currently 5.7 million developers across the continent, up 200,000 over the last 12 months, compared to 4.4 million in the US. Everyone talks about the skills gap, though it seems Europe is in a better position than the US if you look at the number of professional developers alone.

Europe has lost the first skirmishes of the digital economy, and to be fair, the fight wasn’t even close. However, the cloud-oriented, intelligent world of tomorrow offers plenty more opportunities.