Making Sense of the Telco Cloud

In recent years the cloudification of communication networks, or “telco cloud”, has become a byword for telecom modernisation. This Telecoms.com Intelligence Monthly Briefing aims to analyse what telcos’ transition to cloud means for the stakeholders in the telecom and cloud ecosystems. Before exploring the nooks and crannies of telco cloud, however, it is worthwhile first taking an elevated view of cloud native in general. On one hand, telco cloud is a subset of the overall cloud native landscape; on the other, telco cloud almost sounds like an oxymoron. Telecom operators’ monolithic networks and cloud architecture are often seen as two different species, but such impressions are wrong.

(Here we are sharing the opening section of this Telecoms.com Intelligence special briefing, which looks into how telco cloud is changing both the industry landscape and operator strategies.

The full version of the report is available to download for free here.)

What cloud native is, and why we need it

“Cloud native” has been a buzzword for a couple of years, though, as with many other buzzwords, different people often mean different things when they use the same term. As the authors of a recently published Microsoft ebook quipped, ask ten colleagues to define cloud native, and there’s a good chance you’ll get eight different answers. (Rob Vettor, Steve “ardalis” Smith: Architecting Cloud Native .NET Applications for Azure, preview edition, April 2020)

Here are a couple of “cloud native” definitions that more or less agree with each other, though with different emphases.

The Cloud Native Computing Foundation (CNCF), an industry body with over 500 member organisations from different sectors, defines cloud native as “computing (that) uses an open source software stack to deploy applications as microservices, packaging each part into its own container, and dynamically orchestrating those containers to optimize resource utilization.”

Gabriel Brown, an analyst from Heavy Reading, has a largely similar definition for cloud native, though he puts it more succinctly. For him, cloud native means “containerized micro-services deployed on bare metal and managed by Kubernetes”, the de facto standard of container management.
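To make those definitions a little more concrete, below is a minimal sketch of the kind of stateless, containerisable microservice they describe, written with only the Python standard library. The service name, port and endpoints are illustrative assumptions rather than anything from a specific vendor’s stack; in a cloud native deployment this process would be packaged into a container image and health-checked, restarted and scaled by Kubernetes rather than by hand.

```python
# Minimal illustrative microservice: stateless, with a health probe an
# orchestrator such as Kubernetes can poll to decide whether to restart it.
# The port, paths and payloads are hypothetical examples.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CatalogueHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            # Liveness/readiness probe endpoint.
            self._send(200, {"status": "ok"})
        elif self.path == "/catalogue":
            # One small piece of business logic behind an independently
            # deployable API, rather than part of a monolith.
            self._send(200, {"items": ["5G data plan", "fibre broadband"]})
        else:
            self._send(404, {"error": "not found"})

    def _send(self, code, payload):
        body = json.dumps(payload).encode("utf-8")
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # In a container the port would normally come from configuration,
    # so the same image runs unchanged in any environment.
    HTTPServer(("0.0.0.0", 8080), CatalogueHandler).serve_forever()
```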

Although cloud native has a strong inclination towards containers, or containerised services, it is not just about containers. An important element of cloud native computing is its deployment mode, built around DevOps. This is duly stressed by Omdia, a research firm, which describes cloud native this way: “the first foundation is to use agile methodologies in development, building on this with DevOps adoption across IT and, ideally, in the organization as well, and using microservices software architecture, with deployment on the cloud (wherever it is, on-premises or public).”

Some would argue the continuous nature of DevOps is as important to cloud native as the infrastructure and containerised services. Red Hat, an IBM subsidiary and one of the leading cloud native vendors and champions of DevOps practices, sees a number of common themes in cloud native, including “heavily virtualized, software-defined, highly resilient infrastructure, allowing telcos to add services more quickly and centrally manage their resources.”

These themes are aligned with Telecoms.com Intelligence’s understanding of cloud native, and this report will discuss cloud native and telco cloud along these lines. (A full Q&A with Azhar Sayeed, Chief Architect, Service Provider at Red Hat can be found at the end of this report.)

The main benefits of cloud native computing are speed, agility, and scalability. As CNCF spells it out, “cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach. These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.”

To adapt such thinking to the telecom industry, the gains from migrating to cloud native are primarily a reflection of, and driven by, the increasing convergence between the network and IT domains. The first candidate domain where cloud technology can deliver vast improvements, and to a certain degree replace heavy infrastructure, is the telcos’ own IT systems, including the network-facing Operational Support Systems (OSS) and customer-facing Business Support Systems (BSS).

But the IT cloud alone is far from the full benefit telcos can derive from the migration to cloud native. The rest of this report will discuss how telcos can and do embark on the journey to cloud native, as a means to deliver true business benefits, through improved speed, agility, and scalability, to their own networks and their customers.

The rest of the report includes these sections:

  • The many stratifications of telco cloud
  • Clouds gathering on telcos
  • What we can expect to see on the telco cloud skyline
  • Telco cloud openness leads to agility and savings — Q&A with Azhar Sayeed, Chief Architect, Service Provider, Red Hat
  • Additional Resources

The full version of the report is available to download for free here.

Finland joins the quest for quantum computing strengths

The Technical Research Centre of Finland is going to build the country’s first quantum computer, joining a growing European contingent competing at the forefront of next-generation computing technology.

VTT, Finland’s state-owned Technical Research Centre (Teknologian tutkimuskeskus VTT Oy) announced that it will design and build the country’s first quantum computer, in partnership with “progressive Finnish companies from a variety of sectors”, aiming to “bolster Finland’s and Europe’s competitiveness” in this cutting-edge technology.

“In the future, we’ll encounter challenges that cannot be met using current methods. Quantum computing will play an important role in solving these kinds of problems,” said Antti Vasara, CEO of VTT. Referring to the country’s challenge of post-COVID-19 recovery, Vasara said “it’s now even more important than ever to make investments in innovation and future technologies that will create demand for Finnish companies’ products and services.”

The multi-year project, with a total cost estimated at about €20-25 million, will run in phases. The first checkpoint will be about a year from now, when VTT aims to “get a minimum five-qubit quantum computer in working order”, it said in the press release. The qubit, or “quantum bit”, is the basic unit of information in quantum computing, analogous to the binary digit, or “bit”, in classical computing.

In all fairness, this is a modest target on a modest budget. To put the 5-qubit target into perspective, by late last year Google claimed that its quantum computer had achieved 53-qubit computing power. It could perform a task in 200 seconds that would take Summit, one of IBM’s supercomputers, 2.5 days by IBM’s own admission. At the time of writing, VTT had not responded to Telecoms.com’s question on the project’s ultimate target.
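As a back-of-the-envelope way to see what those qubit counts mean (a comparison of state-space size only, not of like-for-like performance, since qubit quality and error rates matter at least as much as raw count):

```latex
% An n-qubit register is described by 2^n complex amplitudes.
\[
  5\ \text{qubits} \;\rightarrow\; 2^{5} = 32\ \text{amplitudes},
  \qquad
  53\ \text{qubits} \;\rightarrow\; 2^{53} \approx 9.0 \times 10^{15}\ \text{amplitudes}.
\]
```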

When it comes to budget, the VTT amount is easily dwarfed by more ambitious projects. The most advanced quantum computers in the world are developed and run by leading American technology companies and academic institutions, for example MIT, IBM, and Google, but other parts of the world are quickly building their own facilities, including businesses and universities in Japan, India, China, and Europe. One recent high-profile case is IBM’s decision to build Europe’s first commercial quantum computer at Germany’s state-backed Fraunhofer research institute near Stuttgart.

In addition to getting closer to, and better serving, the European markets in the future, IBM’s decision to build a quantum computer in Europe also has to do with GDPR requirements. While European businesses can use IBM’s quantum computers located in the US through the cloud, they may hesitate to send user data outside of the EU. The Fraunhofer project has been personally endorsed by Angela Merkel, the German Chancellor. The federal government has pledged €650 million of investment in quantum computing, though not in the Fraunhofer project alone.

When it comes to quantum computing applications in the communications industry, there are at least two areas where it can have a strong impact. The first is security: quantum computing will enable new modes of cryptography. The second is new materials. Daimler, the carmaker, has already used IBM’s quantum computers to design new batteries for its electric cars by simulating the complex molecule-level chemistry inside the battery cells. On top of batteries, another new-materials research topic in the communications industry is finding a replacement for silicon as a semiconductor at extremely high radio frequencies.

Despite its modest scope, the VTT undertaking is significant. Not only does it give Finland the right to boast of being the first Nordic country to build its own quantum computer, but the success of the project would also “provide Finland with an exceptional level of capabilities in both research and technology”. Faced with the worst economic crisis since the collapse of the Soviet Union, the Nordic nation is looking to technology breakthroughs for sustainable revival and long-term competitiveness. The quantum computing capability this project delivers, limited as it is by its scope, may not amount to a pursuit of supremacy, but it may at least buy Finland a seat at the table.

IBM still searching for its place as it targets the edge

IBM’s struggles over the last decade have been well documented, but with pivot following pivot, the $34 billion Red Hat acquisition is beginning to make waves.

Today marks the launch of IBM’s Think Digital event, a virtual conference to discuss everything and anything IBM. There will of course be numerous announcements across the extravaganza, but the Red Hat-focused boasts are some of the most interesting.

“In today’s uncertain environment, our clients are looking to differentiate themselves by creating more innovative, responsive user experiences that are adaptive and continuously available – from the data centre all the way out to the edge,” said Denis Kennelly, GM of IBM Hybrid Cloud.

“IBM is helping clients unlock the full potential of edge computing and 5G with hybrid multi-cloud offerings that bring together Red Hat OpenShift and our industry expertise to address enterprise needs in a way no other company can.”

Multi-cloud is a term which will become increasingly important over the next few years, as enterprise organisations aim to realise the power of cloud computing, marrying the benefits of ‘best in breed’ with rationalisation projects to improve operational efficiencies. On top of these complex operational challenges, the edge is becoming a much more prominent proposition for all in the ecosystem.

IBM will now offer several new services and products, built on Red Hat OpenShift, to enable companies in this new digital environment. The Edge Application Manager and Network Cloud Manager, for example, take IBM into new segments in its ongoing pursuit of relevance.

“IBM’s new version of Edge Application Manager and introduction of Telco Cloud Manager is part of IBM’s hybrid cloud strategy which is now extending through telcos to the edge,” said Nick McQuire, VP of Enterprise Research. “The moves essentially put IBM’s marker down on edge computing which represents a new era of computing outside the data centre and the public cloud.

“With the emergence of 5G and low-latency applications which are acting as accelerators, telcos too must transform, so IBM is hoping that its relationships with telcos globally, through Red Hat and its services arm, will make it better placed than the hyper-scalers to take advantage of this shift.”

As McQuire points out, this is an effort to further differentiate the business, but also evolve the company to ensure it is operating in more sustainable markets in the future. The issue over the last few decades has not only been IBM’s relevance to market trends, but also its ability to compete in the new segments.

In January 2018, IBM reversed a trend which had been haunting the management team: that earnings call offered investors the first period of year-on-year revenue growth for almost six years. Big Blue had been on the decline, but it seemed to be turning the business around, with its cloud computing unit and AI proposition Watson leading the charge.

However, the business failed to accelerate out of the turn.

In the cloud computing segment, IBM failed to keep pace with the market leaders, falling away as Amazon Web Services, Microsoft Azure and Google Cloud proved they were in a different league to the rest. And in AI, the segment has not boomed as some might have anticipated, though IBM still has one of the world’s leading technologies in Watson.

With these two ventures failing to live up to their lofty promise, although they did push the IBM business back into growth, Red Hat is supposed to offer an alternative play in the enterprise connectivity and IT markets.

What is worth noting is that AWS, Microsoft and Google, as well as other cloud competitors, have made moves into the enterprise edge market as well. With the emergence of 5G, the cloud industry could well be ready to move into the next phase of development, but the question is where does IBM fit in?

IBM has dipped its fingers in numerous pies, but Red Hat is a definitive move ahead of a new surge in the cloud market. Companies don’t spend $34 billion on organisations which are merely going to supplement existing offerings; this is another material shift in IBM’s operations as it continues to search for its place in the digital ecosystem.

AI and edge computing replace the Pilgrims on the new Mayflower

IBM’s AI and edge computing technologies are going to guide a crewless boat along the same route the Pilgrims charted 400 years ago.

I was at an IBM analyst event when I met Don Scott, CTO of Marine Ai, a venture that is working on an autonomous boat, named “Mayflower”, that will sail from Plymouth, England, where Marine Ai is based, to Plymouth, Massachusetts in September this year, 400 years after the original ship carried the Pilgrims across the Atlantic Ocean.

My first question was why IBM, considering companies like Google would probably have more expertise in autonomous driving. The problem with Google seems to be two-fold. On one hand, Google demands that all new “knowledge” developed with its AI tools should be owned by Google. On the other hand, Google’s AI tools are not transparent enough to satisfy the maritime regulators.

By contrast, Scott said, IBM responded to his request with enthusiasm. In addition to taking the opposite position to Google on those two pain points, IBM is helping develop the boat’s control system on its Power Systems servers. Meanwhile, other partners in the project, including the University of Plymouth, one of the world’s leading research institutes for marine science, and the non-profit organisation ProMare, are training IBM’s PowerAI engine with real data from the ocean, for example to recognise other ships, whales, and floating debris.

The boat will be equipped with an edge computing module that uses the output of the AI engine to make onboard decisions, similar to the way autonomous cars make them on the road. What is different is that, while autonomous cars are typically always online (this is one of the leading use cases for 5G, for example), connectivity to the internet once the boat sails out to sea will be sporadic at best. It will use some satellite communication, but the majority of the computing will be done “on the edge”.
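As a purely illustrative sketch, and not Marine Ai’s or IBM’s actual control software, the store-and-forward pattern described above can be expressed in a few lines of Python: decisions are taken locally on every sensor reading, and the results are queued until a satellite link happens to be available. All of the function names are hypothetical placeholders.

```python
# Illustrative edge-computing loop: act on local data immediately,
# sync with the cloud only when a (sporadic) satellite link is available.
import random
import time
from collections import deque

def read_sensors():
    """Stand-in for camera/radar/GPS input on the vessel."""
    return {"obstacle_distance_m": random.uniform(5, 500)}

def onboard_decision(frame):
    """Local inference: alter course if an obstacle is too close."""
    return "alter_course" if frame["obstacle_distance_m"] < 50 else "hold_course"

def satellite_link_up():
    """Connectivity at sea is sporadic at best; model it as occasional."""
    return random.random() < 0.1

pending = deque()            # results waiting to be uploaded
for _ in range(20):          # a handful of iterations for demonstration
    frame = read_sensors()
    action = onboard_decision(frame)   # decided on the edge, no cloud round-trip
    pending.append({"frame": frame, "action": action})
    if satellite_link_up():
        while pending:
            record = pending.popleft()
            # In reality this would be an upload over the satellite link.
            print("synced:", record["action"])
    time.sleep(0.01)
```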

The motor power of the boat, which is made of aluminium and composite materials and measures 15 metres by 6 metres, will come from onboard batteries, charged by solar power with a back-up biofuel generator. When I asked him what the boat can do in addition to charting ocean geography, Scott said the first mission would include measuring the level of microplastics in the sea, which has increasingly become a big concern for those of us who worry about the environment. In the future, similar vessels may even be used to clean the ocean.

I was fully aware that Marine Ai was present at the event because it is a showcase for IBM technologies. However, I could not deny that the project fascinated me in its own right. If edge computing and AI, as well as cloud computing and satellite communication, are pushing the boundaries of what they can do, this should count as one such case.

The 5G hype is for real

The fast rollout of 5G services in different parts of the world has caused many in the telecoms industry to question how much of the 5G hype is for real. Telecoms.com and IBM recently conducted an industry survey that aimed to answer that question with first-hand findings. We trust you will enjoy seeing the results of the survey and reading our analysis of them in this report. Spoiler alert: it’s real, very real.

Please fill in the short form below to receive a copy of this survey report.

Google claims quantum computing breakthrough, IBM disagrees

Google says it has achieved ‘quantum supremacy’, as its Sycamore chip performed a calculation, which would have taken the world’s fastest supercomputer 10,000 years, in 200 seconds.

It seems quite a remarkable upgrade, but this is the potential of quantum computing. This is not a step-change in technology, but a revolution on the horizon.

Here, Google is claiming its 53-qubit computer performed a task in 200 seconds which would have taken Summit, a supercomputer IBM built for the Department of Energy, 10,000 years. That said, IBM is disputing the claim, suggesting Google is massively exaggerating how long it would take Summit to complete the same task. After some tweaks, IBM has said it would take Summit 2.5 days.
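For a sense of scale, simple arithmetic on the publicly stated figures puts the two competing comparisons at roughly:

```latex
\[
  \frac{10{,}000\ \text{years}}{200\ \text{s}}
    \approx \frac{3.15 \times 10^{11}\ \text{s}}{200\ \text{s}}
    \approx 1.6 \times 10^{9}\times,
  \qquad
  \frac{2.5\ \text{days}}{200\ \text{s}}
    = \frac{216{,}000\ \text{s}}{200\ \text{s}}
    = 1{,}080\times.
\]
```

Even on IBM’s more conservative estimate, Google’s chip is around a thousand times faster on this particular task; the dispute is over whether that clears the bar for ‘supremacy’.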

Despite the potential for exaggeration, this is still a breakthrough for Google.

For the moment, it seems to be nothing more than a humble brag. Like concept cars at the Tokyo Motor Show, the purpose is to inflate the ego of Google and create a perception of market leadership in the quantum computing world. Although this is an area which could be critically important for the digital economy in years to come, the technology is years away from being commercially viable.

Nonetheless, this is an impressive feat performed by the team. It demonstrates the value of persisting with quantum computing and will have forward-thinking, innovative data scientists around the world dreaming up possible applications of such power.

At the most basic level, quantum computing is a new model of how to build a computer. The original concept is generally attributed to David Deutsch of Oxford University, who, at a conference in 1984, pondered the possibility of designing a computer based exclusively on quantum rules. After he published a paper a few months later, which you can see here if you are brave enough, the race to create a quantum computer began.

Today’s ‘classical’ computers store information in binary, where each bit is either on or off. Quantum computers use qubits, which can be on or off, as well as both on and off at the same time. This might sound incredibly complicated, but the best way to explain it is to imagine a sphere.

In classical computing, a bit can be represented by the poles of the sphere, with zero at the south pole and one at the north, but in quantum computing any point on the sphere can represent a state. This is achieved through a concept called superposition, which means qubits can represent a one or a zero, or both at the same time. For example, two qubits in a single superposition could represent four different scenarios.
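In the standard textbook notation (a general formulation, not anything specific to Google’s Sycamore chip), a single qubit and a pair of qubits are written as:

```latex
% One qubit: a weighted combination of the two classical states,
% with the weights normalised so the probabilities sum to one.
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^{2} + |\beta|^{2} = 1.
\]
% Two qubits: one amplitude for each of the four classical scenarios;
% n qubits carry 2^n amplitudes in the same way.
\[
  |\psi\rangle = a_{00}|00\rangle + a_{01}|01\rangle + a_{10}|10\rangle + a_{11}|11\rangle.
\]
```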

Regardless of whether you understand the theoretical science behind quantum computing, the important takeaway is that it will allow computers to store, analyse and transfer information much more efficiently. As you can see from the claim Google has made, completing a calculation in 200 seconds as opposed to 10,000 years is a considerable upgrade.

This achievement can be described as ‘quantum supremacy’, in that the chip has enabled a calculation which is realistically impossible on classical computing platforms. From IBM’s perspective, this is a step forward, but not ‘quantum supremacy’, if its computer can complete the same task in 2.5 days.

If this still sounds baffling and overly complex, that is because quantum computing is a field of technology only the tiniest fraction of the world’s population understands. This is cutting-edge science.

“In many ways, the exercise of building a quantum computer is one long lesson in everything we don’t yet understand about the world around us,” said Google CEO Sundar Pichai.

“While the universe operates fundamentally at a quantum level, human beings don’t experience it that way. In fact, many principles of quantum mechanics directly contradict our surface level observations about nature. Yet the properties of quantum mechanics hold enormous potential for computing.”

What is worth taking away here is that understanding the science is not at all important once it has been figured out by people far more intelligent. All normal people need to understand is that this is a technology that will enable significant breakthroughs in the future.

This might sound patronising, but it is not supposed to. Your correspondent does not understand the mechanics of the combustion engine but does understand the journey between London and South Wales is significantly faster by car than on horse.

But what could these breakthroughs actually be?

On the security side, although quantum computing could crack the end-to-end encryption software which is considered unbreakable today, it could theoretically enable the creation of hack-proof replacements.

In artificial intelligence, machine learning is a perfect area for quantum computing to be applied. The idea of machine learning is to collect data, analyse said data and provide incremental improvements to the algorithms which are being integrated into software. Analysing the data and applying the lessons learned takes time, which could be dramatically decreased with the introduction of quantum computing.

Looking at the pharmaceutical industry, in order to create new drugs, chemists need to understand the interactions between various molecules, proteins and chemicals to see whether medicines will cure diseases or introduce dangerous side-effects. Due to the eye-watering number of combinations, this takes an extraordinary amount of time. Quantum computing could significantly reduce the time it takes to understand these interactions, and could also be combined with analysis of an individual’s genetic make-up to create personalised medicines.

These are three examples of how quantum computing could be applied, but there are dozens more. Weather forecasting could be improved, climate change models could be more accurate, or traffic could be better managed in city centres. As soon as the tools are available, innovators will come up with the ideas of how to best use the technology, probably coming up with solutions to challenges that do not exist today.

Leading this revolutionary approach to computing is incredibly important for any company which wants to dominate the cloud industry in the futuristic digital economy, which is perhaps the reason IBM felt it was necessary to dampen Google’s celebrations.

“Building quantum systems is a feat of science and engineering and benchmarking them is a formidable challenge,” IBM said on its own blog.

“Google’s experiment is an excellent demonstration of the progress in superconducting-based quantum computing, showing state-of-the-art gate fidelities on a 53-qubit device, but it should not be viewed as proof that quantum computers are “supreme” over classical computers.”

Google measured the success of its own quantum computer against IBM’s Summit, a supercomputer which is believed to be the most powerful in the world. By altering the way Summit approaches the same calculation Google used, IBM suggests Summit could come to the same conclusion in 2.5 days rather than 10,000 years.

Google still has the fastest machine, but according to IBM the speed increase does not deserve the title of ‘quantum supremacy’. It might not be practical to ask a computer to process a calculation for 2.5 days, but it is not impossible, therefore the milestone has not been reached.

What is worth noting is that a pinch of salt should be taken with both the Google and IBM claims. These are companies who are attempting to gain the edge and undermine a direct rival. There is probably some truth and exaggeration to both statements made.

And despite this being a remarkable breakthrough for Google, it is of course way too early to get excited about the applications.

Not only is quantum computing still completely unaffordable for almost every application data scientists are dreaming about today, but the calculation itself was very simple. Drug synthesis, or traffic management where every traffic signal is attempting to understand the route of every car in a major city, are much more complicated problems.

Scaling these technologies so they are affordable and feasible for commercial applications is still likely to be years away, but as Bill Gates famously stated: “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”

Age-discrimination lawsuit rumbles on as IBM accused of ‘trendy’ objectives

In an on-going age-discrimination lawsuit, a former Big Blue executive has suggested the firm fired old-timers in pursuit of a ‘trendy’ and ‘cool’ image, so it could compete with the likes of Google and Amazon.

This saga has been quietly building in the background for quite some time. The first complaints were presented to the courts in 2018 by Shannon Liss-Riordan, with the ex-IBM employees suggesting they were culled for no reason aside from the fact they were too old. This is of course a big no-no.

The latest revelation from the courts comes from Alan Wild, the former VP for Human Resources, Employee Relations and Engagement. According to Bloomberg, Wild is suggesting Big Blue has fired between 50,000 and 100,000 employees over the last five years in pursuit of creating an image which will attract the younger generations.

This is where IBM has suffered over the last couple of years. It might have been one of the most successful technology companies of the 20th century, but it certainly doesn’t maintain this perception in the 21st. When graduates leave MIT or Stanford, first port-of-call is likely to be firms like Google, Facebook, Tesla, Twitter, Amazon, or a host of other forward-looking technology giants who have dominated headlines.

Big Blue might be on the rise currently, but this was only after 23 consecutive quarters of year-on-year revenue decline. This company went through a very painful process of realigning attention from the unpopular, unprofitable and declining legacy operations, through to its future-proof ‘Strategic Imperatives’ unit.

In Wild’s evidence presented to the courts, currently in sealed documents, IBM is said to have begun correcting its ‘seniority mix’ in 2014, firing older employees and hiring millennials. Perhaps part of this was to ensure its employees were suitably tooled to tackle the new challenges presented by the digital economy, but if you believe Wild’s alleged comments, the purpose was to have a compounding effect, attracting more fresh graduates into its midst.

According to an extensive investigation launched by ProPublica last March, IBM had culled 20,000 US employees aged at least 40, a number representing 60% of its total cuts across the period. Although the claims are unconfirmed to date, IBM allegedly told some staff their skills were no longer needed, before hiring them back as contractors for less cash, and encouraged older employees to apply for new roles internally while secretly telling hiring managers not to hire them.

These are very serious accusations from both ProPublica and Wild, and there are plenty of other testimonies which back up the claims.

In January, Catherine Rodgers presented her own evidence to a court in New York. Rodgers was previously a VP in the Global Engagement Office, while also serving as the most senior executive in Nevada. Rodgers was dismissed from IBM, after almost 30 years of service, in 2017 aged 62.

In her affidavit, Rodgers claimed she raised concerns to Steve Welsh, IBM’s GM of Global Technology Services, that the firm was ‘engaging in systematic age discrimination by utilizing several methods to eliminate thousands of its employees’. As part of her role, Rodgers had access to lists of individuals who were being cut as part of ‘Resource Action’ initiatives in her business group, noting that all were over the age of 50 while younger employees were not impacted whatsoever.

When she spoke to managers in other groups, many of whose workforces were made up of employees from younger generations, she found the layoffs were not as severe. Considering Rodgers’ division was exceeding targets and running under budget, the request from more senior executives caught her by surprise.

“IBM’s upper management encouraged us to inform the employees who were being laid off that they should use IBM’s internal hiring platform to apply for other jobs within the company, but at the same time, IBM implemented barriers to those employees actually being hired for openings,” Rodgers said in her affidavit.

“For example, if a manager wanted to hire someone who had been laid off, there was a multi-layer approval process. (This multi-layer approval process would not apply if a manager wanted to hire someone who had applied from outside IBM.) Even as Vice President, I could not make such a hire without approval by the Global Resource Board and IBM’s upper management.”

IBM has consistently denied reports that it targeted older employees for lay-offs, pointing towards the number of people it hires each year. This statement does not prove innocence by any stretch of the imagination; it simply confirms the firm has been hiring new employees to replace the culled ones. In fairness to IBM, it has not been proven that the firm targeted older employees and embarked on a campaign of age discrimination.

What is worth noting is that the number of IBM employees globally has been decreasing steadily over the last few years. Statista estimates IBM had 350,600 employees at the end of 2018, down from 366,600 in 2017, 380,300 in 2016 and 377,700 in 2015. In 2012, IBM employed 434,250 people across the world. These numbers do not prove or disprove the allegations but are good to bear in mind.

IBM has managed a significant turnaround over the last few years. From irrelevance to a cloud business which is almost dining at the top table. It is threatening to compete with the likes of Amazon, Microsoft, Google and Alibaba, though it still has a lot of work to do. That said, it is in a much more comfortable position than previous years. The question is whether it has managed the process legitimately.

206 days: IBM’s estimate on how long it takes to find a security breach

A new study from IBM suggests it takes 206 days on average for companies to discover a breach and another 73 to fix it.

With cybercriminals becoming savvier and assaults becoming much more complex, it seems many companies will have been exposed for months without even realising it. The average cost to the business could be as much as $3.92 million, with the firm feeling the impact over three-year periods.

“Cybercrime represents big money for cybercriminals, and unfortunately that equates to significant losses for businesses,” said Wendi Whitmore, Global Lead for IBM X-Force Incident Response and Intelligence Services.

“With organizations facing the loss or theft of over 11.7 billion records in the past 3 years alone, companies need to be aware of the full financial impact that a data breach can have on their bottom line – and focus on how they can reduce these costs.”

On average, 67% of the financial impact of a security breach is felt within the first 12 months, 22% in the second year and 11% in the third year after the incident. The long-tail costs are felt more painfully in highly regulated industries such as healthcare, financial services, energy and pharmaceuticals. Telecoms was not mentioned specifically, but we suspect it will also be among the more impacted industries.
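Applying that split to the headline average gives a rough year-by-year picture; combining the two figures this way is our own back-of-the-envelope illustration rather than a breakdown quoted by IBM:

```python
# Rough year-by-year split of the average breach cost cited above.
# The $3.92m total and the 67/22/11 percentages come from the article.
average_cost = 3.92e6
split = {"year 1": 0.67, "year 2": 0.22, "year 3": 0.11}

for year, share in split.items():
    print(f"{year}: ${average_cost * share / 1e6:.2f}m")
# year 1: $2.63m, year 2: $0.86m, year 3: $0.43m
```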

What you have to bear in mind is that this is a security vendor stoking the fire. The dangers of inadequate security in the digital era are very well-known, but you have to take the estimates with a pinch of salt here; it is in the IBM interest for companies to be in heightened states of fear and paranoia.

Looking at the time it takes to detect a breach, this is quite a remarkable number and perhaps demonstrates the retrospective approach many firms have taken to security over the last few years. These attitudes are slowly changing and security is moving up the agenda, though this does not compensate for the years of inadequacy.

The IBM report suggests the lifecycle of a breach is 279 days, not accounting for all the regulatory headaches which would follow. That said, those who are able to detect and contain a breach within 200 days are $1.2 million better off when it comes to the financial impact.

Here are a few of the more interesting stats from the report:

  • Data breaches cost companies around $150 per record that was lost or stolen
  • Security automation technologies could potentially halve the financial impact of a breach
  • Extensive use of encryption can reduce total cost of a breach by $360,000
  • Breaches originating from a third-party cost companies $370,000 more than average
  • Average cost of a breach in the US is $8.19 million, double the worldwide average
  • Breaches in the healthcare industry are the most expensive
  • Companies with fewer than 500 employees suffered losses of more than $2.5 million on average

AT&T gets Microsoft and IBM to help with its cloud homework

US telco AT&T has decided it’s time to raise its cloud game and so has entered into strategic partnerships with Microsoft and IBM.

The Microsoft deal focuses on non-network applications and enables AT&T’s broader strategy of migrating most non-network workloads to the public cloud by 2024. The rationale for this is fairly standard: by moving a bunch of stuff to the public cloud AT&T will be able to better focus on its core competences, but let’s see how that plays out.

IBM will be helping AT&T Business Solutions better serve its business customers. The consulting side will modernize AT&T’s software and bring it into the IBM cloud, using Red Hat’s platform to manage it all. In return, IBM will make AT&T Business its main SDN partner and general networking best mate.

“AT&T and Microsoft are among the most committed companies to fostering technology that serves people,” said John Donovan, CEO of AT&T. “By working together on common efforts around 5G, the cloud, and AI, we will accelerate the speed of innovation and impact for our customers and our communities.”

“AT&T is at the forefront of defining how advances in technology, including 5G and edge computing, will transform every aspect of work and life,” said Satya Nadella, CEO of Microsoft. “The world’s leading companies run on our cloud, and we are delighted that AT&T chose Microsoft to accelerate its innovation. Together, we will apply the power of Azure and Microsoft 365 to transform the way AT&T’s workforce collaborates and to shape the future of media and communications for people everywhere.”

“In AT&T Business, we’re constantly evolving to better serve business customers around the globe by securely connecting them to the digital capabilities they need,” said Thaddeus Arroyo, CEO of AT&T Business. “This includes optimizing our core operations and modernizing our internal business applications to accelerate innovation. Through our collaboration with IBM, we’re adopting open, flexible, cloud technologies, that will ultimately help accelerate our business leadership.”

“Building on IBM’s 20-year relationship with AT&T, today’s agreement is another major step forward in delivering flexibility to AT&T Business so it can provide IBM and its customers with innovative services at a faster pace than ever before,” said Arvind Krishna, SVP, Cloud and Cognitive Software at IBM. “We are proud to collaborate with AT&T Business, provide the scale and performance of our global footprint of cloud data centers, and deliver a common environment on which they can build once and deploy in any one of the appropriate footprints to be faster and more agile.”

Talking of the US cloud scene, the Department of Defense is reportedly looking for someone to provide some kind of Skynet-style ‘war cloud’ in return for chucking them $10 billion of public cash. Formally known as the Joint Enterprise Defense Infrastructure (yes, JEDI), this is designed to secure military and classified information in the event of some kind of catastrophic attack, contribute to cyber warfare efforts and enable the dissemination of military intelligence to the field.

It looks like the gig will be awarded to just one provider, which has led to much jostling for position among the US cloud players. The latest word on the street is that either AWS or Microsoft will get the work, which has prompted considerable moaning from IBM and Oracle, and reported concern from President Trump, prompted by politicians apparently repaying their lobbying cash. Here’s a good summary of all that from Subverse.

IBM and Google reportedly swap morals for cash in Chinese surveillance JV

IBM and Google executives should be bracing for impact as the comet of controversy heads directly towards their offices.

Reports have emerged, via the Intercept, suggesting two of the US’ most influential and powerful technology giants have indirectly been assisting the Chinese Government with its campaign of mass-surveillance and censorship. Both will try to distance themselves from the controversy, but this could have a significant impact on both firms.

The drama here is focused on a joint venture, the OpenPower Foundation, founded in 2013 by Google and IBM and featuring members such as Red Hat, Broadcom, Mellanox, Xilinx and Rackspace. The aim of the open-ecosystem organization is to facilitate and share advances in networking, server, data storage, and processing technology.

To date, the group has been little more than another relatively uninteresting NPO, serving a niche in the industry, though one initiative is causing a stir. The OpenPower Foundation has been working with Xilinx and Chinese firm Semptian to create a new breed of chips capable of enabling computers to process incredible amounts of data. This might not seem extraordinary, though the application is where the issue has been found.

On the surface, Semptian is a relatively ordinary Chinese semiconductor business, but when you look at its most profitable division, iNext, the story becomes a lot more sinister. iNext specialises in selling equipment to the Chinese Government to enable the mass-surveillance and censorship projects which have become so infamous.

It will come as little surprise a Chinese firm is aiding the Government with its nefarious objectives, but a link to IBM and Google, as well as a host of other US firms, will have some twitching with discomfort. We can imagine the only people who are pleased at this news are the politicians who are looking to get their faces on TV by theatrically condemning the whole saga.

Let’s start with what iNext actually does before moving onto the US firms involved in the controversy. iNext works with Chinese Government agencies by providing a product called Aegis. Aegis is an interception and analysis system which has been embedded into various phone and internet networks throughout the country. This is one of the products which enables the Chinese Government to have such a close eye on the activities of its citizens.

Documentation acquired by The Intercept outlines the proposition in more detail.

“Aegis is not only the standard interception system but also the powerful analysis system with early warning and timely action capabilities. Aegis can work with all kinds of networks and 3rd party systems, from recovering, analysing, exploring, warning, early warning, locating to capturing. Aegis provides LEA with an end to end solution described as Deep Insight, Early Warning and Timely Action.”

Although the majority of this statement is corporate fluff, it does provide some insight into the way in which the technology actually works. This is an incredibly powerful surveillance system, which is capable of locating individuals through application usernames, IP addresses or phone numbers, as well as accurately tracking the location of said individuals on a real-time basis.

Perhaps one of the most worrying aspects of this system is the ‘pre-crime’ element. Although the idea of predictive analytics has been met with controversy and considerable resistance in some societies, we suspect the Chinese Government does not have the same reservations.

iNext promises this feature can help prevent crime through the introduction of an early warning system. This raises all sorts of ethical questions: even if the data estimates were accurate to five nines, can you arrest someone when they haven’t actually committed a crime? This is the sticky position Google and IBM might have found themselves in.

OpenPower has said that it was not aware of the commercial applications of the projects it manages, while its charter prevents it from getting involved. The objective of the foundation is to facilitate the progress of technology, not to act as judge and jury for its application. It’s a nice little way to keep controversy at arm’s length; inaction and negligence is seen as an appropriate defence plea.

For IBM and Google, who are noted as founding members of the OpenPower Foundation, a stance of ignorance might be enough to satisfy institutions of their innocence, but the court of public opinion could swing heavily in the other direction. An indirect tie to such nefarious activities is enough for many to pass judgment.

When it comes to IBM, the pursuit of innocence becomes a little bit trickier. IBM is directly mentioned on the Semptian website, suggesting Big Blue has been working closely with the Chinese firm for some time, though the details of this relationship are unknown for the moment.

For any of the US firms which have been mentioned here, it is not a comfortable situation to be in. Although they might be able to plead ignorance, it is quite difficult to believe. These are monstrous multi-national billion-dollar corporations, with hordes of lawyers, some of whom will be tasked with making sure the technology is not being utilised in situations which would get the firm in trouble.

Of course, this is not the first time US technology firms have found themselves on the wrong side of right. There have been numerous protests from employees of the technology giants as to how the technology is being applied in the real-world. Google is a prime example.

In April 2018, Google employees revolted over an initiative the firm was participating in with the US Government. Known as Project Maven, Google’s AI technology was used to improve the accuracy of drone strikes. As you can imagine, the Googlers were not happy at the thought of helping the US Government blow people up. Project Dragonfly was another which brought internal uproar, this time the Googlers were helping to create a version of the Google news app for China which would filter out certain stories which the Government deemed undesirable.

Most of the internet giants will plead their case, suggesting their intentions are only to advance society, but there are numerous examples of contracts and initiatives which contradict this position.

Most developers or engineers, especially the ones who work for a Silicon Valley giant, work for the highest bidder, but there is a moral line few will cross. As we’ve seen before, employees are not happy to aid governments in the business of death, surveillance or censorship, and we suspect the same storyline will play out here.

Google and IBM should be preparing themselves for significant internal and external backlash.