Samsung unveils its first 5G integrated chipset for smartphones

Samsung Electronics introduced the Exynos 980, its first 5G integrated mobile chipset, aimed at the mainstream market. Mass production will start by the end of the year.

Samsung’s 5G devices have so far relied on separate modem and application processor (APE) solutions, such as its own Exynos 9820 and Qualcomm’s Snapdragon 855 chipsets paired with the Exynos 5100 and Snapdragon X50 modems respectively. The Exynos 980 is Samsung’s first chipset to combine the 5G modem and the application processor on a single die, manufactured on an 8nm FinFET process.

“With the introduction of our 5G modem last year, Samsung has been driving the 5G revolution and paved the way towards the next step in mobility,” said Ben Hur, VP of System LSI marketing at Samsung Electronics. “With the 5G-integrated Exynos 980, Samsung is pushing to make 5G more accessible to a wider range of users and continues to lead innovation in the mobile 5G market.”

The chipset’s key specifications include:

  • Modem: supports 5G NR sub-6GHz with maximum downlink and uplink speeds of 2.55Gbps and 1.28Gbps respectively. It is also backward compatible with LTE, 3G, and 2G.
  • CPU: an octa-core design pairing two 2.2GHz Cortex-A77 cores with six 1.8GHz Cortex-A55 cores. It may be worth noting that Samsung’s high-end Exynos 9820 can reach a maximum clock speed of 2.73GHz.
  • Camera support: a single camera up to 108MP, or a 20MP+20MP dual camera. Samsung also stresses the chipset’s integrated AI capabilities for photography.
  • Video support: 4K UHD encoding and decoding at 120fps with HEVC (H.265), H.264, and VP9
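For a sense of scale, the quoted peak modem rates are theoretical maxima; the back-of-the-envelope sketch below shows what they would mean in practice (the file size and numbers are purely illustrative):

```python
# Back-of-the-envelope: converting a quoted link rate into download time.
# Real-world 5G throughput will be far below the theoretical peak.

def seconds_to_download(size_gigabytes, link_gbps):
    """Time to move a file at a given link rate (8 bits per byte)."""
    return size_gigabytes * 8 / link_gbps

# A hypothetical 4GB film at the Exynos 980's 2.55Gbps peak downlink:
print(round(seconds_to_download(4, 2.55), 1))  # 12.5 seconds
```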

Samsung said in the announcement that mass production of the Exynos 980 is expected to start by the end of this year, suggesting that Samsung 5G smartphones and tablets based on the new chipset will hit the market in the first half of 2020, if not the first quarter.

One day earlier, Samsung announced the Galaxy A90 5G, a mid-range 5G smartphone based on Qualcomm’s Snapdragon 855 platform and aimed at taking 5G to mainstream users. The new Exynos 980 is likely to power the next generation of mid-range devices.

The 5G momentum in South Korea, Samsung’s home market, has been going strong. After registering 1 million subscribers by the beginning of June, government data showed that by the end of July the total number of 5G subscribers, from all three operators combined, already topped 2 million.


Losing face in seconds: the app takes deepfakes to a new depth

Zao, a new mobile app coming out of China, can replace characters in TV or movie clips with the user’s own facial picture within seconds, raising new privacy and fraud concerns.

Developed by Momo, the company behind Tantan, China’s answer to Tinder, Zao went viral shortly after it was made available on the iOS App Store in China, Japan, India, Korea, and a few other Asian markets. It allows users to swap a character in a video clip for their own face. The user chooses a character in a clip from a selection, often from iconic Hollywood movies or popular TV programmes, uploads a picture of their own face, and lets the app do the swapping in the cloud. In about eight seconds the swap is done, and the user can share the altered clip on social media.

While many are enjoying the quirkiness of the app, others have raised concerns. First there is the concern for privacy. Before users can upload their pictures for the app to do the swapping, they have to log in with their phone number and email address, quite literally losing face and handing identification over to the app. More worryingly, an earlier version of the app’s terms and conditions assumed full rights to the altered videos, and therefore to the users’ images.

Another concern is fraud. Facial recognition is used extensively in China, in benign and not-so-benign circumstances alike. Once an altered video with the user’s face in it is shared on social networks, it is out of the user’s control and open to abuse by malicious parties. One such possible abuse involves payment. Alipay, the online and mobile payment system of Alibaba, has enabled check-out by face in retail: the customer only needs to look at the camera when leaving the shop, and the bill is charged to their Alipay account. By adding a bit of fun to the process, check-out by face not only facilitates retail transactions but also continuously enriches Alibaba’s database. (It would not be a complete surprise if this were one reason behind the euphoria towards AI voiced by Jack Ma, Alibaba’s founder.) The payment platform rushed to reassure its users that the system cannot be tricked by the images on Zao, without sharing details of how.

Though Zao is not the first AI-powered deepfake application, it is one of the most polished, and therefore one of the most unsettling. In another recent case, involving voice simulation and the controversial scholar Jordan Peterson, an AI-powered voice simulator let users type sentences of up to 280 characters for the tool to read out loud in a distinct, uncannily accurate Jordan Peterson voice. This led Peterson to call for wide-ranging legislation to protect the “sanctity of your voice, and your image.” He called the stealing of other people’s voices a “genuinely criminal act, regardless (perhaps) of intent.”

One can only imagine the impact of seamless image doctoring coupled with flawless voice simulation on all aspects of life, not least on the already eroded trust in news.

The good news is that the Zao developer is responding to users’ concerns. The company said on its official Weibo account (China’s answer to Twitter) that it understood the concerns about privacy and is thinking about how to fix the issues, but “please give us a little time”. The app’s T&Cs have been updated following the outcry: the app will now only use uploaded data for app improvement purposes, and once the user deletes a video from the app, it will also be deleted from the cloud.


Ma vs. Musk – is AI boom or doom?

Jack Ma and Elon Musk recently debated the future of AI, with one believing AI will free humans from menial tasks, and the other seeing in AI the end of human beings as we know them.

Jack Ma, the founder and former head of Alibaba and now the Co-Chair of the UN High-Level Panel on Digital Cooperation, and Elon Musk, the founder of Tesla, SpaceX and other ventures, took to the stage at the World Artificial Intelligence Conference (WAIC), currently being held in Shanghai, China, to debate the virtue and vice of AI.

The dialogue, unmoderated, sometimes felt awkward, as the two seemed to struggle to find a common anchor point. But there were occasional agreements; for example, both agreed AI will displace many jobs. However, the two entrepreneurs took very different views on the role AI can ultimately play, especially when it comes to its impact on the future of mankind. Ma took a rather utopian view, claiming AI can help human beings understand and take care of ourselves. He conceded that many of the jobs people are doing now will be lost to AI, but he saw that as a positive thing, because “I think people should work three days a week, four hours a day.” There were also a few throwaway claims, like “In the artificial intelligence period, people can live 120 years,” therefore “we need artificial intelligence for the robots to take care of the old guys.”

Musk took a much darker view of AI. He believed the ascendancy of AI, with its much higher “bandwidth” than human brains (“a few hundred bits per second, basically, maybe a few kilobits per second, if you’re going to be generous”), will render human jobs “pointless” and ultimately take over everything. “Probably the last job that will remain will be writing AI, and then eventually, the AI will just write its own software,” Musk predicted.

It has to be pointed out that both men have a vested interest in the topic, and the viewpoints they took reflected those interests. Ma, despite having stepped down from the CEO position, cannot dissociate himself from Alibaba. His quip at the beginning of the dialogue that “I would like AI to mean Alibaba Intelligence” certainly did not help the perception that he is detached from the business. Alibaba is one of the world’s heaviest users of AI, both in e-commerce and increasingly in its cloud computing business – the company acquired Whale Cloud from ZTE to dovetail with its own Alibaba Cloud to serve different clients. Additionally, AI was supposed to play an important role in making lending decisions at Ant Financial, an Alibaba-affiliated company, but it was reported earlier that the system has not been reliable enough.

Musk’s interest in AI, and its link to the views expressed at the conference, is more complex. He founded OpenAI, a research company, but had decided “to part ways on good terms” with it in 2016. Tesla, and the autonomous car market in general, will increasingly use AI. More recently, he has been directly involved in brain-machine interfaces (BMI) with his new venture Neuralink. In July the company applied to US regulators to start trialling its probe device on humans. The flexible threads, thinner than a human hair but carrying over 3,000 electrodes able to monitor the activity of 1,000 neurons, could connect specific areas of the brain to computers. The first target is to provide AI support to paralysed patients.

So there was little surprise when Musk advocated connecting low-bandwidth human brains to computers that “can easily communicate at a terabit level”, so that human beings could “go along for the ride with AI” – what he called “symbiosis with artificial intelligence” when introducing the new Neuralink technology earlier. He saw a future where AI will be able to “completely simulate a person in every way possible.” He even went philosophical at WAIC, saying “there’s a strong argument, we’re in the simulation right now.”

(This “strong argument” is not actually new. Chuang-tzu, one of the Taoist masters of the 4th century BC, famously stated: “Now I do not know whether I was then a man dreaming I was a butterfly, or whether I am now a butterfly, dreaming I am a man”. There are at least two counterarguments to this speculation. One is that it cannot pass the Popperian test: the argument cannot be falsified. The other is that simpler explanations of the question exist, with a lower level of entropy, and should therefore be preferred. Both arguments are extensively explored by Prof David Deutsch, the quantum physicist, in his 1997 book “The Fabric of Reality”.)

Incidentally, Elon Musk recently endorsed Andrew Yang for the 2020 presidential election. Yang, the entrepreneur-turned-candidate, champions universal basic income, arguing such a measure would provide a basic safety net in the face of massive job losses to AI. According to research quoted by Yang, among the most vulnerable groups would be truck drivers and retail cashiers, occupations that account for some of the biggest numbers of jobs in America today. It would be very hard to retrain these people quickly enough for AI-powered positions. In that sense, Jack Ma’s claims that “don’t worry about the machines” and “we will have jobs” may be too optimistic.

Huawei claims AI leadership with launch of Ascend 910 chip and MindSpore

Networking giant Huawei reckons the new Ascend 910 is the world’s most powerful AI processor.

The chip was launched alongside ‘an all-scenario AI computing framework’ called MindSpore at an event positioned as the realisation of the AI strategy announced in October of last year. “Everything is moving forward according to plan, from R&D to product launch,” said Huawei Rotating Chairman Eric Xu. “We promised a full-stack, all-scenario AI portfolio. And today we delivered, with the release of Ascend 910 and MindSpore. This also marks a new stage in Huawei’s AI strategy.”

Huawei chucked around a few datapoints involving things like Teraflops, to support its claim that the Ascend 910 kicks AI ass. It also consumes around 10% less power than Huawei had previously expected it to. “Ascend 910 performs much better than we expected,” said Xu. “Without a doubt, it has more computing power than any other AI processor in the world.”

MindSpore is not the omniscient, Skynet-like AI platform implied by the slightly creepy name, but an AI development platform. Among its priorities are flexibility, security and privacy protection and it’s designed to be used to develop AI stuff across both devices and the cloud.

For obvious reasons anything Huawei announces these days features liberal references to the importance of security and privacy. “MindSpore will go open source in the first quarter of 2020,” said Xu. “We want to drive broader AI adoption and help developers do what they do best.”

At the same event Xu reportedly addressed the impact of all the US aggro on its bottom line. Referring specifically to the consumer business unit Xu said he’s optimistic it won’t be as badly affected as previously feared, but that the impact of US sanctions could still be as much as $10 billion in revenue.

Europe set to join the facial recognition debate

With more authorities demonstrating they cannot be trusted to act responsibly or transparently, the European Commission is reportedly on the verge of putting the reins on facial recognition.

According to reports in The Financial Times, the European Commission is considering imposing new rules which would extend consumer rights to include facial recognition technologies. The move is part of a greater upheaval to address the ethical and responsible use of artificial intelligence in today’s digital society.

Across the world, police forces and intelligence agencies are deploying technologies which pose a significant risk of abuse, without public consultation or processes to create accountability or justification. There are of course certain nations who do not care about the privacy rights of citizens, but when you see the technology being implemented for surveillance purposes in the likes of the US, UK and Sweden, states where such rights are supposedly sacred, the line starts to blur.

The reasoning behind the implementation of facial recognition in surveillance networks is irrelevant; without public consultation and transparency, these police forces, agencies, public sector authorities and private companies are completely disregarding citizens’ right to privacy.

These citizens might well support such initiatives, opting for greater security or consumer benefits over the right to privacy, but they have the right to be asked.

What is worth noting is that this technology can be a driver for positive change in the world when implemented and managed correctly. Facial scanners are speeding up immigration processing in airports, while Telia is trialling a payment system using facial recognition in Finland. When deployed with consideration and the right processes, there are many benefits to be realised.

The European Commission has not confirmed or denied the reports to Telecoms.com, though it did reaffirm its on-going position on artificial intelligence during a press conference yesterday.

“In June, the high-level expert group on artificial intelligence, which was appointed by the Commission, presented the first policy recommendations and ethics guidelines on AI,” spokesperson Natasha Bertaud said during the afternoon briefing. “These are currently being tested and going forward the Commission will decide on any future steps in-light of this process which remains on-going.”

The Commission does not comment on leaked documents and memos, though reading between the lines, it is on the agenda. One of the points the 52-person expert group will address over the coming months is building trust in artificial intelligence, while one of the seven principles presented for consultation concerns privacy.

On the privacy side, parties implementing these technologies must ensure data ‘will not be used to unlawfully or unfairly discriminate’, as well as setting systems in place to dictate who can access the data. We suspect that in the rush to trial and deploy technology such as facial recognition, few systems and processes to drive accountability and justification have been put in place.

Although these points do not necessarily cover the citizen’s right to decide, tracking and profiling are areas where the group has recommended the European Commission consider adding more regulation to protect against abuses and irresponsible deployment or management of the technology.

Once again, the grey areas are being exploited.

As there are only so many bodies in the European Commission or working for national regulators, and technology is advancing so quickly, there is often a void in the rules governing the newly emerging segments. Artificial intelligence, surveillance and facial recognition certainly fall into this chasm, creating a digital wild-west landscape where those who do not understand the ‘law of unintended consequence’ play around with new toys.

In the UK, it was revealed that several private property owners and museums were using the technology for surveillance without telling consumers. Even more worryingly, some of this data has been shared with police forces. Information Commissioner Elizabeth Denham has already stated her agency will be looking into the deployments and will attempt to rectify the situation.

Prior to this revelation, a report from the Human Rights, Big Data & Technology Project attacked a trial by the London Metropolitan Police Force, suggesting it could be found illegal should it be challenged in court. The South Wales Police Force also found itself in hot water after its own trials were found to have only an 8% success rate.

Over in Sweden, the data protection regulator used powers granted by GDPR to fine a school which had been using facial recognition to monitor pupils’ attendance. The school claimed it had received consent from the students, but as they are in a dependent position, this was not deemed satisfactory. The school was also found to have substandard processes for handling the data.

Finally, in the US, Facebook is going to find itself in court once again, this time over the implementation of facial recognition software in 2010. A class-action lawsuit has been brought against the social media giant, suggesting the use of the technology was non-compliant under the Illinois Biometric Information Privacy Act.

This is one example where lawmakers have been very effective in getting ahead of trends. The law in question was enacted in 2008 and demands that companies gain consent before any facial recognition technologies are introduced. This is an Act which should be applauded for its foresight.

The speed at which progress is being made with facial recognition in the surveillance world is incredibly worrying. Private and public parties have an obligation to consider the impact on the human right to privacy, though much distaste has been shown for these principles in recent months. Perhaps it is ignorance, short-sightedness or a lack of competence, but without rules to govern this segment, the unintended consequences could be compounded years down the line.

Another point worth noting is the gathering momentum to stop the wrongful implementation of facial recognition. Aside from Big Brother Watch raising concerns in the UK, the City of San Francisco is attempting to implement an approval function for police forces, while Google is facing an internal rebellion. Last week, it emerged several hundred employees had signed a petition refusing to work on any projects which would aid the government in tracking citizens through facial recognition surveillance.

Although the European Commission has not confirmed or denied the report, we suspect (or at the very least hope) work is underway to address this area. Facial recognition needs rules, or we will find ourselves in a very difficult position, much as we do today.

A lack of action surrounding fake news, online bullying, cybersecurity, supply chain diversity and resilience, or the consolidation of power in the hands of a few has created some difficult situations around the world. Now the Commission and national governments are finding it difficult to claw back the progress of technology. This is one area where the European Commission desperately needs to get ahead of the technology industry; the risk and consequence of abuse is far too great.

Amazon has managed to bottle fear, but recognition debate remains

While facial recognition technologies are becoming increasingly controversial, it is always worth paying homage to innovation in this field and its real-world applications, when deployed responsibly.

We suspect people aren’t necessarily objecting to the concept of facial recognition technologies, but more to the application and lack of public consultation. You only have to look at some of the world’s less appetizing governments to see the negative implications for privacy and human rights, but there are of course some significant benefits should it be applied in an ethically sound and transparent manner.

Over in the AWS labs, engineers have managed to do something quite remarkable; they have managed to bottle the concept of fear and teach its AI programmes to recognise it.

“Amazon Rekognition provides a comprehensive set of face detection, analysis, and recognition features for image and video analysis,” the company stated on its blog. “Today, we are launching accuracy and functionality improvements to our face analysis features.

“With this release, we have further improved the accuracy of gender identification. In addition, we have improved accuracy for emotion detection (for all 7 emotions: Happy, Sad, Angry, Surprised, Disgusted, Calm and Confused) and added a new emotion: Fear.”
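As a rough illustration of how such per-face emotion scores might be consumed by an application, here is a minimal Python sketch that picks the highest-confidence emotion from a DetectFaces-style response. The sample payload and confidence values are invented for illustration, not real Rekognition output.

```python
# Sketch: choosing the dominant emotion from a Rekognition-style face
# analysis result, where each face carries a list of emotion labels
# with confidence scores.

def dominant_emotion(face_detail):
    """Return the highest-confidence emotion label for one detected face."""
    emotions = face_detail.get("Emotions", [])
    if not emotions:
        return None
    top = max(emotions, key=lambda e: e["Confidence"])
    return top["Type"]

# Invented sample payload mimicking the documented response shape.
sample_response = {
    "FaceDetails": [
        {
            "Emotions": [
                {"Type": "CALM", "Confidence": 12.3},
                {"Type": "FEAR", "Confidence": 81.7},
                {"Type": "SURPRISED", "Confidence": 6.0},
            ]
        }
    ]
}

for face in sample_response["FaceDetails"]:
    print(dominant_emotion(face))  # FEAR
```

A real deployment would feed video frames through the API and only then reason about the returned labels; the point here is simply that "recognising fear" surfaces as one more scored label in the result.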

When applied correctly, these technologies have incredible power to help society. You only have to think about some of the atrocities which have plagued major cities, as well as the on-going problems. Human eyes can only see so much, with police and security forces often relying on reports from the general public. With cameras able to recognise emotions such as fear, crimes could be identified while they are taking place, allowing speedier reactions from the relevant parties.

However, there are of course significant risks with the application of this technology. We have seen in China such programmes are being used to track certain individuals and races, while certain forces and agencies in the US are constantly rumoured to be considering the implementation of AI for facial recognition, profiling and tracking of individuals. Some of these projects are incredibly worrying, and a violation of privacy rights granted to the general public.

This is where governments are betraying the promise they have made to the general public. Rules and regulations have not been written for such technologies, therefore the agencies and forces involved are acting in a monstrously large grey area. There of course need to be rules in place to govern surveillance practices, but a public conversation should be considered imperative.

Any time the right to privacy is being compromised, irrespective of whether there are noble goals in mind, the public should be consulted. Voters should choose whether they are happy to sacrifice certain privacy rights and freedoms in the pursuit of safety. This is what transparency means, and this is exactly what has been disregarded to date.

Microsoft has also been a member of the eavesdropping gang – report

Microsoft contractors have been listening to Skype and Cortana conversations without the full knowledge and consent of the apps’ users, claims a report.

We were almost immediately proved wrong when we said Microsoft, in comparison with Apple, Google, and Amazon, “fortunately has not suffered high profile embarrassment” over its voice assistant Cortana. Motherboard, part of the media outlet Vice, reported that Microsoft contractors, some of them working from home, have been listening to some Skype calls made using the app’s instant translation feature, as well as to users’ interactions with Cortana.

Motherboard has acquired audio clips, screenshots and internal documents showing that Microsoft, just like its peers, has been employing humans to continuously improve the software algorithms and the quality and accuracy of translations and responses. Also like the other leading tech companies that run voice assistants, Microsoft is ambiguous in its consumer communication, lax in its policy implementation, and does not give users a way to opt out.

“The fact that I can even share some of this with you shows how lax things are in terms of protecting user data,” the Microsoft contractor turned whistle-blower, who supplied the evidence and decided to remain anonymous, told Motherboard.

“Microsoft collects voice data to provide and improve voice-enabled services like search, voice commands, dictation or translation services,” Microsoft said a statement sent to Motherboard. “We strive to be transparent about our collection and use of voice data to ensure customers can make informed choices about when and how their voice data is used. Microsoft gets customers’ permission before collecting and using their voice data.”

The “Skype Translator Privacy FAQ” states that “Voice conversations are only recorded when translation features are selected by a user.” It then goes on to guide users on how to turn off the translation feature. There is no way for a customer to use the translation service without having the conversation recorded. Neither does the official document say the recorded conversations may be listened to by another human.

Due to the “gig economy” nature of the job, some contractors work from home when undertaking the tasks to correct translations or improve Cortana’s response quality. This is also made obvious by Microsoft contractors’ job listings. However, the content they deal with can be sensitive, from conversations between people in an intimate relationship, to health status and home addresses, as well as query records on Cortana. “While I don’t know exactly what one could do with this information, it seems odd to me that it isn’t being handled in a more controlled environment,” the whistle-blower contractor told Motherboard.

The report does not specify where the eavesdropping they uncovered took place, but the line in the Microsoft statement that “We … require that vendors meet the high privacy standards set out in European law” can’t help but raise some suspicion that the practice could run afoul of GDPR, the European Union’s privacy protection regulation.

At the time of writing, Microsoft has not announced a suspension of the practice.

Apple and Google suspend some of their eavesdropping

Two of the world’s leading voice assistant makers have pulled the plug on their respective analytics programmes for Siri and Google Assistant, after private information including confidential conversations was leaked.

Apple decided to suspend its outsourced programme to “grade” Siri, by which it assesses the voice assistant’s response accuracy, following reports that private conversations are being listened to by its contractors without the users’ explicit consent. The company committed to add an opt-out option for users in a future update of Siri. It also promised that the programme would not be restarted until it had completed a thorough review.

“We are committed to delivering a great Siri experience while protecting user privacy. While we conduct a thorough review, we are suspending Siri grading globally,” the Cupertino-based iPhone maker told The Guardian. “Additionally, as part of a future software update, users will have the ability to choose to participate in grading.”

This is in response to the leak first reported by the British broadsheet, which received a tipoff from whistle-blowers. The paper learned that contractors regularly hear private conversations ranging from dialogues between patients and doctors, to communications between drug dealers and buyers, with everything in between. These could include cases where Siri was triggered unintentionally without the user’s awareness.

The biggest problem with Apple’s analytics programme is that it does not explicitly disclose to consumers that some Siri recordings are shared with contractors in different parts of the world, who listen to the anonymised content as a means to improve Siri’s accuracy. By not being upfront, Apple does not provide users with the option to opt out either.

Shortly before Apple’s decision to call a halt to Siri grading, Google also pulled the plug on its own human analysis of Google Assistant in the European Union, reported Associated Press. The company promised to the office of Johannes Caspar, Hamburg’s commissioner for data protection and Germany’s lead regulator of Google on privacy issues, that the suspension will last at least three months.

The decision was made after Google admitted that one of the language reviewers it partners with, who are supposed to assess Google Assistant’s response accuracy, “has violated our data security policies by leaking confidential Dutch audio data.” Over 1,000 private conversations in Flemish, some of which included private data, were sent to the Belgian news outlet VRT. Though the messages are supposed to be anonymised, staff at VRT were able to identify the users through private information like home addresses.

At that time Google promised “we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.”

These are not the first cases of private conversations leaking through voice assistants. Last year an Alexa-equipped Amazon Echo recorded a conversation between a couple in Portland, Oregon, and sent it to a friend – another recent case that rang the alarm bells of private data security.

It should not surprise those in the tech world that AI-powered natural language processing software still has a long way to go before it can get all the intricacies right. Until then, it needs human input to continuously improve its accuracy. The problems that bedevil Google and Apple today, and Amazon in the past – Microsoft (Cortana) fortunately has not suffered high profile embarrassment recently – are down to the lack of stringent oversight of the role humans play, the lack of clear communication to consumers that their interactions with voice assistants may be used for data analysis purposes, and the failure to give consumers the choice to opt out.

There is also the controversy of data sovereignty, as well as the question of whether private data should be allowed to be stored in the cloud or should be kept on device. The location of Apple’s leak is not specified, but Google’s case is a clear violation of GDPR. According to the AP report, Germany has already started proceedings against Google.

Facebook is reading minds while Amazon perfects text-to-speech

A Facebook-funded study has achieved a breakthrough in decoding speech directly from brain signals at the same time as AWS has made automated speech more realistic.

The study, funded by the creepily-named Facebook Reality Labs, was conducted by the University of California, San Francisco. Its findings were published yesterday under the heading ‘Real-time decoding of question-and-answer speech dialogue using human cortical activity’. It claims breakthroughs in the accuracy of identifying speech from the electrical impulses in people’s brains.

The clever bit doesn’t seem to lie in the actual reading of these impulses, but in using algorithms and context to narrow down the range of possible sounds attributable to a given piece of brain activity. This helps distinguish between words composed of similar sets of sounds and thus improves accuracy, with a key piece of context being the question asked. This breakthrough is therefore as much about AI and machine learning as anything else.
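A toy sketch may help illustrate the narrowing idea: treat the signal readout as a likelihood over candidate words, then reweight it by a prior conditioned on the question that was asked. Everything here (the vocabulary, the probabilities, the questions) is invented for illustration and is not taken from the study itself.

```python
# Toy sketch of context-narrowing: ambiguous per-word likelihoods are
# reweighted by a prior that depends on the question asked, and the
# highest-scoring word wins.

def decode(likelihoods, context_prior):
    """Pick the word maximising likelihood x prior over the candidates."""
    scores = {w: likelihoods.get(w, 0.0) * context_prior.get(w, 0.0)
              for w in likelihoods}
    return max(scores, key=scores.get)

# The signal alone barely distinguishes similar-sounding words...
likelihoods = {"nine": 0.40, "fine": 0.38, "wine": 0.22}

# ...but "How are you feeling?" makes "fine" far more probable,
feeling_prior = {"nine": 0.05, "fine": 0.90, "wine": 0.05}
print(decode(likelihoods, feeling_prior))  # fine

# while "What number am I holding up?" favours "nine".
number_prior = {"nine": 0.90, "fine": 0.05, "wine": 0.05}
print(decode(likelihoods, number_prior))  # nine
```

The real system works on cortical activity with far richer models, but the principle is the same: context turns a near-tie between candidate sounds into a confident decision.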

At the same time Amazon Web Services (AWS) has announced a new feature of its Polly text-to-speech managed service. The specific announcement is relatively minor – the ability to give the resulting speech a newsreader style of delivery – but it marks a milestone in the journey to make machine-generated speech as realistic as possible.
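For the curious, the newsreader delivery is requested through SSML’s amazon:domain tag together with Polly’s neural engine. The sketch below only builds the request parameters; the voice choice is an assumption, and the actual boto3 call is left commented out since it requires AWS credentials.

```python
# Sketch: building a Polly synthesize_speech request that asks for the
# "news" speaking style via SSML. No AWS call is made here.

def newscaster_request(text, voice_id="Joanna"):
    """Assemble request parameters for newsreader-style speech."""
    ssml = f'<speak><amazon:domain name="news">{text}</amazon:domain></speak>'
    return {
        "Engine": "neural",   # the news style requires the neural engine
        "TextType": "ssml",
        "Text": ssml,
        "VoiceId": voice_id,  # assumed voice; pick any neural-capable voice
        "OutputFormat": "mp3",
    }

params = newscaster_request("AWS has added a newscaster style to Polly.")
print(params["Text"])

# With credentials configured, the audio would be fetched like so:
# import boto3
# audio = boto3.client("polly").synthesize_speech(**params)
```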

When you combine the potential of these two developments, two eventualities spring to mind. The first is an effective cure for muteness without the need for interfaces such as keyboards, which would be amazing. The second is somewhat more ominous: a world in which we can no longer be sure we’re communicating with an actual human being unless we’re face-to-face with them.

The AWS post makes joking reference to HAL 9000 from the film 2001: A Space Odyssey, but thanks in part to its own efforts and those funded by Facebook, that sort of thing is looking less like science fiction and more like science fact with every passing day.

 

Do you have some clear ideas about how the edge computing sector is developing? Then please complete the short survey being run by our colleagues at the Edge Computing Congress and Telecoms.com Intelligence. Click here to see the questions.