IBM and Google reportedly swap morals for cash in Chinese surveillance JV

IBM and Google executives should be bracing for impact as the comet of controversy heads directly towards their offices.

Reports have emerged, via The Intercept, suggesting two of the US's most influential and powerful technology giants have indirectly been assisting the Chinese Government with its campaign of mass surveillance and censorship. Both will try to distance themselves from the controversy, but it could have a significant impact on each firm.

The drama here is focused around a joint venture, the OpenPower Foundation, founded in 2013 by Google and IBM, which also counts Red Hat, Broadcom, Mellanox, Xilinx and Rackspace among its members. The aim of the open-ecosystem organization is to facilitate and share advances in networking, server, data storage, and processing technology.

To date, the group has been little more than another relatively uninteresting NPO, serving a niche in the industry, though one initiative is causing a stir. The OpenPower Foundation has been working with Xilinx and Chinese firm Semptian to create a new breed of chips capable of enabling computers to process incredible amounts of data. This might not seem extraordinary, though the application is where the issue lies.

On the surface, Semptian is a relatively ordinary Chinese semiconductor business, but when you look at its most profitable division, iNext, the story becomes a lot more sinister. iNext specialises in selling equipment to the Chinese Government to enable the mass-surveillance and censorship projects which have become so infamous.

It will come as little surprise a Chinese firm is aiding the Government with its nefarious objectives, but a link to IBM and Google, as well as a host of other US firms, will have some twitching with discomfort. We can imagine the only people who are pleased at this news are the politicians who are looking to get their faces on TV by theatrically condemning the whole saga.

Let’s start with what iNext actually does before moving onto the US firms involved in the controversy. iNext works with Chinese Government agencies by providing a product called Aegis. Aegis is an interception and analysis system which has been embedded into various phone and internet networks throughout the country. This is one of the products which enables the Chinese Government to have such a close eye on the activities of its citizens.

Documentation acquired by The Intercept outlines the proposition in more detail.

“Aegis is not only the standard interception system but also the powerful analysis system with early warning and timely action capabilities. Aegis can work with all kinds of networks and 3rd party systems, from recovering, analysing, exploring, warning, early warning, locating to capturing. Aegis provides LEA with an end to end solution described as Deep Insight, Early Warning and Timely Action.”

Although the majority of this statement is corporate fluff, it does provide some insight into the way the technology actually works. This is an incredibly powerful surveillance system, capable of locating individuals through application usernames, IP addresses or phone numbers, as well as tracking their movements in real time.

Perhaps one of the most worrying aspects of this system is the ‘pre-crime’ element. Although the idea of predictive analytics has been met with controversy and considerable resistance in some societies, we suspect the Chinese Government does not have the same reservations.

iNext promises this feature can help prevent crime through the introduction of an early warning system. This raises all sorts of ethical questions: even if the data estimates were accurate to five nines, can you arrest someone when they haven’t actually committed a crime? This is the sticky position Google and IBM might have found themselves in.

OpenPower has said that it was not aware of the commercial applications of the projects it manages, and that its charter prevents it from getting involved. The objective of the foundation is to facilitate the progress of technology, not to act as judge and jury over its application. It’s a nice little way to keep controversy at arm’s length; inaction and negligence are presented as an appropriate defence.

For IBM and Google, who are noted as founding members of the OpenPower Foundation, a stance of ignorance might be enough to satisfy institutions of their innocence, but the court of public opinion could swing heavily in the other direction. An indirect tie to such nefarious activities is enough for many to pass judgment.

When it comes to IBM, the pursuit of innocence becomes a little bit trickier. IBM is directly mentioned on the Semptian website, suggesting Big Blue has been working closely with the Chinese firm for some time, though the details of this relationship are unknown for the moment.

For any of the US firms which have been mentioned here, it is not a comfortable situation to be in. Although they might be able to plead ignorance, it is quite difficult to believe. These are monstrous multi-national billion-dollar corporations, with hordes of lawyers, some of whom will be tasked with making sure the technology is not being utilised in situations which would get the firm in trouble.

Of course, this is not the first time US technology firms have found themselves on the wrong side of right. There have been numerous protests from employees of the technology giants over how their technology is being applied in the real world. Google is a prime example.

In April 2018, Google employees revolted over an initiative the firm was participating in with the US Government. Known as Project Maven, it saw Google’s AI technology used to improve the accuracy of drone strikes. As you can imagine, the Googlers were not happy at the thought of helping the US Government blow people up. Project Dragonfly was another which brought internal uproar; this time the Googlers were helping to create a censored search product for China which would filter out content the Government deemed undesirable.

Most of the internet giants will plead their case, suggesting their intentions are only to advance society, but there are numerous examples of contracts and initiatives which contradict this position.

Most developers or engineers, especially the ones who work for a Silicon Valley giant, work for the highest bidder, but there is a moral line few will cross. As we’ve seen before, employees are not happy to aid governments in the business of death, surveillance or censorship, and we suspect the same storyline will play out here.

Google and IBM should be preparing themselves for significant internal and external backlash.

US Senators want public disclosures on the value of personal data

Two US Senators have suggested an interesting, if currently very ill-defined, idea for companies in the digital economy: list the value of data on the financial statements during earnings season.

Senators Mark Warner and Josh Hawley are reportedly readying themselves to introduce the Designing Accounting Safeguards to Help Broaden Oversight and Regulations on Data Act, or DASHBOARD for short. The bill would force companies to disclose to the SEC, once a quarter, the financial value of the data they collect, analyse and act upon.

Although this might sound like an incredibly wide net to cast, the rules would only apply to companies that generate a material impact on revenues from the data and have more than 100 million users. The disclosures would also cover data brought in through relationships with third parties.

“…I think we need debates there and enhanced privacy, but we also need a lot more transparency, because if it defaults then to status prerogatives based on how much data is worth, that may spur another debate,” Warner said on ‘Axios on HBO’ this weekend. “But we don’t know any of that right now.”

That is the big issue which Warner is addressing during his prolonged crusade against the tech giants of Silicon Valley; there are still far too many unknowns.

It appears the objective of Warner and Hawley is to create greater understanding of how the digital economy, based on the concept of sharing data, functions. Consumers are seemingly happy to trade away their personal information, but you have to wonder how much of an informed decision this is today.

This is the challenge in addressing a rapidly growing and evolving segment. Not only are we as consumers dealing with challenges for the first time, but so are the regulators and legislators. Rules need to be created which are contextually relevant. Today, the regulatory and legislative landscape is dated, but this looks like one step in the right direction.

Warner and Hawley are seemingly trying to address two issues: firstly, raising awareness and creating a greater understanding of how much information is collected on individuals; and secondly, providing more clarity on how much that data is actually worth.

The second issue is an interesting one, as there does not seem to be much consistency when it comes to the commercial value of data to an organization. Some might suggest value is a more nuanced term, with these companies using data sets to improve products, but for others the link is more direct. Facebook, for example, directly monetizes user data, which has been suggested to be worth in the region of $20 a month per user.

As part of the Bill, the SEC would be instructed to develop models to identify the value of the data. There would be several different models, each accounting for the different use cases, business models and vertical segments in which the businesses operate. This might prove to be a difficult aspect of the Bill, as the SEC would have to go on a recruitment drive to hire people capable of understanding the nuances and complexities of the digital economy.
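As a very rough illustration of what such a model could look like (a sketch only, with placeholder figures rather than anything taken from the Bill or the SEC), one simple approach is to divide the share of advertising revenue attributable to personal data by the number of monthly active users:

```python
# Illustrative only: a crude per-user data valuation, not the SEC's (yet to be
# defined) methodology. All inputs are hypothetical placeholders.

def implied_monthly_data_value(ad_revenue_quarter: float,
                               data_driven_share: float,
                               monthly_active_users: float) -> float:
    """Estimate the monthly value of an average user's data.

    ad_revenue_quarter   -- quarterly advertising revenue in dollars
    data_driven_share    -- fraction of that revenue attributed to personal data
    monthly_active_users -- average monthly active users over the quarter
    """
    monthly_revenue = ad_revenue_quarter / 3
    return (monthly_revenue * data_driven_share) / monthly_active_users


if __name__ == "__main__":
    # Hypothetical numbers purely for illustration.
    value = implied_monthly_data_value(
        ad_revenue_quarter=15_000_000_000,   # $15bn of quarterly ad revenue
        data_driven_share=0.8,               # 80% assumed to rely on personal data
        monthly_active_users=2_000_000_000,  # 2bn monthly active users
    )
    print(f"Implied data value: ${value:.2f} per user per month")
```

Any real methodology would be far more involved, varying by business model and vertical, which is exactly why the Bill asks the SEC to define several models rather than one.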

Of course, with every step made with legislation and regulation, you have to take into account the rule of unintended consequences. Once users know the value of their data, they might ask to be compensated for it. This is not what the Bill is intended to do, and the Senators will have to be sure to put concrete protections in place to ensure business models are not undermined.

Although identifying the value of personal information will most likely, and quite rightly, inspire future public debate, lawyers should not be able to hold the companies who monetize personal information to ransom. If users are not happy about the situation, they can close their accounts and ask for their personal information to be deleted. You wouldn’t ask for a refund on an umbrella because you found out the manufacturer was making more money than you thought by using cheaper materials.

Some users might be upset or angered by the fact these companies are making money off their personal information, but they should always remember they are being offered a service for ‘free’. How many people would pay for subscriptions to Facebook or Twitter, or a one-off rate for any of the currently free apps which are being downloaded today? If you remove the commercial incentive for these companies, some (if not the majority) will cease to exist.

And while there should be protections for these companies, the two Senators are perfectly justified in suggesting this Bill. The user should be sufficiently educated in the ways and means of the digital economy to make an informed decision before entering into a contract with any service, product or platform, and irrespective of whether it is ‘free’ or not, normal rules should apply. Users need to have all the information available, and this includes the commercial value of their personal data.

Ultimately this is a Bill which is littered with potential pitfalls and hurdles for the digital economy. Warner and Hawley will have to be incredibly careful they do not stall the promising progress of this segment. Transparency and privacy are two ideals which should be enhanced, but this should be done in a way which also encourages businesses to thrive, or at the very least, does not inhibit valid operations.

Amazon wants to be more in-tune with your emotions

Amazon is reportedly working on new technology which will be able to detect users’ emotional state by analysing their vocal patterns.

According to Bloomberg, the tech giant is working with its Lab126 hardware division to create a wearable device, paired with a smartphone, that can perceive the emotions of the user. Building on a 2017 patent that describes using vocal pattern analysis to determine someone’s emotional state, the insight could be used in various health and wellbeing products, or even in the online advertising world.
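Bloomberg’s report gives no technical detail, so purely as an illustration of how vocal-pattern analysis is commonly approached (this is not Amazon’s method), the sketch below extracts MFCC features from hypothetical labelled voice clips with librosa and trains a simple scikit-learn classifier to label the emotion of a new recording. The file paths and labels are made up.

```python
# Illustrative sketch of emotion recognition from vocal patterns.
# NOT Amazon's implementation; assumes a hypothetical folder of labelled WAV
# clips and uses standard open-source tools (librosa, scikit-learn).
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def mfcc_features(path: str) -> np.ndarray:
    """Load an audio clip and summarise it as mean MFCC coefficients."""
    signal, sample_rate = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sample_rate, n_mfcc=13)
    return mfcc.mean(axis=1)  # one 13-dimensional vector per clip

# Hypothetical training data: (file path, labelled emotion) pairs.
training_clips = [("clips/happy_01.wav", "happy"),
                  ("clips/frustrated_01.wav", "frustrated")]

X = np.array([mfcc_features(path) for path, _ in training_clips])
y = [label for _, label in training_clips]

classifier = LogisticRegression(max_iter=1000).fit(X, y)

# Predict the emotional state of a new, unlabelled recording.
print(classifier.predict([mfcc_features("clips/unknown.wav")]))
```

A production system would need far richer features, large labelled datasets and on-device processing, but the basic pipeline of audio features plus a trained classifier is the standard starting point.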

This is perhaps one of the trickiest aspects of hyper-targeted advertising or personalisation. Context is king when it comes to serving people relevant adverts or products, though this depends not only on browsing history or financial circumstance, but also on the emotional state of the individual at that time.

For example, an individual might have been searching for new trainers or workout gear over the last few weeks, but if they are feeling frustrated, presenting an expensive gym membership at that point is unlikely to be the most profitable exercise.

Right now, this technology is nothing more than an idea, while the reports have not been confirmed by Amazon. It might prove to be too much of a complex equation to solve, but it will certainly be of interest to the thousands of brands around the world who are constantly searching for new ways to engage consumers, forcing an extra couple of quid out of the constrained wallets.

This also might prove to be one step too far for the consumer. To get this concept off the ground, buy-in would have to be gained from the mass market. Consumers are already being asked to reveal a lot of data in exchange for ‘free’ services, but emotional wellbeing might be the breaking point. This is incredibly personal information, so the value exchange would have to be very tempting.

The concept itself sounds very futuristic, which to some is daunting. The pace at which the technology world is moving forward is staggering at times, and we are not entirely convinced there would be buy-in from consumers. It sounds like an interesting idea, but it might be too much too soon.

Microsoft starts ruffling privacy feathers in the US

This weekend will mark the one-year anniversary of Europe’s GDPR and Microsoft has made the bold suggestion of bringing the rules over the pond to the US.

Many US businesses would have been protected from the chaos that was the European Union’s General Data Protection Regulation (GDPR), with the rules only impacting those which operated in Europe. And while there are benefits to privacy and data protection rights for consumers, that will come as little compensation for those who had to protect themselves from the weighty fines attached to non-compliance.

Voicing what could turn out to be a very unpopular opinion, Microsoft has suggested the US should introduce its own version.

“A lot has happened on the global privacy front since GDPR went into force,” said Julie Brill, Deputy General Counsel at Microsoft. “Overall, companies that collect and process personal information for people living in the EU have adapted, putting new systems and processes in place to ensure that individuals understand what data is collected about them and can correct it if it is inaccurate and delete it or move it somewhere else if they choose.

“This has improved how companies handle their customers’ personal data. And it has inspired a global movement that has seen countries around the world adopt new privacy laws that are modelled on GDPR.

“Now it is time for Congress to take inspiration from the rest of the world and enact federal legislation that extends the privacy protections in GDPR to citizens in the United States.”

The rules themselves were first introduced in an attempt to force companies to be more responsible and transparent in how customer data is handled. The update reflected the data-sharing economy the world had sleepwalked into; the status quo had come under criticism and new protections had to be put in place, while also offering consumers more control over their personal data.

GDPR arrived with little fanfare after many businesses scurried around in the weeks prior despite having two years’ notice. And while these regulations were designed for the European market, such is the open nature of the internet that the impact was felt worldwide.

While this might sound negative, GDPR has proved to be an inspiration for numerous other countries and regions. Brazil, Japan, South Korea and India were just a few of the nations which saw the benefit of the rules, and now it appears there are calls for the same position to be adopted in the US.

As Brill points out in the blog post setting out Microsoft’s position, California has already taken steps towards a more privacy-focused society. The California Consumer Privacy Act (CCPA) will go into effect on January 1 2020. Inspired by GDPR, the new law will give California residents the right to know what personal information is being collected on them, to know whether it is being sold or monetized, to say no to monetization, and to access all of that data.

This is only one example, and there are numerous states around the US, primarily Democrat-leaning, which share California’s pro-privacy attitude. However, the Californian law stops short of the strictness of GDPR. Companies are not on the stopwatch to notify customers of a breach, as they are under GDPR, while the language around punishment for non-compliance is very vague.

This is perhaps the issue Microsoft will face in attempting to escalate such rules up to federal law; the only attempt we have seen so far in the US is a diluted version of GDPR. Whereas GDPR is a sharp stick for the regulators to swing (a fine of up to 4% of annual turnover certainly encourages compliance), the Californian approach is more like a tickling feather; it might irritate a little bit.

At the moment, US privacy laws are nothing more than ripples in the technology pond. If GDPR-style rules were to be introduced in the US, the impact would be significant. GDPR has already shifted the privacy conversation and had notable impacts on the way businesses operate. Google, for example, has introduced an auto-delete function for users, while Facebook’s entire business rhetoric has become much more privacy-focused. It is having a fundamental impact on the business.

We are not too sure whether Microsoft’s call is going to have any material impact on government thinking right now, but privacy laws in the US (and everywhere for that matter) are going to need to be brought up-to-date. With artificial intelligence, personalisation, big data, facial recognition and predictive analytics technologies all gaining traction, the role of personal data and privacy is going to become much more significant.

Facebook’s privacy conundrum

Facebook CEO Mark Zuckerberg has to do something about his firm’s reputation for data privacy, but could doing so require destroying its own core business model?

At the F8 developer conference this week, Zuckerberg has been making claims no-one is surprised to hear: Facebook is all about user privacy; it’s not about making money anymore, just about offering a service its users care about. The PR machine is shifting through the gears; Facebook has to save its reputation before it’s too late.

This is perhaps the worst-kept secret in Silicon Valley; Facebook does not care about data privacy, or at least it hasn’t cared in the past. It cares that it was caught flamboyantly prancing around, above and all over the concept, but few will be surprised that executives prioritized profits over privacy.

But here is the crossroads the firm faces; be disrupted or destroyed.

This of course sounds very dramatic, and perhaps we are taking poetic licence, but there is at least an element of accuracy to the statement. Zuckerberg needs to fundamentally redefine the business, moving away from the tried and tested business model, before regulators and legislators take Facebook out at the knees.

At the conference, Zuckerberg has been outlining Facebook’s journey forward. Updates will focus on creating a more ‘private’ experience, ushering users towards groups and chat locations which, theoretically, will prevent Facebook from fuelling its data machine. It seems the new business will be focused around two of the company’s most popular applications, Messenger and WhatsApp, though this could potentially kill the tried and tested Facebook business model: hyper-targeted advertising.

One example of this is an update which will allow users to invite connections to watch videos in a private message or group. In years gone by, this would have been sacrilege to Facebook executives. If it is private, how can it be used to tune the advertising machine? Where is the opportunity to make money?

This is the risk Facebook is facing up to; its traditional business model is under threat. Its reputation for handling privacy is in tatters and the world is turning against Facebook. If it continues on the path of collecting and harvesting data in this manner, someone will eventually step in and stop it. Governments and regulators are cracking down on the data sharing economy, and Facebook has been made enemy number one.

But all is not lost. Facebook still has a couple of tricks up its sleeve. Firstly, the core social media platform is salvageable. It might look like a digital Yellow Pages today, but in bygone years it was a genuinely engaging platform. Somewhere along the line executives got grabby and started prioritising advertising over engagement, and the platform suffered as a result. If Facebook can rediscover the magic of old, all will be forgiven, such is the short-term memory of many consumers.

This might mean having to sacrifice the hyper-targeted advertising model, but if Zuckerberg’s claims on privacy are to be believed, Facebook might be moving away from it anyway.

Fortunately, with a reinvigorated platform which people trust and enjoy, Facebook can bolt services on and beside it, as opposed to through it. This is a perfectly feasible business model: running the platform as a loss-leader, maintaining a more transparent advertising business and using the credibility to monetize premium services. And it might be a sensible direction for Facebook to go. It has worked before and will work again.

To make this idea work, Facebook will need a few things. Firstly, the ambition to explore new ideas. Secondly, smart people. And finally, R&D funds. Facebook has all these things in abundance.

Facebook has already shown its ambition with the launch of AR/VR, video platforms, online marketplaces, dating applications and enterprise services (just to name a few). It has and will continue to attract some of the world’s most intelligent engineers and business people. And finally, Facebook has bags of cash.

This, of course, is taking Zuckerberg at his word. It might be nothing more than a ploy to generate positive PR. The hyper-targeted advertising model might simply be evolving with the help of small print and clever distractions. But Zuckerberg surely is smarter than this; another case of misleading the general public would surely be a step too far.

Zuckerberg might be waking up to the fact he cannot hide from the horrid and distasteful reputation he and his firm have developed. Perhaps Facebook has realised it needs to fundamentally change its business model. Maybe Zuckerberg wants to disrupt his own business before governments and regulators try to destroy it.

If 52% don’t understand data-sharing economy, is opt-in redundant?

Nieman Lab has unveiled the results of research suggesting more than half of adults do not realise Google is collecting and storing personal data through usage of its platforms.

The research itself is quite shocking and outlines a serious issue as we stride deeper into the digital economy. If the general population does not understand the basic principles behind the data-sharing economy, how are they possibly going to protect themselves against the nefarious intentions from the darker corners of the virtual world?

You also have to question whether there is any point in the internet players seeking consent if the user does not understand what he/she is signing up for.

According to the research, 52% of the survey respondents do not expect Google to collect data about a person’s activities when using its platforms, such as search engines or YouTube, while 57% do not believe Google is tracking their web activity in order to create more tailored advertisements.

While most people working in the TMT industry would assume the business models of Google and the other internet giants are common knowledge, the data here suggests otherwise.

66% also do not realise Google will have access to personal data when using non-Google apps, while 64% are unaware third-party information will be used to enhance the accuracy of adverts served on the Google platforms. Surprisingly, only 57% of the survey respondents realise Google will merge the data collected on each of its own platforms to create profiles of users.

Although this survey has been focused on Google, it would be fair to assume the same respondents do not appreciate this is how many newly emerging companies are fuelling their spreadsheets. The data-sharing economy is the very reason many of the services we enjoy today are free, though if users are not aware of how this segment functions, you have to question whether Google and the other internet giants are doing their jobs.

The ideas of opt-in and consent are critically important nowadays. New rules in the European Union, GDPR, set out significant changes dictating how companies collect, store and use the personal information gathered by service providers. These rules were supposed to enforce transparency and put the user in control of their personal information, though this research does not offer much encouragement.

If the research suggests more than half of adults do not understand how Google collects personal information or uses it to enhance its own advertising capabilities, what is the point of the opt-in process in the first place?

Reports like this suggest the opt-in process is largely meaningless as users do not understand what they are giving the likes of Google permission to do. The blame for this lack of education is split between the internet giants, who have become experts at muddying the waters, and the users themselves.

Those who use the services for free but do not question the continued existence of ‘free’ platforms should forgo the right to be annoyed when scandals emerge. Not taking the time to understand, or at least attempt to understand, the intricacies of the data-sharing economy is the reason many of these scandals emerge in the first place; users have been blindly handing power to the internet giants.

The internet players need to do more to educate the world on their business models, however the user does have to take some of the responsibility. We’re not suggesting everyone becomes an internet economy expert, but gaining a basic understanding is not incredibly difficult. However, it does seem ignorance is bliss.

Google caves in to employee activism… this time

The Silicon Valley search giant has decided to dissolve its AI ethical council, one week after it was created, in response to opposition from its own employees. But it’s not always so responsive to their concerns.

A week after the Advanced Technology External Advisory Council (ATEAC) was created, Google told Vox that it has decided to cancel the project. Controversy had followed the project from the start, especially surrounding one council member Google enlisted. This prompted an internal petition that attracted the signatures of more than 2,300 employees and the resignation of one council member. The sole purpose of ATEAC, with its members unpaid and the body without any decision-making power, seems to have been to generate good PR. In that respect it represents a spectacular own-goal, so Google has bravely run away.

“It’s become clear that in the current environment, ATEAC can’t function as we wanted. So we’re ending the council and going back to the drawing board. We’ll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics.” Google sent this statement to Vox.

This is not the first time that Google has “listened to employees”. In June 2018, Google famously “ditched contract with the US military” after more than 3,000 employees protested against the company’s AI technology being used for military surveillance under the so-called Project Maven.

But Google has not always respected its employees’ views. Almost exactly a year after he disclosed that Google was secretly working on a censored version of its search engine for China, Ryan Gallagher, the reporter for The Intercept, updated interested readers with the news that Google was closer to readiness with the so-called Project Dragonfly. Some senior executives were said to be conducting a secret “performance review” of the product, contrary to Google’s normal practice of involving large numbers of employees when assessing upcoming products.

Despite more than 1,400 employees having condemned Project Dragonfly, a number of resignations, and Google’s CEO having to testify in front of Congress, Google looks rather determined to push forward with its China re-entry strategy. The Financial Times reported that the search and online advertising giant recently suspended serving ads on two Chinese websites that evaluate VPNs, which would have helped users inside the Great Firewall bypass the blocking. A local research firm told the FT that, considering the acrimonious nature of Google’s departure from China nine years ago, the company “may feel compelled to make additional efforts to curry favour and get back in the good graces to get approval to re-enter the market.”

So it is not clear whether Google decided not to back down because fewer employees protested against Project Dragonfly, or because the resignations were lower-profile, or simply because it is more convenient to disband a rubber-stamp council or discontinue a contract with the American military than to resist the temptation of the Chinese market and stand up to the censorial demands of the Chinese authorities.

Microsoft and BMW pair up for IoT Open Manufacturing Platform

Microsoft has partnered up with the BMW Group to launch a new initiative aimed at stimulating growth for IoT in the smart factory segment.

The Open Manufacturing Platform (OMP) will be built on the Microsoft Azure cloud platform, aiming to have four to six partners by the end of the year, to help grow an ecosystem and build future Industry 4.0 solutions. The smart factory segment is promising much with the emergence of 5G, but with every new concept there is scepticism; someone always needs to drag it towards the finish line.

“Microsoft is joining forces with the BMW Group to transform digital production efficiency across the industry,” said Scott Guthrie, EVP of the Microsoft Cloud and AI Group. “Our commitment to building an open community will create new opportunities for collaboration across the entire manufacturing value chain.”

“We have been relying on the cloud since 2016 and are consistently developing new approaches,” said Oliver Zipse, a board member at BMW. “With the Open Manufacturing Platform as the next step, we want to make our solutions available to other companies and jointly leverage potential in order to secure our strong position in the market in the long term.”

BMW is already a significant customer of Microsoft Azure, with over 3,000 machines, robots and autonomous transport systems connected through the BMW Group IoT platform, which is built on the Microsoft Azure cloud.

Openness is one of the key messages here as the pair bemoan data silos and slow productivity created by complex, proprietary systems. The OMP aims to break down these barriers through the creation of an open technology framework and cross-industry community.

For both, the objective of this group is relatively simple. At BMW, the team wants to improve operational efficiencies and reduce costs, partly by taking back control of the supply chain, while Microsoft just wants more people, processes and data on Azure. The more accessible the smart factory is, the more companies will become cloud-first, and the more successful the OMP becomes, the more customers Azure gains.

The OMP will provide community members with a reference architecture built on open source components, open industrial standards and an open data model. Through this openness, the pair claim, data models will be standardised to enable more data analytics and machine learning scenarios and use cases. For Microsoft and the manufacturers it’s great news; for the suppliers, not so much.
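Neither Microsoft nor BMW has published the OMP data model in detail, so the snippet below is only a hypothetical sketch of what a common data model implies in practice: a vendor-neutral telemetry record plus a small adapter that maps one made-up proprietary payload into it before it is sent off for analytics.

```python
# Illustrative only: a hypothetical common data model for machine telemetry,
# not the OMP's published schema.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class MachineTelemetry:
    """A minimal, vendor-neutral telemetry record."""
    machine_id: str
    timestamp: datetime
    metric: str        # e.g. "spindle_speed"
    value: float
    unit: str          # e.g. "rpm"

def from_vendor_a(raw: dict) -> MachineTelemetry:
    """Adapter for one (hypothetical) proprietary payload format."""
    return MachineTelemetry(
        machine_id=raw["dev"],
        timestamp=datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
        metric=raw["sig"],
        value=float(raw["val"]),
        unit=raw.get("u", "unknown"),
    )

# A proprietary message as it might arrive from a robot controller.
sample = {"dev": "robot-42", "ts": 1561000000, "sig": "spindle_speed",
          "val": 1480.0, "u": "rpm"}
print(from_vendor_a(sample))
```

The point of standardising at this layer is that every additional vendor only needs one adapter, rather than every analytics or machine learning workload needing to understand every proprietary format.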

Openness sounds like a great idea, but with any fundamental change comes consequence. There will be numerous companies which benefit considerably from proprietary technologies and processes, especially in traditional industries like manufacturing, though those who resist change will be the losers in the long run. The world is evolving to a new dynamic where openness rules the roost; resistance only means future redundancy.

Facebook calls on governments to help control content on the Internet

Facebook founder and CEO Mark Zuckerberg has called on governments and regulators to play a more active role in developing new rules for the internet.

In an op-ed for the Washington Post, Zuck claimed that the current rules of the internet have served his generation of entrepreneurs well, but “it’s time to update these rules to define clear responsibilities for people, companies and governments going forward.” He argued that companies like Facebook should not make daily judgments on the nature of all the content going through their platforms by themselves. “I believe we need a more active role for governments and regulators,” Zuckerberg said. For what he called the new rules for the internet, Zuckerberg proposed that the parties involved in the governance of the internet should focus on four areas.

“Harmful content” came top of his list. Zuckerberg conceded that Facebook has too much power over speech, and believes there is a need for an independent oversight body, dubbed by some media a “Facebook Supreme Court”, which the company is building up. “First, it will prevent the concentration of too much decision-making within our teams. Second, it will create accountability and oversight. Third, it will provide assurance that these decisions are made in the best interests of our community and not for commercial reasons,” Zuckerberg explained when the content governance and enforcement plan was published last November.

Zuckerberg also cited the company’s collaboration with the French government to highlight Facebook’s willingness to work with regulators. Since January, Facebook has hosted a group of senior French civil servants, including officials from the telecoms regulator Arcep (Autorité de régulation des communications électroniques et des Postes) and the Ministry of Justice, so they can identify good practices at Facebook which the delegation can endorse. Incidentally, France raised nearly 38,000 requests for Facebook pages to be taken down in 2015, by far the highest number of any country, according to Statista figures from a few years back cited by the French media outlet Le Journal du Net (JDN).

Second on Zuckerberg’s list is “election integrity”. Recognising the significant role Facebook data, and the misuse of it, has played in recent political campaigns, the company is implementing new rules for political ads in the run-up to the European Parliamentary election in May. Users are able to search who is behind a certain ad, how much was paid, the number of times the ad has been viewed, and the demographics of those who have viewed it. The ads will be stored in the “Ad Library” by Facebook for seven years.

However, Zuckerberg also recognised both the difficulty of identifying political ads (“deciding whether an ad is political isn’t always straightforward”), and the inadequacy of the current rules on political campaigns including online political advertising. Therefore, he was calling for both common standards for verifying political actors, and for updates on the laws to keep up with the fast-changing online realities. At about the same time, Facebook published a post to explain how “Why am I seeing this post?” and “Why am I seeing this ad?” work, to further its efforts to be more transparent.

“Privacy” is next on Zuckerberg’s list. He focused on the topic of privacy in a long post recently, so he did not spell out the measures Facebook is taking. Instead, Zuckerberg called on governments and regulators in all countries to develop a common global framework modelled on the GDPR regime in the EU.

Last on the list is “data portability”, i.e. users should be able to seamlessly and securely move their data from one platform to another. This is centred on the Data Transfer Project (DTP) that Facebook is contributing to, together with Google, Microsoft, and Twitter, and is not directly related to governments or regulators. The project aims to build “a common framework with open-source code that can connect any two online service providers”. When the user initiates a data transfer request, DTP will use the “services’ existing APIs and authorization mechanisms to access data. It then uses service specific adapters to transfer that data into a common format, and then back into the new service’s API.”
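To make the adapter idea behind the DTP concrete, here is a minimal sketch of the pattern described above. The class names, the CommonPhoto format and the in-memory services are hypothetical stand-ins for illustration, not the project’s actual code; each real provider would implement an exporter and importer against its own API and authorization flow.

```python
# Illustrative sketch of the adapter pattern behind data portability projects
# such as DTP. Names and the "common format" are hypothetical, not DTP's code.
from dataclasses import dataclass
from typing import List, Protocol

@dataclass
class CommonPhoto:
    """A provider-neutral representation of one photo."""
    title: str
    url: str
    album: str

class Exporter(Protocol):
    def export(self, auth_token: str) -> List[CommonPhoto]: ...

class Importer(Protocol):
    def import_items(self, auth_token: str, items: List[CommonPhoto]) -> None: ...

def transfer(source: Exporter, destination: Importer,
             source_token: str, destination_token: str) -> None:
    """Pull data from one service via its adapter and push it to another."""
    items = source.export(source_token)                  # service A -> common format
    destination.import_items(destination_token, items)   # common format -> service B

class InMemoryService:
    """A toy service playing both roles, standing in for a real provider's API."""
    def __init__(self, photos=None):
        self.photos = list(photos or [])
    def export(self, auth_token: str) -> List[CommonPhoto]:
        return list(self.photos)
    def import_items(self, auth_token: str, items: List[CommonPhoto]) -> None:
        self.photos.extend(items)

# Move one user's photos from service A to service B.
service_a = InMemoryService([CommonPhoto("Holiday", "https://example.com/1.jpg", "2019")])
service_b = InMemoryService()
transfer(service_a, service_b, "token-a", "token-b")
print(service_b.photos)
```

The common format is what keeps the number of adapters manageable: each provider translates to and from one shared schema rather than to every other provider individually.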

Zuckerberg has extended plenty of goodwill recently, and there is no reason to question his sincerity. However, in addition to the lack of implementation detail in his proposal, his call for actively working with governments and regulators could be a double-edged sword. On one hand, a global oversight body could define a set of common rules that all internet companies can be regulated by and assessed against. On the other hand, how to avoid being dictated to by the agenda of individual governments, especially in countries where the demarcation between politicians and professional, supposedly neutral civil servants is less clear, is a hard question to answer. For example, how fundamentally different is Facebook’s collaboration with the French government from Google’s clandestine efforts to satisfy the Chinese government’s censorship requests?

Facebook faces hyper-targeted advertising lawsuit

The US Department of Housing and Urban Development (HUD) has lodged a lawsuit against Facebook, challenging the hyper-targeted big data model which has made OTTs billions over the years.

Citing the Fair Housing Act, the HUD has claimed Facebook is breaking the law by encouraging, enabling and causing housing discrimination. The Fair Housing Act prohibits discrimination in housing and housing-related services, including online advertisements. Facebook’s advertising platform is said to discriminate against individuals based on race, colour, national origin, religion, sex, disability and familial status, violating the Act.

“Even as we confront new technologies, the fair housing laws enacted over half a century ago remain clear – discrimination in housing-related advertising is against the law,” said HUD General Counsel Paul Compton.

“Just because a process to deliver advertising is opaque and complex doesn’t mean that it exempts Facebook and others from our scrutiny and the law of the land. Fashioning appropriate remedies and the rules of the road for today’s technology as it impacts housing are a priority for HUD.”

Complaints were originally raised by the HUD last summer, though the two parties have been in discussions to come to some sort of settlement to avoid legal action. Reading between the lines, talks have broken down or the HUD leadership team wants to give the impression it is taking a more hardened stance against the social media segment.

Although it should come as little surprise Facebook is facing a lawsuit considering the ability for Mark Zuckerberg to stumble from one blunder to the next, this one effectively challenges the foundations of the business model. Hyper-targeted advertising is the core not only of Facebook’s business, but numerous other companies which have emerged as the dawn breaks on the blossoming data-sharing economy.

What is worth noting is this is not the first time Facebook has faced such criticism. The American Civil Liberties Union (ACLU) has also challenged the social media giant, and earlier this month Facebook stated it was changing the way its advertising platform is set up to prevent abuse of the targeting features.

“One of our top priorities is protecting people from discrimination on Facebook,” said Facebook COO Sheryl Sandberg. “Today, we’re announcing changes in how we manage housing, employment and credit ads on our platform. These changes are the result of historic settlement agreements with leading civil rights organizations and ongoing input from civil rights experts.”

As a result of the clash with the ACLU and other parties, Facebook agreed to remove any gender, age and race-based targeting from housing and employment adverts, creating a one-stop portal instead.

According to the HUD, Facebook allows advertisers to exclude individuals from campaigns based on where they live and their societal status; categories such as whether someone is a parent or non-American have been deemed discriminatory. Facebook also allows advertisers to effectively zone off neighbourhoods for campaigns, which is also deemed a violation of the Act. By bringing together data from the digital platform and other insight from non-digital means, the HUD is effectively challenging the legitimacy of digital and targeted advertising.

As with other similar cases, the HUD is bringing attention to the light-touch regulatory landscape for the internet economy. While traditional advertising is held accountable by strict rules, the internet operates with relative freedom. This is partly down to the comparative youth of mass-market online media, and partly down to the fact that few bureaucrats understand how the data machines work.

What is worth noting is that this is an incredibly narrow focus for the HUD, though should it be successful, the same concepts could be applied to challenge other elements of Facebook’s hyper-targeted advertising model.

Facebook might be the target here, though many companies will be watching this case with intrigue. Precedent is a powerful tool in the legal and regulatory world, and should the HUD win, the same business model being applied elsewhere would also be compromised.