Trouw – Biased algorithms show our human shortcomings and that is a blessing (December 2019)

Published @ https://www.trouw.nl/opinie/algoritmen-tonen-het-menselijk-tekort~b3238ee7

If algorithms discriminate, it’s because they’re programmed that way by humans. We can and should take advantage of this, write Thijs Pepping and Sander Duivestein, of SogetiLabs’ Research Institute (VINT).

Apple's virtual assistant was initially able to find prostitutes and Viagra stores, but not abortion clinics. Thanks to a built-in racial bias, an algorithm widely used in the US health care industry concluded that white patients are at higher risk for diabetes and high blood pressure than black patients. There are numerous examples showing that artificial intelligence (AI) systems have prejudices.

It is a shocking discovery that not all AI systems are inclusive and do their calculations indiscriminately. But it is too easy to conclude that the technology falls short, because this failure also exposes humanity's inner workings. Prejudices that lie beneath the surface are made visible, and even magnified, by AI. As is often the case with technology, these biased algorithms present what it means to be human on a silver platter, with both our good and our ugly sides. There is clearly work to do.

fighting bias in algorithms

The American journal Science recently published that the racial bias of the widely used healthcare algorithm arose because the data used to train it was not objective. US society spends less money on black patients than on equally ill white patients, so the algorithm underestimated the real needs of black patients. This is how an unconscious injustice in society ended up in the algorithm. There is no question that we have to fight bias in algorithms. Technology has the potential to scale up, which means that an inequality in an algorithm can quickly have major consequences for large groups of people.

ethical implications of technology

Awareness of the ethical implications of technology is growing steadily. For example, initiatives are now being developed to actively find and remove bias from algorithms. Researchers at Delft University of Technology are working on software that makes 'ethical considerations'. Scientists from Stanford University and the University of Massachusetts argue in favor of pre-filtering 'undesirable behaviour' by algorithms. However, it is much easier to change algorithms than to change people. Software on computers can be updated, but the so-called wetware in the human brain has so far proven to be much less flexible. The work of psychologist Daniel Kahneman, among others, and everyday reality show the fallibility of our rationality and how difficult it is to overcome prejudice.

face the facts

Help from algorithms offers a solution. Artificial intelligence, with its accompanying increase in scale, is an effective way to force people to face the facts. Failing algorithms show the consequences of a lack of diversity in development teams, gender bias in organizations or injustices in society. What used to be complicated science for sociologists, psychologists and philosophers is now made plainly visible thanks to the use of artificial intelligence. This makes the internet the world's largest social experiment, and people should use it to their advantage.

Smart technology holds up a mirror to us that provides an extremely intimate glimpse into the human psyche, self-image and society. Tinkering with algorithms and improving and enriching data is necessary. At the same time, society itself has to work on the homework that algorithms give us. Never before has society been so openly confronted with its own shortcomings. If we dare to confront it, this will be a blessing.

ICT Magazine – Cobots will evolve from being smart to being creative (December 2019)

Published @ https://www.ictmagazine.nl/achter-het-nieuws/met-cobots-van-slimmigheid-naar-digitale-creativiteit/

It is the season of countdowns and predictions. Gartner, Forrester and many others are falling over each other with lists of what technology will bring us next year. My thesis: new technology will become an important sparring partner in creativity.

While IT has mainly been used in recent decades to speed up and simplify processes, we will benefit from the emergence of creative cobots in the coming years. We are now becoming more and more accustomed to industrial robotic arms, smart package movers and other technology in which artificial intelligence makes our work easier. But creative thinking power still mainly comes from within ourselves. Soon, cobots will act as catalysts of creativity and will also help us come up with completely new ideas. Dystopian? No, we can use the extra creativity in a world that is becoming more complex and challenging.

from analyzing to generating

Artificial Intelligence (AI) has broken through in many areas in recent years. Machine-learning technology can not only recognize and analyze patterns; thanks to Generative Adversarial Networks (GANs), AI can now also create things. Synthetic texts, images and artificial video will become commonplace in the coming years. In his book The Creativity Code: How AI is Learning to Write, Paint and Think, mathematician Marcus du Sautoy explores the influence of smart machines on our creativity. One of the most talked-about examples at the moment is the AI-created painting Portrait of Edmond de Belamy, which sold for $432,500. While many people are still debating whether machines can be creative themselves, it is much more relevant to ask what the impact of these systems will be on human creativity.

catalyst of creativity

Now that these systems are increasingly capable of creating, they are acting as catalysts for creativity. Take the GPT-2 neural network, for example, developed by OpenAI, the organization that aims to make AI models accessible to everyone. This recently released model is trained to guess the next word in a text (see the "Neural Network" box). Although the synthesized text is given a credibility score of 6.91 on a scale of 10, the neural network is not yet so good that every answer it provides can be used immediately. For a catalyst in an ideation process, however, that is not necessary. The computer generates something new and, depending on the parameters, this deviates a lot or a little from the human train of thought. In the future, a digital assistant could provide input at any moment it is needed: based on previous conversations, earlier generated ideas or books you have read, or, for example, on inspiration from colleagues in other sectors.

neural network

Fill in the words “This column for ICT/Magazine is about…” in OpenAI’s GPT-2 and the neural network itself comes up with the following suggestion: “This column for ICT/Magazine is about cyber security, not about hacking and surveillance. First, one very important but little-mentioned fact: Keyes’ claim that NSA agents “obsess over suspicious people’s email activity” is very far from the truth. The Office of the Director of National Intelligence, which handles and maintains the nation’s signals intelligence […]”
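To make the idea concrete: the sketch below shows how the publicly released GPT-2 model can be prompted for suggestions like the one in the box above. It is only an illustration, under the assumption that the freely available model and the Hugging Face transformers library are used; the column does not describe which tooling actually generated the text.

```python
# Minimal sketch: prompting the publicly released GPT-2 model for suggestions.
# Assumes the Hugging Face `transformers` library is installed; the column does
# not say which tooling produced the text in the box, so this is illustrative only.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "This column for ICT/Magazine is about"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a few continuations; a higher temperature strays further from the
# human train of thought, which is fine for an ideation aid.
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    temperature=0.9,
    top_k=50,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)

for i, sequence in enumerate(outputs, start=1):
    print(f"Suggestion {i}: {tokenizer.decode(sequence, skip_special_tokens=True)}\n")
```

Raising the temperature or the number of sampled sequences makes the suggestions stray further from the obvious, which is exactly what you want from an ideation partner rather than from a finished text.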

accelerate innovation

If new technology is going to help in the field of creativity, accelerating innovation is the obvious place to start. Recently, the pharmaceutical industry was in turmoil over the announcement by US biotech company Insilico Medicine that it had designed a molecule that fights fibrosis. The company did this thanks to GANs. In 46 days, the technology came up with 30,000 different molecular designs, one of which was eventually tested on a mouse. According to scientists, the synthesized molecule showed the characteristics of a drug. This represents a breakthrough for the development of expensive drugs: developing and marketing a new drug takes an average of 10 years and $2.6 billion. With the use of GANs, the number of designed molecules was scaled up and the turnaround time of the so-called in vitro test phase was drastically reduced. According to the company, the cost of this first development trajectory amounted to about $150,000. In this interplay, AI acts as a catalyst for creativity and the innovation process.

democratizing creativity

Creative machines not only accelerate innovation, they also democratize digital creativity. In other words, creativity is becoming accessible to everyone. With Nvidia's GauGAN, for example, a simple drawing is made photo-realistic, which allows users to visualize their idea much more powerfully in a brainstorming session. With the open-source software pix2code, a visualization is converted directly into HTML code. An idea is suddenly digitized, and the more such systems are linked, the faster an idea becomes reality.

Narrow creativity

This development fits in seamlessly with the statement of GitHub CEO Chris Wanstrath: "The future of coding is no coding at all." In this way we are entering an era in which the concept, a thought or a joke becomes increasingly important, with humans assisted by technology. And of course, a lot still needs to happen before creative cobots are a seamless part of our thinking and working process. The current examples show that they are a good catalyst for creativity in one specific situation. Call it narrow creativity, in line with artificial narrow intelligence (as opposed to artificial general intelligence). It is the beginning of what will soon become our creative cobots: creative assistants who will challenge us to bring out the best in ourselves, regardless of whether we want to become the next Picasso, Einstein or Steve Jobs.

Thijs Pepping is a trend analyst at SogetiLabs’ Research Institute (VINT).

Het Financieele Dagblad – Overtrump 'OK Boomer' and bridge the generation gap (November 2019)

Published @ https://fd.nl/opinie/1326305/overtroef-ok-boomer-en-doorbreek-de-generatiekloof-ofl1cazj0ZKU

With the rise of the internet meme 'OK Boomer', young people are signalling en masse that they have little confidence in a dialogue with older generations. OK Boomer is shorthand for 'chatter away, your time is up, baby boomer.' The social media hit should be a warning to executives of the baby boom generation: make way for others, or listen to the young and adapt to the new times. CEOs must take personal and sincere action to bridge the generation gap between young and old.

“OK Boomer” comes from the United States and originated on the social media platform TikTok after an elderly man announced in a video that millennials and Generation Z just don’t want to grow up. The video went viral and the meme “OK Boomer” marked a massive backlash from youth. It has now become a symbol of the disturbed relationship between the generations.

The new generation feels burdened with a long list of problems caused by older generations: think of global warming, growing inequality in the world or the lingering Brexit. These problems touch the most important values of the new generation, as the World Values Survey also shows. Many young people no longer accept that companies and governments do not take responsibility for these problems.

A growing number of companies see this attitude among young people and are feeling the pressure to change. Not surprising, because 63% of the world population already consists of (future) professionals born after 1980. This summer, for example, 181 major US companies, including Apple, Salesforce and JPMorgan Chase, stated in a letter that their organizations will give priority to improving society over the profit motive.

The rest of the business community should also follow suit and connect with the values of the new generations. Therefore, remove corporate social responsibility from the periphery and bring it to the center of the organization. Let executives take an active stance on global problems and come up with concrete solutions. Moreover, activism must take place in the workplace itself, with jobs that contribute to self-development and the building of a better world.

A better future can start tomorrow if both baby boomers and young people put aside their disdain and start building a future together.

Menno van Doorn, Sander Duivestein and Thijs Pepping are trend analysts at IT service provider Sogeti.

ICT Magazine – Is your organization prepared for Deepfakes? (October 2019)

Published @ https://www.ictmagazine.nl/achter-het-nieuws/voorbereid-op-deepfakes/

The CEO of a renowned energy company in the United Kingdom thought he was talking to his boss from the German parent company. In the boss's recognizable English, with its typical German accent, he was instructed to transfer €220,000 to a Hungarian supplier. In good faith, the executive immediately transferred the money, after which it turned out to be the bank account of cyber criminals.

As far as we know, this is one of the first cyber attacks in which deepfake voice technology made the difference. The insurer Euler Hermes Group has since reimbursed the amount. The voice was generated with artificial intelligence and was hardly distinguishable from the real thing. The question remains what to do against such digital practices, in which the use of AI is becoming the new normal. Deepfakes are coming and will have an impact on architecture, security, identity management, marketing and undoubtedly much more within an organization's IT landscape.

digital people

Deepfake technology has been in the news a lot lately, and not without reason. Generative Adversarial Networks (GANs[1]) provide a breakthrough in the generation of faces, products and basically any kind of data and material. The technology is also being used to give people with ALS their own voice back. The first company to offer AI-generated stock photos already exists: 100,000 portraits are offered on the generated.photos website, faces of digital people with no history, thoughts or privacy-related wishes. Moreover, the technology is becoming increasingly accessible and manageable. Using the Ebsynth application, any consumer can deepfake their own film in no time at all. The Chinese app 'Zao' puts any random person in major TV shows or well-known film classics with just a few photos.

for hackers

Like other technology, the impact of deepfakes is extremely multifaceted. The deepfake-voice incident at the energy company is not an isolated one. Last summer, security software company Symantec reported three cases in which deepfake audio technology had been deployed to make the voice of executives ask their financial colleagues to transfer money. The technology is far from perfect at the moment, but experience shows that imperfections are camouflaged by background noise or by imitating a bad connection. Deepfake technology is also used to enhance phishing attacks. Messages from banks, healthcare providers, an auction site or any other e-mail sender can hardly be distinguished from the real thing anymore.

cat and mouse game

The race in this new cat-and-mouse game is now in full swing. Reliable, fast and scalable detection of deepfakes is still lacking. Digital forensic scientists are working hard on new tools to better detect deepfakes. At the same time, experts warn that this smart technology is only going to get better. Hao Li, deepfake pioneer and researcher, foresees that it will take no more than three years before advanced deepfakes are truly indistinguishable from reality. Blockchain technology can provide solutions: data, for example in the form of video, can no longer be manipulated unnoticed once its fingerprint is anchored in a blockchain. However, such blockchain promises are still in an experimental phase. Google also recently shared an open-source database of more than 3,000 deepfake videos, together with the original footage, as part of its effort to accelerate the development of detection tools for deepfake technology. Facebook will release a similar database at the end of this year. With the US elections of 2020 looming, they will no doubt want to avoid scandals like the earlier one with Cambridge Analytica.
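The blockchain idea mentioned above boils down to registering a cryptographic fingerprint of the original footage at the moment of publication; any later manipulation changes the fingerprint. The sketch below illustrates only that principle, with a plain local registry standing in for the immutable ledger; it is a simplified assumption, not a description of any existing product.

```python
# Sketch of the fingerprint principle behind blockchain-based video verification:
# record a SHA-256 hash of the original file at publication time; any later
# manipulation changes the hash. A plain dict stands in for the immutable ledger
# here, purely for illustration.
import hashlib
from pathlib import Path

registry = {}  # filename -> hash recorded at publication time


def fingerprint(path: Path) -> str:
    """Return the SHA-256 hash of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def register(path: Path) -> None:
    """Record the fingerprint of the original footage."""
    registry[path.name] = fingerprint(path)


def is_untampered(path: Path) -> bool:
    """True only if the file still matches the fingerprint recorded earlier."""
    return registry.get(path.name) == fingerprint(path)
```

Note what such a check can and cannot do: it reveals that a registered file has been altered afterwards, but it cannot prove that the registered original was authentic in the first place.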

anticipate

The rapid developments in deepfakes require new business measures. Every organization must prepare identity-protection measures. Digital onboarding processes may need to be revised for the use of biometric data of (future) employees. Data must be tested for reliability. At the same time, identity protection also requires data to be secured so that it cannot be used to create deepfakes.

According to the recent Dutch National Cybersecurity Awareness Survey, awareness of phishing emails, cyber attacks, data breaches, spyware, keyloggers and port scans has increased. But the word 'deepfake' does not yet appear in the Dutch report. Citizens must realize that images and sound can also be synthesized, in such a way that it is hardly possible to distinguish real from fake. Who knows, 'How to deal with deepfakes' courses may soon follow, and hired red teams may use deepfakes to test the security of organizations. After all, who can prove whether the video that marketers use on social media, for example, is a deepfake or not?

How will the value of facial recognition software continue to develop in the coming years? And what should the IT manager do with these developments?

real encounters

In this new phase of synthetic media, physical human contact seems more valuable than ever. How ironic is that in an era in which everything is digitized and digital innovations follow each other at a rapid pace? Verifying the truth may soon require looking someone in the eye and hearing their words directly. Nothing gives more certainty than a person standing in front of you with a tangible passport in his or her hands. Or is this just one of many iterations of our quest for the optimal synergy between our digital and analog lives? Either way, we have to prepare our IT landscape for it.

Thijs Pepping is a trend analyst at SogetiLabs’ Research Institute (VINT).

[1] See the article ‘GANs: tooling of tomorrow’ in the April edition of ICT/Magazine.

 

ICT Magazine – The CRM system is reaching its end (September 2019)

Published @ https://www.ictmagazine.nl/achter-het-nieuws/het-crm-systeem-heeft-zijn-langste-tijd-gehad/

The tragedy behind many a system development is that it starts with good intentions and often ends as a monstrosity. Currently, experts are falling over each other to declare the once-promising agile way of working dead, in order to go back to the roots for new impetus. With golden oldie CRM, we have known for a long time that we are dealing with a monstrosity. Old-fashioned power relations between customer and supplier lead to a one-sided relationship that is no longer up to date.

Yet the Kafkaesque situations that sometimes arise do not stop companies from pressing ahead with it. Leading experts remain skeptical about the future of CRM systems, which often play a central role in interacting with consumers and in bridging the gap between IT and business. Here too, going back to the roots offers a way out. Building relationships, that is what it was all about, right? And a relationship always involves reciprocity. This means that it is no longer the supplier who should be in charge of managing the customer, but the consumer himself.

Good relationships

Customers need tools to manage their relationships with organizations, and there is a whole world to be won there. Doc Searls, American author of The Intention Economy: When Customers Take Charge, has been fighting for a CRM revolution for years. He believes that CRM has become shorthand for the one-sided storage of customer data, fueled by data-espionage practices. That is a long way from the original intention of building good relationships with customers. In recent years, attempts have been made to change this: social CRM was added, there is more attention for the so-called customer experience, and the Frequently Asked Questions have been moved to a chatbot. Despite this, the essence of CRM remains the one-sided storage of customer data. CRM continues to reduce the consumer to his data.

Infantile and undesirable

As long as we collect as much data as possible to build the perfect 'digital twin' of the customer, we can serve him optimally. That is often the current thinking. Meanwhile, we forget what it really means to maintain an equal relationship. In the meantime, privacy legislation is firmly in place to curb such technological exploitation. We are also making progress thanks to the notorious Cambridge Analytica abuses and our growing need for responsible handling of personal data. Consumers are increasingly aware of the value of the digital exhaust they leave behind on the web. People do not want to be a data toy in the hands of government agencies and commerce. We want to control our own lives and determine for ourselves with whom and how we enter into relationships, regardless of what all those data points say about our humanity and our deepest desires. The most important reason for switching towards 'consumers in control of their own data' is that data quality also improves this way. Customer Relationship Management then becomes a Vendor Relationship Management (VRM) system: the customer himself determines which rights are assigned to which organizations, and in this way his intentions are made known through his own data vault.

Customer at the helm

That is why Doc Searls advocates a self-sovereign identity: consumers should own their own digital identity. The customer is in control and organizations move around him. This gives customers control over how their personal data is spread across the internet. The same consumer determines, for example, when it is time for a change of address and where and how this must be adjusted in which systems. That means a radical revolution of the system. Actually, this idea is not new; it has been around for more than a decade. The question is why we were not ready for it back then. Even more important is the question: are we ready now? According to Searls, the answer is clear. Privacy violations have made the internet a bad place, and the proverbial damage has been done more than once. Searls sees hope in the blockchain: this technology could make such a revolution towards consumers-in-control possible.
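What "the customer determines which rights are assigned to which organizations" could look like in practice is sketched below as a toy data structure: a personal data vault with explicit, revocable grants. All names and methods are hypothetical illustrations, not an existing VRM product or standard.

```python
# Illustrative sketch only (not an existing VRM product or standard): a personal
# data vault in which the consumer, not the vendor, records which organization
# may read which attribute, and can revoke that right at any time.
from dataclasses import dataclass, field


@dataclass
class DataVault:
    attributes: dict = field(default_factory=dict)  # e.g. "address" -> "Main St 1"
    grants: dict = field(default_factory=dict)      # organization -> set of allowed attributes

    def grant(self, organization: str, attribute: str):
        self.grants.setdefault(organization, set()).add(attribute)

    def revoke(self, organization: str, attribute: str):
        self.grants.get(organization, set()).discard(attribute)

    def read(self, organization: str, attribute: str):
        """An organization only sees an attribute the owner has granted."""
        if attribute in self.grants.get(organization, set()):
            return self.attributes.get(attribute)
        return None


# One change of address, made by the customer in one place: every organization
# with a grant sees the new value, every organization without one sees nothing.
vault = DataVault(attributes={"address": "Main St 1"})
vault.grant("energy-supplier", "address")
vault.attributes["address"] = "Canal St 2"
print(vault.read("energy-supplier", "address"))  # Canal St 2
print(vault.read("webshop", "address"))          # None
```

The point of the design is the inversion of control: the organization queries the customer's vault instead of maintaining its own copy of the customer.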

Dreaming away from the relationship crisis

All the effort that is now being put into storing as much customer data as possible can actually be thrown overboard. There is no longer a need for data espionage by slipping through the cracks of the law. The time and energy this saves can be invested in the question of how we can improve customer relationships in a meaningful reciprocal way. Although consumers do not want to be manipulated, they naturally continue to look for more and better products and services that match individual wishes as closely as possible. The ultimate relationship between companies and customers should no longer be based on primary profit and revenue goals, but on higher goals such as self-fulfilment, development and finding and creating meaning in life.

That may sound like a naive dream, but a different starting position will help. After all, a lot of pain has already been felt, both by consumers due to privacy scandals and 'computer-says-no' practices, and by companies that are finding it increasingly difficult to retain customers. In short, it is high time for a turnaround of CRM systems.

Thijs Pepping is a trend analyst at SogetiLabs’ Research Institute (VINT).

Volkskrant – Do not leave the control of your identity to tech companies (September 2019)

Published @ https://www.volkskrant.nl/columns-opinie/laat-de-grip-op-uw-identiteit-niet-aan-techbedrijf-over~bb68f7aa/

Facial and voice recognition and deepfake technology require laws that protect our identity, say Sander Duivestein, Thijs Pepping and Menno van Doorn, trend analysts at IT service provider Sogeti.

NOS presenter Dionne Stax was recently shocked by the news that she plays the lead role in a sex video: using deepfake technology, her head was glued onto the body of a porn actress. In China, the ZAO app, with which you can make yourself the protagonist of a movie in no time, is a hit. And with an indistinguishable imitation of a CEO's voice, criminals recently managed to loot 220,000 euros.

Everyone can now apply the most bizarre tricks with smart technology. Funny, but at the same time very risky. If measures are not taken quickly, society will suffer massive loss of face, in many respects. It is high time that citizens regained control.

Hollywood trickery becomes a piece of cake

Techniques such as image recognition and voice recognition software have been around for some time, but until recently they were not that advanced. With artificial intelligence getting better and smarter at breakneck speed, playing tricks with someone else's face or voice has become a piece of cake. And it is precisely the combination of technical breakthroughs and the large-scale use of image and speech technology that is so toxic.

Police forces, airlines, smartphone suppliers, football stadiums and fast food restaurants: many organizations are building a smart infrastructure that will no longer allow citizens to live anonymously, physically or digitally. All this facial and image recognition technology is used with or without permission, and often without the public having any idea.

Citizens also have little influence on all these developments. It is no longer possible to take a plane trip without having your face photographed at customs. Hardly anyone is able to protect the data on their own computer, let alone that as a citizen-customer you are in a position to protect your face or voice against, for example, the big technology companies of this world.

Our face is not owned by us

Albert Heijn, a Dutch retailer, proudly presented its new store concept last week, in which smart technology makes it possible to do groceries without scanning products. Cameras register how a customer moves through the store and see which products go into the basket. Sensors on the shelves detect when a customer picks something up or puts it back. The cameras determine the position of customers without facial recognition, the large grocery chain explained. But which law or rule prevents Albert Heijn from making facial payments possible? Our face is no longer our own. By looking into cameras or talking to digital assistants, we are constantly giving away a piece of ourselves.

When faces and voices are massively used or abused for whatever purpose, an identity crisis is lurking. It will soon be difficult to prove who you are when your face and your voice are no longer yours. People who have no privacy and know that they are recognized or followed everywhere tend to suppress certain behaviors or correct themselves toward what is considered permissible behavior. This form of deterrence is legally known as the 'chilling effect': it is freedom subject to reservation.

Punishment for stealing voices

Canadian psychologist and cultural critic Jordan Peterson argues for punishing the theft of a human voice, regardless of whether it is meant to be funny or not. That is his wry conclusion after others created a synthetic version of his voice that could be made to say anything, without his knowledge. Peterson calls for the protection of the voice as an integral part of personal identity.

Facebook disabled the feature that identifies faces in photos this week. Such an action shows that we are still dependent on self-regulation. But we should not leave the grip on our identity to large tech companies. They make billions from consumer data.

Personal freedom should be the starting point for new measures, such as better legislation for the recording, storage, security and use of facial and voice technology. What kind of surveillance society do we want to live in? That is what the public debate should be about. We can no longer afford the same amount of mistakes as earlier in the digitalisation project, because we can easily change a password, but not our own face. In short: it is up to politicians to protect our personal identity.

Sander Duivestein, Thijs Pepping and Menno van Doorn are trend analysts at IT service provider Sogeti.

NRC Handelsblad – Blind spot for radicalization machine (August 2019)

Published @ https://www.nrc.nl/nieuws/2019/08/16/blinde-vlek-voor-radicaliseringsmachine-a3970296

Radicalization | The influence of internet forums is evident, write Sander Duivestein, Menno van Doorn and Thijs Pepping.

A day before the attack in the US border town of El Paso, which killed 22 people, an internal FBI document was leaked. The agency stated that "conspiracy theory-driven domestic extremists" are a growing domestic terrorist threat. The FBI expects the "intensity" of conspiracy theories to increase in the run-up to the 2020 presidential election and fears that more people will be motivated to use violence. The FBI recognizes that conspiracy theories spread faster through the internet and social media, but the agency clearly has a blind spot for the polarizing forces of the internet as a radicalization machine. As long as the FBI continues to view the internet as a neutral catalyst of communication, radicalization and violence will not stop.

According to the FBI, three categories can be distinguished. First, there is a category that is "anti-government": conspiracy theorists believe there is an international elite, a New World Order (NWO), pulling the strings behind the scenes. The United Nations, for example, is supposedly an international conspiracy to undermine American sovereignty. The second category consists of "identity-based" conspiracy theories: Jewish agents secretly control Western countries, and there is said to be a community of radical Muslims waiting to strike. Finally, there is the category of "extraordinary politics". An example is the "Pizzagate" conspiracy theory, the idea that Hillary Clinton is at the head of a pedophile network operating from the basement of a pizza restaurant in Washington. The "QAnon" conspiracy theory also falls into this category. QAnon holds that the president is locked in battle with the deep state, a secret alliance of powerful men who hold the country in an administrative grip behind the scenes.

Politically incorrect internet platforms such as 8chan and Gab, but also 'neutral' platforms such as Facebook, Twitter, WhatsApp and YouTube, contribute strongly to the viral spread of all kinds of conspiracy theories. Through their own visual culture, a reality is created in which anonymous, harsh jokes are made about Jews, Muslims, people of color or women. Conspiracy theorists can add information to these platforms to strengthen or twist the conspiracies. This creates a mishmash of funny memes, far-right and national-populist ideas, and deliberately spread disinformation. That has consequences: online manifestos turn into guides for targeted action. Violence goes viral and becomes a meme.

How far the pitch-black humor goes became clear after the attack in El Paso. 8chan users shared a Google spreadsheet that tracked attacks and ranked them by score: date, location and number of kills. Anders Breivik and Brenton Tarrant, who are celebrated on 8chan as "martyrs", are at the top of the list. Research collective Bellingcat calls this the 'gamification' of mass violence and terrorism, partly because attacks get 'high scores' based on the number of deaths. Moreover, the way in which the attacks are promoted resembles an online game. The perpetrator of the attack in Christchurch, for example, had a camera mounted on his helmet and broadcast the images of his shooting live via Facebook. To the viewer, it looked as if he were playing a first-person-shooter video game. It was a highly deliberate choice to reach as wide an audience as possible, and he succeeded: in the first 24 hours after the attack, Facebook deleted 1.5 million copies of the video worldwide.

Social media are very effective radicalization machines. The recommendation mechanisms on these platforms are not interested in what is true, or in what is healthy for democracy. They only want to hold the customer's attention for as long as possible; after all, that is where the money is. Social media are optimized to stimulate emotions like anger, fear and excitement. These 'active' emotions generate more shares and likes than passive, calm and balanced messages. So social media play on triggered emotions, and the result is polarization. In the technology sector itself, there is plenty of regret.

Guillaume Chaslot, one of the creators of YouTube's recommendation algorithm, is now taking action against his own creation because its recommendations push viewers to extremes. Chris Wetherell, who led the team that built Twitter's retweet button, openly regrets the feature: "We handed a 4-year-old a loaded gun," he said in an interview. Even Mark Zuckerberg, CEO of Facebook, has indicated that the "dictatorship of the likes" must change.

As we stated earlier, the FBI warns of more violence in the run-up to the election. Its focus, however, is on the spread of conspiracy theories and not on the radicalizing mechanisms underlying the violence. At a time when there is a growing consensus on this, from the founders of the internet to academics, concrete action must be taken. This requires more intervention from governments and more accurate analysis by security services. Moreover, stronger intervention from the FBI is desperately needed. The next step is then obvious. Where oversight now focuses only on content, so that you cannot share nude photos, for example, it must also intervene in the architecture of the platforms. The polarizing workings of these platforms must be dismantled in order to deradicalize discussions.


Sander Duivestein, Menno van Doorn and Thijs Pepping are trend analysts at SogetiLabs’ Research Institute (VINT)

Live @ NPO Radio 1 – The impact of the DeepNude app and deepfakes (July 2019)

Published @ https://www.nporadio1.nl/langs-de-lijn-en-omstreken/onderwerpen/506032-nieuwe-app-genereert-neppe-naaktfoto-s

An interview about deepnudes: synthetic media that 'remove' clothes. My take was: of course we need to act against this, but 'you ain't seen nothing yet' when it comes to deepfakes and synthetic media. We need to create more awareness of this social media revolution, which can be summarised as 'the age of synthetic media'.

Het Financieele Dagblad – Deepfakes are a growing threat for our democracy (July 2019)

Published @ https://fd.nl/opinie/1305951/deepfakes-zijn-een-groeiend-gevaar-voor-onze-democratie-ofl1cazj0ZKU

While Dutch politicians are concerned about the use of smart algorithms, technological developments are advancing at a rapid pace. The US Congress, for example, is concerned about the advance of so-called deepfakes. This smart technology makes it possible to quickly and cheaply turn real people into indistinguishable digital puppets that can be made to say the strangest things. Making a fake video with deepfakes is becoming as easy as telling a lie.

With the campaign machines warming up for the 2020 US presidential election, the question is whether spin doctors can resist the temptation of deepfake technology. Deepfakes can become a major threat to our democracy. This calls for a reappraisal of objective journalism.

Last month a video appeared on Facebook of an interview with Nancy Pelosi, the Democratic Speaker of the US House of Representatives, in an apparently intoxicated state. In this fake version of the original interview, the video speed was slightly slowed down and Pelosi's voice was modified, making it look as if she had had one too many. The video was widely shared on social media. A fake interview like this spreads very quickly, and the truth can hardly catch up with the lie. That has far-reaching consequences. Crudely faked images of the alleged kidnapping of a child, for example, led to public lynchings in India, and a deepfake video of President Ali Bongo of Gabon sparked a military coup and an accompanying political crisis.

Deepfake technology is creating a world in which we can no longer trust our eyes and ears. The great danger is not so much that the lie is elevated to truth, but that the credibility of the truth is tarnished. In a future where everything and everyone can be imitated, nothing is real anymore and everything can be dismissed as a lie or fake news. President Trump can already be heard regularly talking about fake news. True or not? It is hardly verifiable. Who or what can we trust then?

In her book The Origins of Totalitarianism (1951), political scientist Hannah Arendt wrote: "The ideal subject of totalitarian rule is not the convinced Nazi or the committed Communist, but people for whom the distinction between fact and fiction, true and false, no longer exists." Mistrust is a precondition for authoritarian regimes. Already 30% of Americans say that the large amount of fake news causes them to consume significantly less news. This creates an apathy toward reality. The use of deepfake technology further fuels this distrust among citizens. In this way, the foundations of our democracy come under heavy pressure.

So something has to be done quickly. Long-standing core values such as objectivity, independence and reliability are more important than ever in this post-reality era. That requires a reappraisal of objective journalism. Journalists should invest heavily in fact-checkers.

The recent role of citizen-journalism network Bellingcat in the investigation into the downing of flight MH17 shows how digital evidence can contribute to uncovering the truth. 3DUniversum, a start-up originating from the University of Amsterdam, and the University of California, Berkeley are working on an antivirus for deepfakes: fake videos will soon be recognized on the basis of eye blinking. An app like Truepic validates images for authenticity features before they are definitively registered in the blockchain.

Technology must be fought with technology. This is possible on the initiative of objective journalists’ platforms within clear frameworks drawn up by the government. In addition, a healthy education in media literacy is important. At the end of last week, the European Council met to discuss the influence of deepfakes and fake news on the recent European elections. A bill has now been submitted in the United States to limit the impact of deepfake technology.

In an era where the new generation has been born with the internet, there is little point in limiting technology. Technology is not right or wrong. It’s about how you deal with it. And we just need to do that in a good way, so that our hard-won democracy can survive.

Sander Duivestein and Thijs Pepping  are trend analysts at SogetiLabs’ Research Institute (VINT)

ICT Magazine – The IT manager as midwife (June 2019)

Published @ https://www.ictmagazine.nl/achter-het-nieuws/de-it-manager-als-vroedvrouw/

Making decisions about IT systems, advising management and ensuring that colleagues' knowledge remains up to date in a world of rapid digital developments: these are just a few activities from the daily practice of the IT manager, which is more diverse than ever. While technology was anything but in the spotlight thirty years ago, its impact on our lives is now daily news, and new dimensions are added every day. Market analyst Gartner, for example, has labeled "digital ethics" as one of this year's strategic trends. The IT manager can no longer ignore it: his or her work has an important ethical dimension that only seems to be growing. Difficult? Not by definition. Use the midwife as a source of inspiration and learn to ask questions like Socrates.

Philosophy

The Greek philosopher Socrates often compared his philosophical method to the work of his mother, who was a midwife. He saw himself as a midwife of knowledge who helped others give birth to true ideas, propositions and opinions. We learn from Socrates that sharp questions quickly reveal where "true knowledge" ends and assumptions, ideals, opinions and beliefs begin. According to Socrates, this is where "evil" lies: people often do wrong things because they base their actions on no knowledge or on wrong knowledge; they act out of ignorance. "The good" thus comes from the discovery of true knowledge.

Digital ethics

What does this mean for us in 2019? We live in a time when digital ethics is becoming increasingly important. Think of smart algorithms that determine in a self-driving car whether the pedestrian or the driver should be spared in a critical situation, or the intelligent assistant that will tell me how to live a healthier life and which passion, partner or career to choose. In the development of new technology, the IT professional is therefore increasingly confronted with ethical issues. At the same time, the examples mentioned are usually still fairly far removed from his or her daily practice. How do you bring these ethical challenges into that daily practice? This is where the role of midwife comes in handy. By asking critical questions, the assumptions, implicit preferences and opinions of IT managers, other employees and the company quickly surface. We distinguish three perspectives about which questions must be asked.

1. A nudge in the right (?) direction

The bright red icon at the top right of the app that unconsciously demands your attention, the disappearance of a natural resting point through endless news feeds, or the automatic playback of the next video. Or what about pre-selecting an option by default when asking for information? These are all examples of so-called nudging, neuromarketing and persuasive technologies that have a major impact on our psychological state and the choices we make in our lives. You could also call it 'hacking the human mind'. It is that nudge in the right direction, or that seductive or even addictive mechanism in the software. Who determines which conscious and unconscious nudges are built into your applications and systems?

2. Prejudices in artificial intelligence

The fruits of artificial intelligence (AI) are now being reaped in abundance. These intelligent systems must also be subjected to the midwife method. After all, we want to avoid practices like Amazon's experimental recruitment system that favored men over women. The same applies to predictive policing, where data that is not critically questioned or cleaned allows discriminatory assumptions from daily practice to end up in the predictive system. Karen Hao of MIT Technology Review identifies three key stages in which biases can creep into AI systems. First, when framing the problem: what is the purpose of the smart lending system? Is it making maximum profit, or providing as many people as possible with sustainable credit? The second stage is collecting data, which may not be representative or may reflect existing biases. In the last stage it is determined which data attributes count and which weight is assigned to them; in short, how the data is applied. As much as we seek a clinical and objective approach to data science and deep learning, human subjectivity continues to play a major role. Correlative insights are quickly misinterpreted as causal connections. The IT manager must ensure that these subjective and ethically charged choices are made explicit and consciously.
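One way to make such choices explicit is to measure them. The sketch below shows a deliberately simple check that compares the share of positive decisions per group (a rough demographic-parity test) on made-up toy records; it illustrates the principle only and is not a complete fairness audit.

```python
# A deliberately simple bias check on toy records: compare the share of positive
# decisions per group (a rough demographic-parity test). Illustration only; the
# data below is made up and a real audit would look at far more than this.
from collections import defaultdict

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

totals = defaultdict(int)
approvals = defaultdict(int)
for record in decisions:
    totals[record["group"]] += 1
    approvals[record["group"]] += int(record["approved"])

rates = {group: approvals[group] / totals[group] for group in totals}
print("approval rate per group:", rates)
print("largest disparity:", max(rates.values()) - min(rates.values()))
```

Whether a disparity like this is acceptable is exactly the kind of ethically charged question the midwife method is meant to surface, not something the code can decide.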

3. Dynamic Ethics

Questions we ask today are very different from those we asked two decades ago. The concept of privacy changes every day; we now talk about ownership of data. Generation Z also teaches us how dynamic ethics is. Twenty years ago we asked about a person's gender without thinking twice; young people born from 1995-2000 onwards no longer like to be pigeonholed. The gender question has become an ethically charged theme. This confirms that 'good' and 'evil', and the much more interesting gray area in between, are subject to significant change. The midwife must therefore be able to continuously acquire new knowledge and insights.

Keep digging

So, ethics for the IT manager. One person calls it a good marketing strategy. Another thinks it is sad that ethical behavior is only being discussed after the internet's thirtieth birthday. Yet another is hopeful that we are at the dawn of an era of humane technology, if we do it right. In April, the European Union announced a first move towards ethical guidelines. It is now also on Facebook's agenda: the company revealed during its developer conference that Instagram is experimenting with hiding the number of likes on posts to alleviate social pressure, and in this way wants to contribute to users' digital wellbeing. There is clearly a lot of movement around digital ethics, and this will also have to happen in the domain of the IT manager. Socrates and his midwifery method are a good aid in this regard. What will your next question be?

Thijs Pepping is a trend analyst at SogetiLabs’ Research Institute (VINT)

 

ICT Magazine – GANs: Tooling of tomorrow (April 2019)

Published @ https://www.ictmagazine.nl/achter-het-nieuws/gans-tooling-van-morgen/

(Image caption) Only one of these people really exists; the rest were made by a GAN from Nvidia. Can the person of flesh and blood be recognized?

The portrait of a vaguely painted man had been hanging at Christie's in New York for a few days. At first glance it did not seem to amount to much. Yet the artwork had one unique feature: its maker was not a human being but a smart algorithm. The proceeds exceeded all predictions. The portrait of the fictional Edmond de Belamy eventually sold for a whopping $432,500.

A painting made entirely with the use of Artificial Intelligence (AI) is of course food for discussion in the art world. The same goes for its impact on ICT. The portrait was created with a revolutionary so-called Generative Adversarial Network (GAN). With it, producing content in all kinds of data forms becomes much faster and easier, and the result also appears better and more creative. Advanced visualisations, for example, are hardly distinguishable from real photography. Yann LeCun, Facebook's AI chief, calls GANs the coolest deep learning technology of the past twenty years. And computer scientist Andrew Ng, known from Baidu and Google Brain, speaks of fundamental progress thanks to GANs. This technology is likely to change the lives of many IT professionals.

As real as possible

AI authority Ian "The GANfather" Goodfellow introduced this smart technology in 2014. A GAN consists of two neural networks: one network is a 'generator' that produces output that is as real as possible; the other is a 'discriminator' that assesses whether the data presented is genuine or counterfeit. One tries to fool the other, and the other does not want to fall for it. In this way they provide each other with continuous feedback, forming a successful variant of unsupervised learning. GANs are extremely well suited to making pictures. With CycleGAN, painting styles from masters such as Monet and Van Gogh are easily transferred to ordinary photos. And with Nvidia's GauGAN, you can quickly and easily draw a photo-realistic landscape in a program similar to Microsoft Paint. It all seems very simple: a matter of feeding in data and photos, some training, and out comes the kind of advanced digital content that software developers today spend weeks, if not months, on.
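For readers who want to see the push-and-pull in code: below is a deliberately tiny GAN sketch in PyTorch that learns to imitate a simple one-dimensional distribution instead of images. It assumes PyTorch is available and is meant purely to illustrate the generator/discriminator feedback loop described above; real image GANs use the same structure with much larger convolutional networks and far more training.

```python
# Tiny GAN sketch in PyTorch, purely to illustrate the generator/discriminator
# feedback loop. It learns to imitate a one-dimensional Gaussian instead of
# images, so it stays short; real image GANs use the same structure with deep
# convolutional networks and far more training.
import torch
import torch.nn as nn

latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 2 + 5            # "real" data: samples around 5
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator: label real samples 1 and generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator classify its output as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

with torch.no_grad():
    samples = generator(torch.randn(1000, latent_dim))
print("generated mean:", samples.mean().item())  # should drift toward ~5
```

The two loss terms are the whole trick: the discriminator is rewarded for telling real and generated samples apart, and the generator is rewarded for making that job impossible.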

A revolution in creativity

Andrew Price, a guru in the field of 3D, states that an average designer spends about 44 to 66 hours designing a virtual building. At an average hourly rate, a turnkey virtual home design costs about $3,900, and the amount for a fully furnished street in a game quickly rises to $200,000. Using GANs, the designer specifies desired parameters such as building height and the number of windows, after which the software generates various options itself. Thanks to the automatically generated variants, the designer no longer has to experiment endlessly. Whether it is a house or the design of a car, app, interface or website, with GANs the development of the digital product becomes a lot easier.

And GANs go even further. Researchers at the University of Michigan, for example, have created a GAN that converts text into realistic photos. The sentence 'bird with yellow belly, black back, brown throat and black head' is converted into a picture of a bird that is as realistic as possible. Who knows, GANs may develop to the point where the app or the presentation image can soon be produced from a few words.

synthetic data

What applies to images also applies to text and other types of data. Privacy-sensitive data, for example, can serve as input for a GAN that generates synthetic data. A good synthetic dataset has two qualities: it is statistically representative of the original data, and the privacy of the people in the original data is protected. In this way, synthetic data can be an answer to issues surrounding data ownership and to offering better and/or free services in exchange for personal data. Data generated by GANs can then be used for medical research, for example, and companies can also monetize synthetic data that can be used to make predictions.

The tooling of tomorrow?

In short, the promise of GANs is great, and not only in terms of speed, creativity and ease of use. The use of GANs also affects our approach to privacy protection and consent to data sharing. The big disadvantage, of course, is that generated content will soon be difficult or impossible to distinguish from real content: a cat-and-mouse game of fooling and recognizing that will not stop anytime soon.

Many Artificial Intelligence researchers are convinced that GAN technology will continue to improve in the coming years. The big question is, of course, how much impact these developments have on our IT management and the relationship between man and machine in the development process. However, the strength of GANs is also their weakness. They make new creations that are in line with the input. A GAN trained with normal dog photos will not ‘come up’ with a dog on a bike all at once. ‘As stupid as a goose’ (Dutch saying, Gans = Goose in Dutch) will soon disappear from the dictionary, but for the time being it will be up to humans to color outside the lines and think outside the proverbial box.

Thijs Pepping is a trend analyst at the think tank of IT service provider Sogeti.

ICT Magazine – Social influencing has outgrown puberty (February 2019)

Published @ https://www.ictmagazine.nl/achter-het-nieuws/social-influencen-is-de-pubertijd-voorbij/

Social influencing has grown from a somewhat laughable digital phenomenon into a gigantic industry in which a lot of money is made. The Economist, for example, calculated that influencers with about 100,000 followers can easily earn a few thousand dollars for a single social media post.

The younger generation doesn't know any better: following and being followed is the most natural thing in the world. It is high time you started to delve deeper into this phenomenon. Why? Not only because these digital attention grabbers are a major economic factor; they will also be your new colleagues. Moreover, they are already the target group your organization's marketers are in dialogue with.

Bringing the world into motion

The power of the influencer industry is evident in the Netflix documentary Fyre. There you can see how influencers are able to mobilize a large audience in a very short time: a party that no one had heard of before was fully booked in two days. In exchange for hundreds of thousands of dollars, top models like Kendall Jenner, Bella Hadid and Elsa Hosk successfully promoted a new music festival on an idyllic island through a few Instagram posts and a slick video. The documentary's subtitle, 'The greatest party that never happened', summarizes the story nicely: the image of a utopian festival was kept up via social media until the very last moment.

The hype and mystery were immense. At the same time, any kind of organizational and communication talent was lacking. Visitors who had paid thousands of dollars for luxury villas had to sleep in discarded storm tents on rain-soaked mattresses, and the high-quality culinary meals turned out to be nothing more than a cheese sandwich. The hilarious tale is a mind-boggling example of the power of social influencers, but also a lesson in organizational strength. Thanks to smart influencer tactics, any company is able to mobilize masses of people. Influencers have outgrown their puberty; they just don't really know what it takes in the physical world to make something work.

Mukbang

Another way to get acquainted with the world of digital attention grabbers is to do your own search through the most important fields where influencers reside. If you are having a meal alone one evening, type Mukbang into YouTube and turn up the volume a bit. In Mukbang videos, food is enjoyed visibly and with loud smacking sounds, and many videos have tens of millions of views. The motivations for watching range from dispelling loneliness and the feeling of eating together, to playful forms such as enjoying an unhealthy meal on video while eating something healthy yourself. The influencers earn money from the number of views, from advertisements, or from viewers buying virtual gifts.

eSports

With Mukbang, the revenue mainly benefits the influencer and the platform. That is different with eSports, a domain in which many surrounding parties earn money, similar to the traditional physical sports world in which managers, sponsors, players, directors and merchandisers, among others, make money. There is even a dark side, with match fixing, doping and gender-related issues. Research firm Newzoo predicts that eSports revenue will continue to grow, from $1.17 billion in 2019 to $1.65 billion in 2021. More than 200 colleges and universities in the United States now offer scholarships for eSporters. And while the last Super Bowl drew a total of about 101 million viewers, the 2018 League of Legends eSports final saw more than 200 million fans watch their heroes battle it out.

Micro Influencers

The ordinary teenager can also earn money from his or her followers. In an interview with The Atlantic, a 13-year-old American teenager talks about her dissatisfaction with an old-fashioned job: it takes a lot of training and you sometimes have to travel far from home to earn relatively little money. It seems like a lot of hassle compared to her new gig: "You don't have to make more than a simple posting. That's easily done. That one post can easily bring in about $50." In a month, this young micro-influencer has earned a few hundred dollars with her 8,000 followers. Research also shows that 96.5 percent of everyone who wants to become a YouTube star does not generate enough income to rise above the poverty line. So the reality is less rosy than it seems.

Bullshit detector

This year, 32 percent of the world's population will belong to Generation Z, the successors to the millennials. The first of them are now entering the labor market. They have the power to buy and the power to walk away, and they are able to shake things up in your organization. At least, if you manage to interest these talents in your company. More and more organizations are using influencers in the 'war for talent', and our country now has numerous influencer agencies that can help you on your way with your own influencer approach, from home-grown influencers from your own workforce to a perfect match with a YouTube personality. As in the documentary Fyre, the danger lies in the tension between promise and delivery. Authenticity seems to be the magic word: this generation judges and condemns you with its apparently razor-sharp bullshit detector.

Every generation brings new insights. The current new generation has the ability to influence in a way that you may still need to get used to, but for them it is the most normal thing in the world. So prepare for it, and go further in your deep-dive than just applying a few influencer tricks. Documentaries and YouTube channels abound.

Menno van Doorn and Thijs Pepping are both trend analysts at VINT, SogetiLabs' New Technology Exploration Institute.

 

Algemeen Dagblad – A virtual purpose advisor on your smartphone (January 2019)

Published @ AD January 2019

 

It would be good to focus on how we can use our smartphone for the art of living, thinks Thijs Pepping. A virtual advisor can give us insight into the way we use our mobile phone.

Stop smoking, cut down on alcohol and take part in the Strong Viking challenges: most New Year's resolutions have become clichés. Those who are hip announced a monthly digital detox at the start of the new year, the newcomer on the list of lifestyle trends.

But being away from your mobile for a few days is just as useless as a weekend in Paris to forget about work stress. This kind of digital abstinence leads to yo-yo effects without addressing the underlying problem. The smartphone has a lot of potential to enrich our lives, but at the moment it is a glorified slot machine. Digital detoxing clearly shows that we are not done yet: how do you live well with a phone in your pocket? Of course, the definition of 'the good life' is different for everyone.

Yet ultimately, scientists tell us, we want human happiness; people want to experience meaning in life. Quite a job, with the extra challenge that YouTube and Facebook are mainly focused on holding our attention, regardless of whether the time and attention spent contribute to our life goals.

As strange as it may sound, technology can help us with the philosophical art of living. If we are able to look at our own data with an open attitude, we gain insight into our behaviour and thoughts. Install a digital life coach and experiment with it. Such a virtual advisor confronts and reflects. The 'Woebot' checks in every day to see how you are doing, and the chatbot asks probing questions and confronts you. Another counselor is 'Ellie', a virtual therapist who helps veterans talk about their post-traumatic stress disorder. The more data these chatbots get, the more accurate they become.

Ethics and technology philosopher James Williams argues that technology should become a navigation system for our life goals. For example, the system could advise you to exchange YouTube time for sports and games. But we are far from that point; a lot still has to change. First we need to ensure that all that manipulative computing power in the smartphone is also used for happiness and meaning.

A first simple step is to assess which apps demand our attention and for what purpose. After the inevitable app cleanup, it is time to see how we can actively use the smartphone to make a true work of art of our lives.

Thijs Pepping is a trend analyst at Sogeti’s New Technologies think tank (VINT).