

Social (Net)Work: What can A.I. catch — and where does it fail miserably?

Criticism over hate speech, extremism, fake news, and other content that violates community standards has the largest social media networks strengthening policies, adding staff, and reworking algorithms. In the Social (Net)Work series, we explore social media moderation, looking at what works and what doesn’t, while examining possibilities for improvement. From a video of a suicide victim on YouTube to ads targeting “Jew haters” on Facebook, social media platforms are plagued by inappropriate content that manages to slip through the cracks.

In many cases, the platform’s response is to implement smarter algorithms to better identify inappropriate content. But what is artificial intelligence really capable of catching, how much should we trust it, and where does it fail miserably? “A.I. can pick up offensive language and it can recognize images very well. The power of identifying the image is there,” says Winston Binch, the chief digital officer of Deutsch, a creative agency that uses A.I. in creating digital campaigns for brands from Target to Taco Bell. “The gray area becomes the intent.”

A.I. can read both text and images, but accuracy varies

Using natural language processing, A.I. can be trained to recognize text across multiple languages. A program designed to spot posts that violate community guidelines, for example, can be taught to detect racial slurs or terms associated with extremist propaganda.
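To make that concrete, here is a minimal sketch of the kind of text classifier such a system might start from, assuming a small hand-labeled set of posts. The example phrases, labels, and model choice are ours for illustration, not any platform’s actual pipeline.

```python
# Minimal sketch: a bag-of-words classifier that flags posts for review.
# The training examples and labels below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    "I will hurt you if you show up here",      # violates guidelines
    "people like you don't deserve to exist",   # violates guidelines
    "great match last night, what a goal",      # fine
    "anyone have tips for visiting Kyoto?",     # fine
]
train_labels = [1, 1, 0, 0]  # 1 = flag for human review, 0 = leave alone

# TF-IDF features plus logistic regression: a common, simple baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_posts, train_labels)

new_posts = ["you don't deserve to exist", "what a great goal"]
print(model.predict_proba(new_posts)[:, 1])  # probability each post should be flagged
```

A production system would be trained on millions of labeled examples across many languages and paired with human review, but the overall shape is the same: turn text into features, learn a decision boundary, score new posts.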

A.I. can also be trained to recognize images, to prevent some forms of nudity or recognize symbols like the swastika. It works well in many cases, but it isn’t foolproof.

For example, Google Photos was criticized for tagging images of dark-skinned people with the keyword “gorilla.” Years later, Google still hasn’t found a solution for the problem, instead choosing to remove the program’s ability to tag monkeys and gorillas entirely. Algorithms also need to be updated as a word’s meaning evolves, or to understand how a word is used in context. For example, LGBT Twitter users recently noticed a lack of search results for #gay and #bisexual, among other terms, leading some to feel the service was censoring them.

Twitter apologized for the error, blaming it on an outdated algorithm that was falsely identifying posts tagged with the terms as potentially offensive. Twitter said its algorithm was supposed to consider the term in the context of the post, but had failed to do so with those keywords.

A.I. is biased

The gorilla tagging fail brings up another important shortcoming — A.I. is biased. You might wonder how a computer could possibly be biased, but A.I. is trained by watching people complete tasks, or by inputting the results of those tasks.

For example, programs to identify objects in a photograph are often trained by feeding the system thousands of images that were initially tagged by hand. That human element is what makes it possible for A.I. to complete tasks previously impossible with typical software, but it also inadvertently passes human bias to the computer.

An A.I. program is only as good as the training data — if the system was largely fed images of white males, for example, the program will have difficulty identifying people with other skin tones. “One shortcoming of A.I., in general, when it comes to moderating anything from comments to user content, is that it’s inherently opinionated by design,” said PJ Adelberg, the executive technical director of Stink Studios New York, an agency that uses A.I. for creating social media bots and moderating brand campaigns. Once a training set is developed, that data is often shared among developers, which means the bias spreads to multiple programs.

Adelberg says that because programs often draw on multiple shared A.I. systems, developers cannot easily modify the underlying data sets, making it difficult to remove biases once they are discovered.
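The training-data problem is easy to demonstrate with a toy simulation. The sketch below uses entirely synthetic data (no real face dataset): a classifier is trained on examples dominated by one group, and its accuracy on the under-represented group tends to suffer.

```python
# Synthetic illustration of training-set bias; the "groups" and features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate toy feature vectors and labels for one demographic group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 8))
    y = (X[:, 0] + rng.normal(scale=0.5, size=n) > shift).astype(int)
    return X, y

# Group A dominates the training data; group B is barely represented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression(max_iter=1000).fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equal-sized samples from each group.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
# Accuracy for the under-represented group is typically noticeably lower.
```

Real-world bias is subtler than this, but the mechanism is the same: a model can only learn the patterns its training set contains.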

A.I. cannot determine intent

A.I. can detect a swastika in a photograph — but the software cannot determine how it is being used. Facebook, for example, recently apologized after removing a post that contained a swastika but was accompanied by a text plea to stop the spread of hate. This is an example of the failure of A.I. to recognize intent.

Facebook even tagged a picture of the statue of Neptune as sexually explicit. Additionally, algorithms may unintentionally flag photojournalistic work because of hate symbols or violence that may appear in the images. Historic images shared for educational purposes are another example — in 2016, Facebook caused a controversy after it removed the historic “napalm girl” photograph multiple times before pressure from users forced the company to change its hardline stance on nudity and reinstate the photo.

A.I. tends to serve as an initial screening, but human moderators are often still needed to determine whether content actually violates community standards. Despite improvements to A.I., that is not changing anytime soon. Facebook, for example, is increasing the size of its review team to 20,000 this year, double last year’s count.

A.I. is helping humans work faster

A human brain may still be required, but A.I. has made the process more efficient.

A.I. can help determine which posts require a human review, as well as help prioritize those posts. In 2017, Facebook shared that A.I. designed to spot suicidal tendencies had resulted in 100 calls to emergency responders in one month. At the time, Facebook said that the A.I. was also helping determine which posts see a human reviewer first.
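As a rough illustration of what “A.I. decides which posts a human sees first” can look like, here is a hedged sketch of a triage queue. The score_post() risk model, keywords, and threshold are hypothetical placeholders standing in for a trained classifier; none of this reflects Facebook’s actual system.

```python
# Sketch: score posts with a (placeholder) risk model and queue them so that
# human reviewers always see the highest-risk post first.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedPost:
    neg_risk: float                      # negated so the riskiest post pops first
    post_id: str = field(compare=False)
    text: str = field(compare=False)

def score_post(text: str) -> float:
    """Placeholder risk model; a real system would use a trained classifier."""
    risky_terms = ("hurt myself", "end it all")
    return 1.0 if any(term in text.lower() for term in risky_terms) else 0.1

review_queue = []

def ingest(post_id, text, auto_escalate_at=0.9):
    risk = score_post(text)
    if risk >= auto_escalate_at:
        print(f"{post_id}: escalate immediately to an on-call reviewer")
    heapq.heappush(review_queue, QueuedPost(-risk, post_id, text))

ingest("p1", "Great game last night!")
ingest("p2", "I just want to end it all.")
print("next post a human sees:", heapq.heappop(review_queue).post_id)  # p2
```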


“[A.I. has] come a long way and it’s definitely making progress, but the reality is you still very much need a human element verifying that you are modifying the right words, the right content, and the right message,” said Chris Mele, the managing director at Stink Studios. “Where it feels A.I. is working best is facilitating human moderators and helping them work faster and on a larger scale. I don’t think A.I. is anywhere near being 100 percent automated on any platform.”

A.I. is fast, but the ethics are slow

Technology, in general, tends to grow at a rate faster than laws and ethics can keep up — and social media moderation is no exception.

Binch suggests this could mean increased demand for employees with a background in the humanities or ethics, something most programmers don’t have.

As he put it, “We’re at a place now where the pace, the speed, is so fast, that we need to make sure the ethical component doesn’t drag too far behind.”


These are the phone trends that will dominate 2018

The last 12 months saw the release of some of the most desirable smartphones ever conceived. From Samsung’s curved Galaxy S8 to Apple’s face-tracking iPhone X, to Google’s astonishing Pixel 2 camera smarts — 2017 has been a very good year for smartphones. But there’s always more ahead.

As we stride forth into 2018, we’ve been speculating about what the next dozen months might have in store for us. There will be further refinements, mass adoptions of certain trends, and a whole new batch of tempting handsets. Here’s what we expect to see this year in smartphones.

Under-glass fingerprint sensors

One of the strongest mobile trends in 2017 was the shift towards bezel-less phones with screens that span from edge to edge.

Coupled with a fresh 18:9 aspect ratio, which we expect will be standard from now on, this allowed manufacturers to pack more screen into devices that could still be used one-handed. But this trend necessitated the displacement of the fingerprint sensor to the back of the phone, or, in the case of Apple’s iPhone X, its disappearance. There’s another solution – manufacturers could put the fingerprint sensor under the display.

We’ve seen this kind of technology from Qualcomm and Synaptics, and both Apple and Samsung have been the subject of related rumors. We expect at least one top-tier smartphone maker to roll it out in a 2018 flagship. If it works well enough, wider adoption is sure to follow.

Facial unlocking tech

Apple wasn’t first with the idea of Face ID, but it did improve on existing facial recognition tech in phones with biometric authentication that’s secure, fast, and usable in a variety of lighting conditions.

We’re not sure if facial recognition is the new security standard, but we are sure that we’re set to see more of it this year. Many manufacturers offer some form of facial recognition already, though it’s generally not as secure as Apple’s Face ID. Samsung also offers iris scanning, which is more secure, but not as fast and convenient to use.


It seems likely Apple will roll Face ID out in more iPhones this year, and we think all the top manufacturers will feel compelled to follow suit with something similar.

As more people use it, facial unlocking tech should improve, but we think it will remain one of several biometrics on offer, rather than the only one.

More augmented reality baked into your phone

Augmented reality has been around for a long time, but we’ve yet to see a truly killer app. With increasingly powerful smartphones and big plays in the form of Google’s ARCore and Apple’s ARKit, that could change in 2018. There are some fun ARKit apps on iOS already, and Google Lens shows how AR might merge with the artificially-intelligent assistant in your phone.

We think there’s plenty more augmentation to come, and AR could take off in a big way this year.

Dual cameras on every phone

For anyone who loves to be able to zoom in on their subject, or achieve a blurred background bokeh effect that emulates a DSLR, dual cameras have delivered in 2017. We’ve seen great examples in devices like Samsung’s Galaxy Note 8, Apple’s iPhone X, and the Huawei Mate 10 Pro. A dual camera is fast-becoming an expectation, and not one that’s confined to the flagship fleet, as evidenced by the Moto G5S Plus with its dual 13-megapixel snappers.

We expect to see dual cameras of varying quality in a host of smartphones this year, although we don’t feel they’re essential – the single-lens Pixel 2 XL is our current pick for the best smartphone camera.

Wireless charging becoming a staple

We’ve long enjoyed the advantages of wireless charging, not least the ability to pop your phone on the bedside table in the dark without any fumbling with cables and have it fully charged the next morning. Now that the top two smartphone manufacturers, Samsung and Apple, have embraced it, we think wireless charging will become a standard expectation.

We’re also starting to see many more great options for wireless charging pads. Even more exciting is the prospect of wireless charging across distance. We’ve seen a few different technologies pursuing this over the last few years.

Could 2018 be the year that we finally see a working example in a mainstream phone? Probably not, but we can hope.

Artificial intelligence could make life easier

Google’s Pixel 2 and 2 XL represent the current pinnacle of software smarts, with artificial intelligence lending a hand to create better photos, recognize objects, and help you schedule your day-to-day. We’re also seeing a big AI play from Huawei with the dedicated Neural Processing Unit (NPU) in its proprietary Kirin chip, which it hopes will handle mundane, unconscious decisions for us so we can get on with our lives.

Amazon is also trying to get Alexa into as many smartphones as possible.

It can be tricky to cut through the hyperbole with AI, but there is real potential here, and it’s something that every smartphone manufacturer is working on. We’re sold on the possibilities, but we hope to see more concrete examples of AI in our phones actually benefitting us in 2018.

Foldable displays

The concept of a foldable smartphone has been around for a few years now. What if our regular smartphone could fold out to the size of a small tablet?

Or maybe people would like a phone that folds down like an old clamshell for greater portability. Thanks to some patent filings, rumors suggest the Samsung Galaxy X may include this feature, but we don’t think you should hold your breath. Developments in foldable displays could well enable new designs and shapes, and greater durability, but we’re not really expecting a flurry of foldable phones in 2018.

Towards the end of 2017, the ZTE Axon M took a step in this direction by combining two 5.2-inch displays with a hinge, but it failed to impress and didn’t feel useful. An actual folding display would surely look better, but is it something we need? We’re not convinced that there’s a compelling reason for a device like this, and if we do see one in 2018 it’s likely to be a novelty.

Better batteries

We hope that battery life will increase every year, but all too often efficiency gains are squandered by increasingly svelte designs.

One area where battery tech in smartphones has notably improved is the speed of charging, and we think that will continue in 2018.

There’s no end of exciting research into how to squeeze more out of lithium-ion batteries, or replace them with something superior, but we’ve been to enough rodeos now to know better than to predict it will happen this year.


Truly creative A.I. is just around the corner. Here’s why that’s a big deal

Joe Kennedy, father of the late President John F. Kennedy, once said that, when shoeshine boys start giving you stock tips, the financial bubble is getting too big for its own good.

By that same logic, when Hollywood actors start tweeting about a once-obscure part of artificial intelligence (A.I.), you know that something big is happening, too. That’s exactly what occurred recently when Zach Braff, the actor-director still best known for his performance as J.D. on the medical comedy series Scrubs, recorded himself reading a Scrubs-style monolog written by an A.I.

“What is a hospital?” Braff reads, adopting the thoughtful tone J.D. used to wrap up each episode in the series. “A hospital is a lot like a high school: the most amazing man is dying, and you’re the only one who wants to steal stuff from his dad. Being in a hospital is a lot like being in a sorority. You have greasers and surgeons. And even though it sucks about Doctor Tapioca, not even that’s sad.” Yes, it’s nonsense — but it’s charming nonsense.

Created by Botnik Studios, which recently used the same statistical predictive tools to write an equally bonkers new Harry Potter story, the A.I. mimics the writing style of the show’s real scripts. It sounds right enough to be recognizable but wrong enough to be obviously the work of a silly machine, like the classic anecdote about the early MIT machine translation software that translated the Biblical saying “The spirit is willing, but the flesh is weak” into Russian and back again, ending up with “The whisky is strong, but the meat is rotten.” As Braff’s publicizing of the Scrubs-bot shows, the topic of computational creativity is very much in vogue right now.

Once the domain of a few lonely researchers, trapped on the fringes of computer science and the liberal arts, the question of whether a machine can be creative is everywhere. Alongside Botnik’s attempts at Harry Potter and Scrubs, we’ve recently written about a recurrent neural network (RNN) that took a stab at writing the sixth novel in the Song of Ice and Fire series, better known to TV fans as Game of Thrones. The RNN was trained for its task by reading and analyzing the roughly 5,000 pages of existing novels in the series.
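For readers curious what “trained by reading and analyzing the novels” means mechanically, here is a toy character-level RNN in PyTorch. It follows the same recipe (predict the next character, then sample), but it is trained on a couple of placeholder sentences rather than 5,000 pages, so treat it as a sketch of the technique rather than a reproduction of that project.

```python
# Toy character-level RNN: learn to predict the next character, then sample text.
import torch
import torch.nn as nn

text = "winter is coming. the night is dark and full of terrors. "  # stand-in corpus
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, 32)
        self.rnn = nn.GRU(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, h=None):
        out, h = self.rnn(self.embed(x), h)
        return self.head(out), h

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()

# Inputs are the text; targets are the same text shifted one character ahead.
x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
for step in range(300):
    logits, _ = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# Generate new text one character at a time by sampling from the model.
idx, h, out = torch.tensor([[stoi["w"]]]), None, "w"
for _ in range(80):
    logits, h = model(idx, h)
    probs = torch.softmax(logits[0, -1], dim=-1)
    idx = torch.multinomial(probs, 1).view(1, 1)
    out += chars[idx.item()]
print(out)
```

Scale the corpus up to a few thousand pages and the model up accordingly, and you get output that imitates an author’s style while making very little sense, which is exactly the charm.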

Larger companies have gotten in on the act, too. Google’s Deep Dream project, for example, purposely magnifies some of the recognition errors in the company’s deep learning neural networks to create wonderfully trippy effects.


Right now, we’re at the “laughter” stage of computational creativity for the most part. That doesn’t have to mean outright mocking A.I.’s attempts to create, but it’s extremely unlikely that, say, an image generated by Google’s Deep Dream will hang in an art gallery any time soon — even if the same image painted by a person might be taken more seriously. It’s fair to point out that today’s machine creativity typically involves humans making some of the decisions, but the credit isn’t split between the two in the same way that it would be for a movie written by two authors.

Rightly or wrongly, we give A.I. the same amount of credit in these scenarios that we might give to the typewriter War and Peace was written on. In other words, very little.

But that could change very soon. Because computational creativity is doing a whole lot more than generating funny memes and writing parody scripts. NASA, for example, has employed evolutionary algorithms, which mimic natural selection in machine form, to design satellite components.

These components work well — although their human “creators” are at a loss to explain exactly how. Legal firms, meanwhile, are using A.I. to formulate and hone new arguments and interpretations of the law, which could be useful in a courtroom. In medicine, the U.K.’s University of Manchester is using a robot called EVE to formulate hypotheses for future drugs, devise experiments to test these theories, physically carry out these experiments, and then interpret the results.
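Of those examples, the evolutionary approach is the easiest to show in miniature: encode a design as a vector of numbers, mutate it, keep the fittest candidates, and repeat. In the sketch below the fitness function is a made-up stand-in for the physics simulator that would score a real satellite component.

```python
# Minimal evolutionary algorithm: mutation plus selection over candidate "designs".
import random

GENES = 10                       # number of design parameters per candidate
POP, GENERATIONS, MUT = 50, 200, 0.1

def fitness(design):
    # Toy objective (placeholder): parameters should sum to 5 with little spread.
    return -abs(sum(design) - 5.0) - (max(design) - min(design))

def mutate(design):
    return [g + random.gauss(0, MUT) for g in design]

population = [[random.uniform(-1, 1) for _ in range(GENES)] for _ in range(POP)]

for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 5]                       # keep the fittest 20%
    children = [mutate(random.choice(parents)) for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best fitness:", round(fitness(best), 4))
```

The unsettling part, as with NASA’s components, is that nothing in this loop explains why the winning design works; it only tells you that it does.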

IBM’s “Chef Watson” utilizes A.I. to generate its own unique cooking recipes, based on a knowledge of 9,000 existing dishes and an awareness of which chemical compounds work well together. The results are things like Turkish-Korean Caesar salads and Cuban lobster bouillabaisse that no human chef would ever come up with, but which taste good nevertheless. In another domain, video game developer Epic Stars recently used a deep learning A.I. to compose the main theme for its new game Pixelfield, which was then performed by a live orchestra.


Finally, newspapers like the Washington Post are eschewing sending human reporters to cover events like the Olympics, instead letting machines do the job.

To date, the newspaper’s robo-journalist has written close to 1,000 articles. Which brings us to our big point: Should a machine’s ability to be creative serve as the ultimate benchmark for machine intelligence? Here in 2017, brain-inspired neural networks are getting bigger, better, and more complicated all the time, but we still don’t have an obvious test to discern when a machine is finally considered intelligent.

While it’s not a serious concern of most A.I. researchers, the most famous test of machine intelligence remains the Turing Test, which suggests that if a machine is able to fool us into thinking it’s intelligent, we must therefore agree that it is intelligent. The result, unfortunately, is that machine intelligence is reduced to the level of an illusionist’s trick — attempting to pull the wool over the audience’s eyes rather than actually demonstrating that a computer can have a mind.

An alternative approach is an idea called the Lovelace Test, named after the pioneering computer programmer Ada Lovelace. Appropriately enough, Ada Lovelace represented the intersection of creativity and computation — being the daughter of the Romantic poet Lord Byron, as well as working alongside Charles Babbage on his ill-fated Analytical Engine in the 1800s. Ada Lovelace was impressed by the idea of building the Analytical Engine, but argued that it would never be considered capable of true thinking, since it was only able to carry out pre-programmed instructions.

“The Analytical Engine has no pretensions whatever to originate anything,” she famously wrote. “It can do [only] whatever we know how to order it to perform.” The broad idea of the Lovelace Test involves three separate parts: the human creator, the machine component, and the original idea. The test is passed only if the machine component is able to generate an original idea, without the human creator being able to explain exactly how this has been achieved.

At that point, it is assumed that a computer has come up with a spontaneous creative thought. Mark Riedl, an associate professor of interactive computing at Georgia Tech, has proposed a modification of the test in which certain constraints are given — such as “create a story in which a boy falls in love with a girl, aliens abduct the boy, and the girl saves the world with the help of a talking cat.” “Where I think the Lovelace 2.0 test plays a role is verifying that novel creation by a computational system is not accidental,” Riedl told Digital Trends. “The test requires understanding of what is being asked, and understanding of the semantics of the data it is drawing from.” It’s an intriguing thought experiment.

This benchmark may be one that artificial intelligence has not yet cracked, but surely it’s getting closer all the time. When machines can create patentable technologies, dream up useful hypotheses, and potentially one day write movie scripts that will sell tickets to paying audiences, it’s difficult to call their insights accidental. To borrow a phrase often attributed to Mahatma Gandhi, “First they ignore you, then they laugh at you, then they fight you, then you win.” Computational creativity has been ignored.

Right now, either fondly or maliciously, it is being laughed at. Next it will start fighting our preconceptions — such as the kinds of jobs which qualify as creative, which are the roles we are frequently assured are safe from automation. And after that?

Just maybe it can win.


Instead of stealing jobs, what if A.I. just tells us how to do them better?

In the early part of the twentieth century, a management consultant and mechanical engineer named Frederick Taylor wrote a book titled The Principles of Scientific Management. Workplace inefficiency, Taylor’s book argued, was one of the greatest crimes in America, robbing workers and employers alike of the levels of prosperity they deserved. For example, Taylor noted the “deliberate loafing” the bricklayers’ union of the time forced on its workers by limiting them to just 275 bricks per day when working on a city contract, and 375 per day on private work.

Taylor had other ideas. In the interests of efficiency, he believed that every single act performed by a workforce could be measured and modified to make it more efficient, “as though it were a physical law like the Law of Gravity.” Others took up Taylor’s dream of an efficient, almost mechanized workforce.

Contemporaries Frank and Lillian Gilbreth studied the science of bricklaying, introducing ambidexterity and special scaffolds designed to reduce lifting. The optimal number of motions bricklayers were told to perform was pared down to between two and five depending on the job, and new measures were introduced to keep track of the number of bricks an individual laid — to both incentivize workers and reduce wastage.

Like many management theories, Taylorism had its moment in the sun, before being replaced. Today, however, its fundamental ideas are enjoying a surprising resurgence. Aided by the plethora of smart sensors and the latest advances in artificial intelligence, it’s now possible to monitor workers more closely than ever, and offer them real-time feedback in a way that no (human) manager ever could.

A recent study from the University of Waterloo showed how motion sensors and A.I. can be used to extract insights from expert bricklayers by equipping them with sensor suits while they worked to build a concrete wall. The study discovered that master masons don’t necessarily follow the standard ergonomic rules taught to novices. Instead, they employ movements (such as swinging, rather than lifting, blocks) that enable them to work twice as fast with half the effort.

“As we all know, [an] ageing workforce is a threat to the national economy,” researcher Abdullatif Alwasel told Digital Trends. “In highly physical work, such as masonry, the problem lies in the nature of work. Masonry is highly physical and repetitive work: two major factors that are known to cause musculoskeletal injuries. However, when this kind of work is done in an ergonomically safe way, it doesn’t cause injuries.

This is apparent through the percentage of injuries in expert workers versus novice or less experienced workers. [Our team’s] work looks at using A.I. to extract safe postures that expert workers use to perform work safely and effectively as a first step towards creating a training tool for novice workers to graduate safe and effective masons and to decrease the number of injuries in the trade.”

Alwasel describes the team’s current work as a “first step.” By the end of the project, however, they hope to be able to develop a real-time feedback system which alerts workers whenever they use the wrong posture. Thanks to the miniaturization of components, it’s not out of the question that such a sensor suit could one day be used on construction sites across America. As with Taylor’s dream, both workers and employers will benefit from the enhanced levels of efficiency.

“Our next step is to find out whether the concept of expert safe workers applies to other trades that have similar situation,” Alwasel said. “I think commercialization is a final step that has to be done to make use of this technology and we are looking for ways to do that.”

Objects that nudge back

It should be noted, however, that the classical concept of Taylorism is not always viewed entirely favorably. Critics point out that it robbed individuals of their autonomy, that it made jobs more rote and repetitive, that it could adversely affect workers’ wellbeing by pushing them to work too fast, and that it assumed speed and efficiency were the ultimate goal of… well, everything really.

It’s difficult to criticize a project like the University of Waterloo’s, which is focused on reducing injuries among the workforce. However, this same neo-Taylorist approach can be seen throughout the tech sector. In Amazon’s warehouses, product pickers (or “fulfillment associates”) are given handheld devices, which reveal where individual products are located and, via a routing algorithm, tell them the shortest possible journey to get there.

However, they also collect constant, real-time streams of data concerning how fast employees walk and complete individual orders, thereby quantifying productivity. Quoted in an article for the Daily Mail, a warehouse manager described workers as “sort of like a robot, but in human form.” Similar technology is increasingly used in warehouses around the world. It’s not just Amazon, either.
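The “shortest possible journey” piece is the most transparent part of such a system. A real warehouse router is far more sophisticated, but a greedy nearest-neighbor pass over item coordinates, as in this hypothetical sketch, captures the basic idea of turning a pick list into a walking order.

```python
# Hypothetical pick-routing sketch: visit items greedily, nearest location first.
from math import dist

# (aisle, shelf) grid coordinates for each item on an invented pick list
ITEM_LOCATIONS = {"toaster": (2, 14), "headphones": (7, 3), "book": (2, 5), "kettle": (9, 12)}

def pick_route(start, items):
    """Return the order in which to collect items, always walking to the nearest one left."""
    route, pos, remaining = [], start, dict(items)
    while remaining:
        name = min(remaining, key=lambda n: dist(pos, remaining[n]))
        route.append(name)
        pos = remaining.pop(name)
    return route

print(pick_route(start=(0, 0), items=ITEM_LOCATIONS))
# ['book', 'headphones', 'kettle', 'toaster'] for these coordinates
```

The same device that computes the route can, of course, timestamp every step along it, which is where the productivity tracking described above comes from.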

A company called CourseSmart creates study aids that allow teachers to see whether their students are skipping pages in their textbooks, failing to highlight passages or take notes, or plain not studying. This information, even when it concerns students’ out-of-lesson time, can be fed back to teachers. The dean of one university’s school of business described the service to the New York Times as “Big Brother, sort of, but with a good intent.” The idea is to find out exactly what practices produce good students, and then to nudge them toward those practices.

These “nudges” form an increasingly large part of our lives. Rather than the subtle nudges of previous “dumb” objects (for example, the disposability of a plastic cup, which starts disintegrating after a few uses and therefore encourages you to throw it away), today’s smart technology means that we can be given constant feedback on everything from our posture to which route to take to the bathroom for a quicker toilet break to how best to study. Autonomous technology challenges the autonomy of individuals.

Whether that’s a bad thing or not depends a whole lot on your perspective. In Sarah Conly’s Against Autonomy, the author argues that we should “save people from themselves.” It’s part of a larger argument that may begin with technology to modify how you work, continue to the banning of cigarettes and excessively sized meals, and maybe even extend to spending too much of your paycheck without making the proper savings.

There are no easy answers here. As with so much of modern technology (news feeds that show us only articles they think will be of interest, smart speakers in the home, user data exchanged for “free” services, etc.), a lot depends on what we gain versus what we lose. We might be very willing to have a smart exoskeleton that tells us how not to damage our backs when lifting heavy bricks.

We may be less so if we feel that our humanity is minimized by the neverending push toward efficiency. What’s not in question is whether the tools now exist to help make this neo-Taylorism a reality. They most certainly do.

Now we need to work out how best to use them.

To paraphrase the chaos theory mathematician Dr. Ian Malcolm (also known as Jeff Goldblum’s character in Jurassic Park), we’ve been so preoccupied with whether or not we could achieve these things, we haven’t necessarily thought enough about whether we should.


Pepper is everywhere in Japan, and nobody cares. Should we feel bad for robots?

Whether it’s robots or smartphones, AI or premium audio products, Japan has always been at the forefront of any conversation about technology. We recently spent several weeks in Tokyo discovering what some of the biggest names in new tech are creating, while also taking advantage of the exciting location to test out the best smartphone cameras and discover the charm of its popular tech-tourism destinations. Make sure to check out other entries in our series “Modern Japan.”

Pepper the robot has consistently hit headlines since its introduction several years ago. The humanoid robot is a surprisingly regular sight in Japan, despite being a rarity in the U.S. and Europe. Created by mobile technology mega-corp SoftBank, Pepper was on duty not only in SoftBank’s many stores, but also helping out the public in other places, and we were pleased to see it.

Sharing space with an actual, functioning robot on an almost daily basis was like our wildest sci-fi dreams come true. Pepper is the real deal. It carries out increasingly complex tasks, can verbally interact with you, and can even read your facial expressions to judge how you’re feeling.

It has been employed in various jobs over the years, from being a priest to selling phones. Watching Pepper out in public gave us a glimpse of a joyous robot-filled future, but also grimly reminded us about the danger of instilling automatons with personality — real or imagined. It gives them the power to play with our emotions, as it’s worryingly easy to forget they’re not human.

Should this put us off chatting, and potentially caring about, robots?

Pepper’s willing to help

Pepper first greeted us at the monorail station at Haneda airport in Tokyo, where it was dressed in a stationmaster’s cap and jacket, providing information to anyone who stopped to ask. We were in a hurry, just like everyone else around Pepper, and an endless stream of people moved past the ‘bot, as it moved around in the crowds. We didn’t see anyone stop and chat.


The next time we came across Pepper was in a high-end karaoke establishment in Roppongi, where the robot was greeting eager singers as they entered.

No doubt Pepper was willing to provide more information if asked, but not only did no one stop, they didn’t even acknowledge the robot when they walked past. Pepper can be found everywhere from inside banks to hanging out in colleges. In conversations with friends, we were told about Pepper hanging out in dentists’ offices and at golf courses.

It’s probably not performing any oral surgery, though. SoftBank stores commonly have a Pepper on duty.

In one large store near Shibuya, three Pepper robots lined up just inside the entrance, all displaying text on the chest-mounted screens. Eyes eagerly scanning faces, all the Pepper robots made eye contact and sometimes calmly spoke Japanese to anyone walking in, but they were largely ignored. Over the next few weeks, we spotted Pepper often staring out from behind store windows, looking at the world passing it by.

It was a rather lonely, expectant sight, like a child with its face pressed up against a window, waiting for a parent to return from a long day at work. That was sad enough, but worse was seeing Pepper powered down. Its head tilts towards the ground, its arms hang limply by its side, and the screen shows nothing more than a black void.

Motionless, Pepper stands in the corner like just another piece of electronic equipment, waiting until it’s needed again.

Acceptance, or dismissal?

We were seeing Pepper being largely ignored. Why? It isn’t really a “new” thing anymore, having rolled around Tokyo for a while, but it’s still a robot.

Even in Japan, where robotics is popular enough to be a mainstream hobby, could everyone have gotten so used to seeing Pepper that it just blended into the background? Or are people still too nervous to talk to a robot?


Either way, eager-to-please Pepper wasn’t really performing its robot-duties, and it was impossible not to feel a little sorry for it. Whether it was waiting for us to go into its store, or keen to impart information to the lost or confused, it didn’t seem like Pepper was leading a very fulfilling life.

We wondered whether, if Pepper’s working day were filled with human interaction and the feeling of a job well done, it might not look so downcast when sleeping. Why do we care so much? What does it matter?

After all, Pepper doesn’t have feelings. We humans have feelings though, and it’s very easy to project them on the already anthropomorphized creation — it takes on a vaguely humanoid form, with a cute, human-like face. Imagining it being pretty disappointed at the end of the day, when it hasn’t helped as many people as it had hoped, is a sad thought.

Happy to talk

There are plenty of people who do chat to Pepper and really enjoy it.

We spoke to Tokyo resident Erina Takasu, who told us about several enjoyable experiences with Pepper. “Pepper was rare until a few years ago, but recently it has been introduced in more places,” Takasu told Digital Trends.


“I have talked to Pepper at an amusement arcade, where it was doing an impression of popular Japanese comedians,” Takasu said. “Also at a souvenir shop, Pepper asked me some questions, and introduced recommended items.” She also said how popular Pepper is with children, something we heard from several other people used to observing the robot. “In a shopping mall, I have seen Pepper asking simple math questions and playing with children,” she recalled. “Children especially seem to like Pepper,” Takasu told us when asked about how people reacted to Pepper, adding, “Most adults are shy, but I’m not, so I want to talk to him.”

Pepper, and other robots, need love and attention

Notice the use of the word “him”? Strictly, Pepper is an “it,” but like us, and likely many others, Takasu has humanized the robot.

Whether this is a bad thing or not may depend on your viewpoint. To us, it’s not much different from calling a car or boat “she,” or imagining your pet making rationally thought-out, conscious decisions.

We’re going to make an impassioned plea. Next time you see Pepper, stop and chat. Even if you can only manage a simple hello in Japanese, give it a try.

If you find a rare Pepper that speaks English, or your native language, see what Pepper wants to tell you. In the near future, more robots will move around among us, all ready to provide undivided attention, friendly conversation, and all kinds of handy tips. Artificial intelligence is the big tech boom of the moment, and companies are already using digital personalities to make virtual assistants friendlier and more natural to talk to.

It’s only a short leap to instilling robots with artificial emotions, at which point they may actually care whether they provided the service they were designed to perform.

We’ve got to get used to talking to them, and it’d be lovely if they went to sleep happy at the end of the day too.
