Deep learning vs. machine learning: Demystifying artificial intelligence

In recent months, Microsoft, Google, Apple, Facebook, and other companies have declared that we no longer live in a mobile-first world. Instead, it's an artificial intelligence-first world, one where digital assistants and other smart services become your primary source of information and your primary way of getting tasks done. The typical smartphone or PC is now a secondary tool.

Backing this new frontier are two terms you'll likely hear often: machine learning and deep learning. These are two methods of "teaching" artificial intelligence to perform tasks, but their uses go way beyond creating smart assistants. What's the difference?

Here's a quick breakdown.

Computers now see, hear, and speak

With the help of machine learning, computers can now be "trained" to predict the weather, determine stock market outcomes, understand your shopping habits, control robots in a factory, and so on.

Google, Amazon, Facebook, Netflix, LinkedIn, and other popular consumer-facing services are all backed by machine learning. But at the heart of all this learning is what's known as an algorithm. Simply put, an algorithm is not a complete computer program (a full set of instructions), but a limited sequence of steps for solving a single problem.

For example, a search engine relies on an algorithm that takes the text you enter into the search box and searches the connected database to return related results. It follows specific steps to achieve a single, specific goal.

Machine learning has actually been around since 1956. Arthur Samuel didn't want to write a highly detailed, lengthy program that could enable a computer to beat him at checkers.

Instead, he created an algorithm that enabled the computer to play against itself thousands of times so it could “learn” how to perform as a stand-alone opponent. By 1962, this computer beat the Connecticut state champion. Thus, at its core, machine learning is based on trial and error.

We can't write a program by hand that helps a self-driving car distinguish a pedestrian from a tree or another vehicle, but we can create an algorithm that lets a program learn to solve that problem from data. Algorithms can also be created to help programs predict the path of a hurricane, diagnose Alzheimer's early, determine the world's most overpaid and underpaid soccer stars, and so on. Machine learning typically runs on low-end devices, and breaks a problem down into parts.

Each part is solved in order, and the results are then combined to form a single answer to the problem. Well-known machine learning researcher Tom Mitchell of Carnegie Mellon University explains that a computer program is "learning" from experience if its performance at a specific task improves with that experience. Machine learning algorithms essentially enable programs to make predictions, and to get better at those predictions over time through trial and error.

Here are the four main types of machine learning.

Supervised machine learning

In this scenario, you are providing a computer program with labeled data.

For instance, if the assigned task is to separate pictures of boys and girls using an image-sorting algorithm, pictures of a male child carry a "boy" label and pictures of a female child carry a "girl" label. This is the "training" dataset, and the labels remain in place until the program can sort the images at an acceptable success rate.
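To make that concrete, here is a minimal sketch of supervised learning in Python, assuming hand-made numeric features (hair length and jaw width stand in for whatever a real system would measure) and the scikit-learn library; it illustrates the idea rather than a real image classifier.

```python
# Minimal supervised-learning sketch: hypothetical features, human-supplied labels.
from sklearn.neighbors import KNeighborsClassifier

# Toy "training" dataset: [hair_length_cm, jaw_width_cm] for each labeled picture.
X_train = [[5, 9.0], [7, 8.8], [30, 7.5], [25, 7.2], [4, 9.2], [28, 7.0]]
y_train = ["boy", "boy", "girl", "girl", "boy", "girl"]   # labels supplied by a human

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)          # "training" = learning from the labeled examples

print(model.predict([[6, 9.1]]))     # -> ['boy'], the best guess for a new example
```

The crucial point is that the labels come from a human; the program's only job is to generalize from them.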

Semi-supervised machine learning

In this case, only a few of the images are labeled. The program uses an algorithm to make its best guess about the unlabeled images, and those guesses are fed back in as additional training data. A new batch of images is then provided, again with only a few carrying labels. It's a repetitive process that continues until the program can distinguish between boys and girls at an acceptable rate.
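A toy version of that guess-and-retrain loop might look like the sketch below, again with made-up feature values; real systems use far better confidence estimates than a one-nearest-neighbor vote.

```python
# Toy self-training loop: guess labels for unlabeled examples, keep confident guesses,
# retrain, repeat. Feature values are hypothetical.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

labeled_X = np.array([[5.0, 9.0], [30.0, 7.5]])           # the few human-labeled examples
labeled_y = np.array(["boy", "girl"])
unlabeled_X = np.array([[6.0, 9.1], [28.0, 7.2], [27.0, 7.4], [4.0, 8.9]])

for _ in range(3):                                        # a few rounds of guess, then retrain
    model = KNeighborsClassifier(n_neighbors=1).fit(labeled_X, labeled_y)
    guesses = model.predict(unlabeled_X)
    confident = model.predict_proba(unlabeled_X).max(axis=1) >= 0.9
    labeled_X = np.vstack([labeled_X, unlabeled_X[confident]])   # confident guesses become
    labeled_y = np.concatenate([labeled_y, guesses[confident]])  # new "training" data
    unlabeled_X = unlabeled_X[~confident]
    if len(unlabeled_X) == 0:
        break

print(labeled_y)   # every example now carries a label, most of them guessed by the program
```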

Unsupervised machine learning

This type of machine learning doesn't involve labels whatsoever.

Instead, the program is blindly thrown into the task of splitting images of boys and girls into two groups, using one of two methods. One algorithm, called "clustering," groups similar objects together based on characteristics such as hair length, jaw size, eye placement, and so on. The other, called "association," creates if/then rules based on similarities it discovers. In other words, it determines a common pattern between the images and sorts them accordingly.
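As a rough illustration of the clustering approach, the sketch below (made-up feature values, scikit-learn's KMeans) splits unlabeled examples into two groups without ever being told what those groups mean.

```python
# Clustering sketch: no labels at all, just "put similar things together."
from sklearn.cluster import KMeans

# Hypothetical [hair_length_cm, jaw_width_cm] features, no labels attached.
X = [[5, 9.0], [7, 8.8], [30, 7.5], [25, 7.2], [4, 9.2], [28, 7.0]]
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

print(groups)   # e.g. [0 0 1 1 0 1]: two groups, but the program never learns their names
```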

Reinforcement machine learning

Chess would be an excellent example of this type of algorithm.

The program knows the rules of the game and how to play, and goes through the steps to complete the round. The only information provided to the program is whether it won or lost the match. It continues to replay the game, keeping track of its successful moves, until it finally wins a match.
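Chess is far too big to reproduce in a few lines, but the same win-or-lose feedback loop can be sketched on a toy game: the hypothetical program below is told only, at the end of each playthrough, whether it reached the winning square, and it gradually scores its moves accordingly.

```python
# Toy reinforcement learning: a five-square board where "winning" means reaching the
# rightmost square. The only feedback the program gets is whether it won or lost.
import random

n_states, actions = 5, [-1, +1]                      # the two possible moves: left or right
value = {(s, a): 0.0 for s in range(n_states) for a in actions}

for episode in range(500):
    state, history = 2, []                           # start in the middle, remember the moves
    for _ in range(10):
        if random.random() < 0.2:                    # occasionally explore a random move
            action = random.choice(actions)
        else:                                        # otherwise play the best-scored move
            action = max(actions, key=lambda a: value[(state, a)])
        history.append((state, action))
        state = min(max(state + action, 0), n_states - 1)
        if state == n_states - 1:
            break
    reward = 1.0 if state == n_states - 1 else -1.0  # the only signal: did we win this game?
    for s, a in history:                             # nudge every move toward that outcome
        value[(s, a)] += 0.1 * (reward - value[(s, a)])

print(max(actions, key=lambda a: value[(2, a)]))     # the learned opening move: +1 (go right)
```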

Now it’s time to move on to a deeper subject: deep learning.

Twitter is using A.I. to ditch those awful auto-cropped photos

Twitter's auto crop feature works a bit like the tweet character limit: it keeps images in the feed at a consistent size. Now Twitter is getting better at those crops, thanks to artificial intelligence. The company is rolling out a smarter auto crop based on neural networks, it announced in a blog post on January 24. The previous auto crop relied on face detection to keep faces in the frame.

When no faces were detected in the image, the software would simply crop the preview at the center; clicking on the image would still show the entire shot. Twitter says the centered crop often led to awkward previews, and sometimes the software failed to identify faces at all. To fix those awkwardly cropped previews, Twitter engineers used what are called saliency maps to train a neural network.

Saliency maps are built from eye-tracking studies that determine which areas of an image most catch the viewer's eye. Earlier research in the area showed that viewers tend to focus on faces, text, animals, objects, and areas with high contrast. Twitter used that earlier data to train its program to understand which areas of an image matter most.

Using that data, the program can recognize those features and place the auto crop so that the most salient areas of the image stay inside the frame. But Twitter wasn't done: while the saliency software works well, it's also slow, which would have prevented tweets from being posted in real time. To solve the awkward-crop problem without a slowdown, Twitter refined the program again, using two techniques that improved its speed tenfold.

The first technique trained a smaller, faster network to reproduce the predictions of that first accurate but slow program. Next, the engineers reduced the saliency map to a fixed number of the most important points on each image, effectively discarding the smaller, less important visual cues while keeping the largest salient areas intact.
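Twitter hasn't published this code, but the basic "crop where the saliency map is hottest" step can be sketched as follows, with a made-up saliency grid standing in for the neural network's output.

```python
# Toy saliency-based crop: slide a window over a saliency map and keep the spot
# with the highest total saliency. The map here is invented for illustration.
import numpy as np

saliency = np.zeros((6, 8))
saliency[1:3, 6:8] = 1.0                 # pretend the network found a face near the top-right

def best_crop(sal, h, w):
    """Return the (top, left) corner of the h x w window containing the most saliency."""
    best, best_score = (0, 0), -1.0
    for top in range(sal.shape[0] - h + 1):
        for left in range(sal.shape[1] - w + 1):
            score = sal[top:top + h, left:left + w].sum()
            if score > best_score:
                best, best_score = (top, left), score
    return best

print(best_crop(saliency, 4, 4))         # -> (0, 4): the crop hugs the salient region
```

A production system would run this kind of search over a neural network's saliency output at full image resolution, which is exactly why speeding the model up mattered.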

The resulting software allows images to post in real time, but with better crops. In a group of before-and-after pictures, Twitter shows images with faces that the earlier system had missed now cropped to the face rather than the feet.

Other examples show objects that the first program cut out because they didn't sit in the middle of the image, but that are cropped more appropriately by the updated algorithm.

Another example shows the program recognizing text and adjusting the crop to include a sign.

The updated cropping algorithm is already rolling out globally on both iOS and Android apps as well as Twitter.com.

I played ping pong against a giant robot, and it was awesome

In a lot of ways, CES 2018 was the year of the unexpected. Nobody expected the torrential downpour that flooded the convention center. Nobody expected the power outage that left thousands of attendees in the dark.

And personally, I never expected to check “play ping pong against a robot” off my bucket list — but that’s exactly what happened at CES this year. The robot, which was created by an industrial automation company called Omron, was designed to showcase the company’s robotics and artificial intelligence technology. Here’s how it works: After you serve the ball, the robot (known as Forpheus) uses cameras and machine vision algorithms to track the ball and predict its trajectory.

The robot then uses its robotic arms to swing the paddle and hit the ball back to you. This all happens in real time. When I finally got my chance to square off against the bot, I was ready for an epic “man-vs-machine” battle royale — but much to my surprise, that’s not actually what it’s designed for.
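Omron hasn't detailed Forpheus's internals, but the "track the ball and predict its trajectory" step can be approximated with simple projectile physics; the sketch below fits a line and a parabola to a few hypothetical camera observations to estimate where the ball will arrive.

```python
# Back-of-the-envelope trajectory prediction from a few (time, position) samples.
# The numbers are invented; a real system fuses many camera frames and models spin.
import numpy as np

t = np.array([0.00, 0.05, 0.10, 0.15])   # timestamps of camera frames (s)
x = np.array([0.00, 0.14, 0.28, 0.42])   # distance traveled toward the robot (m)
z = np.array([0.30, 0.33, 0.35, 0.36])   # ball height (m)

vx = np.polyfit(t, x, 1)[0]              # horizontal speed, assumed constant
a, b, c = np.polyfit(t, z, 2)            # height follows a parabola under gravity

t_hit = (1.0 - x[-1]) / vx + t[-1]       # when the ball reaches the paddle plane at x = 1 m
print("predicted paddle height:", a * t_hit**2 + b * t_hit + c)
```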

Forpheus is intended to be cooperative rather than adversarial, so instead of spiking the ball back at you and racking up points, it tries to keep a volley going.

Omron describes it as a coach of sorts. The system automatically adjusts to your skill level, and then gradually scales up the difficulty as you play — thereby pushing you to improve. It can even read your facial expressions.

If you’re struggling and getting discouraged, the system will give you words of encouragement and try to keep you from giving up. Still, despite Forpheus’s cooperative play style, I couldn’t resist the urge to score on it. After a couple friendly volleys, I kicked things up a notch and started hitting harder, lower-angle shots.

Forpheus returned them with ease, so I turned up the heat a little more and threw him a short lob. It didn’t even faze him. The robot seemed infallible, and I was beginning to lose hope, but I had one more trick up my sleeve.

On the next volley, I fired off a high-velocity spin shot, and ol’ Forphy had no idea what hit him.

The ball curved through the air and cut hard after the bounce — something that the system just wasn’t prepared for.

Turns out robotic arms and A.I. are no match for my five years of ping pong practice in the DT breakroom.

Truly creative A.I. is just around the corner. Here’s why that’s a big deal

Joe Kennedy, father of the late President John F. Kennedy, once said that, when shoeshine boys start giving you stock tips, the financial bubble is getting too big for its own good.

By that same logic, when Hollywood actors start tweeting about a once-obscure part of artificial intelligence (A.I.), you know that something big is happening, too. That’s exactly what occurred recently when Zach Braff, the actor-director still best known for his performance as J.D. on the medical comedy series Scrubs, recorded himself reading a Scrubs-style monolog written by an A.I.

“What is a hospital?” Braff reads, adopting the thoughtful tone J.D. used to wrap up each episode in the series. “A hospital is a lot like a high school: the most amazing man is dying, and you’re the only one who wants to steal stuff from his dad. Being in a hospital is a lot like being in a sorority. You have greasers and surgeons.

And even though it sucks about Doctor Tapioca, not even that's sad." Yes, it's nonsense — but it's charming nonsense.

Created by Botnik Studios, which recently used the same statistical predictive tools to write an equally bonkers new Harry Potter story, the A.I. mimics the writing style of the show's real scripts. It sounds right enough to be recognizable but wrong enough to be obviously the work of a silly machine, like the classic anecdote about early MIT machine translation software that rendered the Biblical saying "The spirit is willing, but the flesh is weak" into Russian and back again, ending up with "The whisky is strong, but the meat is rotten." As Braff's publicizing of the Scrubs-bot shows, the topic of computational creativity is very much in right now.
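Botnik's tools are more elaborate than this, but the core idea of statistical predictive text can be sketched with a tiny Markov chain that learns which word tends to follow which, then babbles accordingly.

```python
# Toy statistical predictive text: learn "what word follows what" from a tiny corpus,
# then generate by sampling those next-word choices. Corpus and seed are arbitrary.
import random
from collections import defaultdict

corpus = ("a hospital is a lot like a high school "
          "being in a hospital is a lot like being in a sorority").split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

random.seed(4)
word, output = "a", ["a"]
for _ in range(12):
    word = random.choice(next_words[word]) if next_words[word] else random.choice(corpus)
    output.append(word)

print(" ".join(output))   # plausible-sounding nonsense in the style of the corpus
```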

Once the domain of a few lonely researchers, trapped on the fringes of computer science and the liberal arts, the question of whether a machine can be creative is everywhere. Alongside Botnik’s attempts at Harry Potter and Scrubs, we’ve recently written about a recurrent neural network (RNN) that took a stab at writing the sixth novel in the Song of Ice and Fire series, better known to TV fans as Game of Thrones. The RNN was trained for its task by reading and analyzing the roughly 5,000 pages of existing novels in the series.

Larger companies have gotten in on the act, too: Google's Deep Dream project purposely magnifies some of the recognition errors in the company's deep learning neural networks to create wonderfully trippy effects.

Right now, we're at the "laughter" stage of computational creativity for the most part. That doesn't have to mean outright mocking A.I.'s attempts to create, but it's extremely unlikely that, say, an image generated by Google's Deep Dream will hang in an art gallery any time soon — even if the same image painted by a person might be taken more seriously. It's fair to point out that today's machine creativity typically involves humans making some of the decisions, but the credit isn't split between human and machine the way it would be for a movie written by two screenwriters.

Rightly or wrongly, we give A.I. the same amount of credit in these scenarios that we might give to the typewriter that "War and Peace" was written on. In other words, very little.

But that could change very soon. Because computational creativity is doing a whole lot more than generating funny memes and writing parody scripts. NASA, for example, has employed evolutionary algorithms, which mimic natural selection in machine form, to design satellite components.

These components work well — although their human “creators” are at a loss to explain exactly how. Legal firms, meanwhile, are using A.I. to formulate and hone new arguments and interpretations of the law, which could be useful in a courtroom. In medicine, the U.K.’s University of Manchester is using a robot called EVE to formulate hypotheses for future drugs, devise experiments to test these theories, physically carry out these experiments, and then interpret the results.

IBM's "Chef Watson" utilizes A.I. to generate its own unique cooking recipes, based on a knowledge of 9,000 existing dishes and an awareness of which chemical compounds work well together. The results are things like Turkish-Korean Caesar salads and Cuban lobster bouillabaisse that no human chef would ever come up with, but which taste good nevertheless. In another domain, video game developer Epic Stars recently used a deep learning A.I. to compose the main theme for its new game Pixelfield, which was then performed by a live orchestra.
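NASA's evolutionary design software is, of course, far more sophisticated, but the bare mechanism the article describes (keep the fittest candidate "designs," mutate them, repeat) fits in a few lines; the fitness function here is an arbitrary stand-in for a real engineering simulation.

```python
# Minimal evolutionary-algorithm sketch: selection plus mutation over random "designs."
import random

def fitness(design):                      # stand-in for a real engineering simulation
    return -sum((x - 0.7) ** 2 for x in design)

population = [[random.random() for _ in range(5)] for _ in range(30)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # selection: keep the best designs
    children = [[x + random.gauss(0, 0.05) for x in random.choice(survivors)]
                for _ in range(20)]                   # mutation: perturb the survivors
    population = survivors + children

print(round(fitness(population[0]), 4))               # approaches 0 as designs improve
```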

Finally, newspapers like the Washington Post have stopped sending human reporters to cover events like the Olympics, letting machines do the job instead.

To date, the newspaper’s robo-journalist has written close to 1,000 articles. Which brings us to our big point: Should a machine’s ability to be creative serve as the ultimate benchmark for machine intelligence? Here in 2017, brain-inspired neural networks are getting bigger, better, and more complicated all the time, but we still don’t have an obvious test to discern when a machine is finally considered intelligent.

While it's not a serious concern of most A.I. researchers, the most famous test of machine intelligence remains the Turing Test, which suggests that if a machine is able to fool us into thinking it's intelligent, we must therefore agree that it is intelligent. The result, unfortunately, is that machine intelligence is reduced to the level of an illusionist's trick — attempting to pull the wool over the audience's eyes rather than actually demonstrating that a computer can have a mind.

An alternative approach is an idea called the Lovelace Test, named after the pioneering computer programmer Ada Lovelace. Appropriately enough, Lovelace represented the intersection of creativity and computation: she was the daughter of the Romantic poet Lord Byron and worked alongside Charles Babbage on his ill-fated Analytical Engine in the 1800s. Lovelace was impressed by the idea of the Analytical Engine, but argued that it could never be considered capable of true thinking, since it was only able to carry out pre-programmed instructions.

"The Analytical Engine has no pretensions whatever to originate anything," she famously wrote. "It can do [only] whatever we know how to order it to perform." The broad idea of the Lovelace Test involves three separate parts: the human creator, the machine component, and the original idea. The test is passed only if the machine component is able to generate an original idea, without the human creator being able to explain exactly how this has been achieved.

At that point, it is assumed that a computer has come up with a spontaneous creative thought. Mark Riedl, an associate professor of interactive computing at Georgia Tech, has proposed a modification of the test in which certain constraints are given — such as “create a story in which a boy falls in love with a girl, aliens abduct the boy, and the girl saves the world with the help of a talking cat.” “Where I think the Lovelace 2.0 test plays a role is verifying that novel creation by a computational system is not accidental,” Riedl told Digital Trends. “The test requires understanding of what is being asked, and understanding of the semantics of the data it is drawing from.” It’s an intriguing thought experiment.

This benchmark may be one that artificial intelligence has not yet cracked, but surely it's getting closer all the time. When machines can create patentable technologies, dream up useful hypotheses, and potentially one day write movie scripts that will sell tickets to paying audiences, it's difficult to call their insights accidental. To borrow a phrase often attributed to Mahatma Gandhi, "First they ignore you, then they laugh at you, then they fight you, then you win." Computational creativity has been ignored.

Right now, either fondly or maliciously, it is being laughed at. Next it will start fighting our preconceptions, such as our ideas about which jobs count as creative, the very roles we are frequently assured are safe from automation. And after that?

Just maybe it can win.

Instead of stealing jobs, what if A.I. just tells us how to do them better?

In the early part of the twentieth century, a management consultant and mechanical engineer named Frederick Taylor wrote a book titled The Principles of Scientific Management. Workplace inefficiency, Taylor argued, was one of the greatest crimes in America, robbing workers and employers alike of the prosperity they deserved. For example, Taylor noted the "deliberate loafing" the bricklayers' union of the time forced on its workers by limiting them to just 275 bricks per day when working on a city contract, and 375 per day on private work.

Taylor had other ideas. In the interests of efficiency, he believed that every single act performed by a workforce could be measured and modified to make it more efficient, "as though it were a physical law like the Law of Gravity." Others took up Taylor's dream of an efficient, almost mechanized workforce.

Contemporaries Frank and Lillian Gilbreth studied the science of bricklaying, introducing ambidexterity and special scaffolds designed to reduce lifting. The optimal number of motions bricklayers were told to perform was pared down to between two and five depending on the job, and new measures were introduced to keep track of the number of bricks an individual laid — to both incentivize workers and reduce wastage.

Like many management theories, Taylorism had its moment in the sun before being replaced. Today, however, its fundamental ideas are enjoying a surprising resurgence. Aided by a plethora of smart sensors and the latest advances in artificial intelligence, it's now possible to monitor workers more closely than ever, and to offer them real-time feedback in a way that no (human) manager ever could.

A recent study from the University of Waterloo showed how motion sensors and A.I. can be used to extract insights from expert bricklayers by equipping them with sensor suits while they worked to build a concrete wall. The study discovered that master masons don’t necessarily follow the standard ergonomic rules taught to novices. Instead, they employ movements (such as swinging, rather than lifting, blocks) that enable them to work twice as fast with half the effort.
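The Waterloo team hasn't released its code, but the general shape of the idea (feed labeled sensor readings to a learning algorithm and let it find what separates expert movement from novice movement) can be sketched with made-up joint-angle features and an off-the-shelf classifier.

```python
# Purely illustrative, not the Waterloo team's method: classify "expert-style" versus
# "novice-style" lifts from hypothetical sensor-suit features.
from sklearn.tree import DecisionTreeClassifier

# [trunk_flexion_deg, swing_used (1/0)] for a handful of recorded lifts (made-up values)
X = [[55, 0], [60, 0], [20, 1], [25, 1], [58, 0], [22, 1]]
y = ["novice", "novice", "expert", "expert", "novice", "expert"]

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[30, 1]]))          # -> ['expert'] for a new, unseen lift
```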

“As we all know, [an] ageing workforce is a threat to the national economy,” researcher Abdullatif Alwasel told Digital Trends. “In highly physical work, such as masonry, the problem lies in the nature of work. Masonry is highly physical and repetitive work: two major factors that are known to cause musculoskeletal injuries. However, when this kind of work is done in an ergonomically safe way, it doesn’t cause injuries.

This is apparent through the percentage of injuries in expert workers versus novice or less experienced workers. [Our team's] work looks at using A.I. to extract safe postures that expert workers use to perform work safely and effectively as a first step towards creating a training tool for novice workers to graduate safe and effective masons and to decrease the number of injuries in the trade."

Alwasel describes the team's current work as a "first step." By the end of the project, however, the researchers hope to develop a real-time feedback system that alerts workers whenever they adopt an unsafe posture. Thanks to the miniaturization of components, it's not out of the question that such a sensor suit could one day be used on construction sites across America. As with Taylor's dream, both workers and employers would benefit from the enhanced efficiency.

“Our next step is to find out whether the concept of expert safe workers applies to other trades that have similar situation,” Alwasel said. “I think commercialization is a final step that has to be done to make use of this technology and we are looking for ways to do that.”

Objects that nudge back

It should be noted, however, that the classical concept of Taylorism is not always viewed entirely favorably. Critics point out that it robbed individuals of their autonomy, that it made jobs more rote and repetitive, that it could adversely affect workers' wellbeing by pushing them to over-speed, and that it assumed speed and efficiency were the ultimate goal of… well, everything, really.

It’s difficult to criticize a project like the University of Waterloo’s, which is focused on reducing injuries among the workforce. However, this same neo-Taylorist approach can be seen throughout the tech sector. In Amazon’s warehouses, product pickers (or “fulfillment associates”) are given handheld devices, which reveal where individual products are located and, via a routing algorithm, tell them the shortest possible journey to get there.
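Amazon doesn't publish its routing algorithm, but the "shortest possible journey" idea can be illustrated with a textbook breadth-first search over a made-up warehouse grid.

```python
# Toy picker routing: breadth-first search for the shortest walk through a small,
# hypothetical warehouse grid (1 = shelf, 0 = walkable aisle).
from collections import deque

grid = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]

def shortest_path(start, goal):
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < 4 and 0 <= nc < 4 and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))

print(shortest_path((0, 0), (3, 3)))     # the aisle-by-aisle route to the product's bin
```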

However, the devices also collect constant, real-time streams of data on how fast employees walk and complete individual orders, thereby quantifying productivity. Quoted in an article for the Daily Mail, a warehouse manager described workers as "sort of like a robot, but in human form." Similar technology is increasingly used in warehouses around the world, not just Amazon's. Nor is the approach limited to warehouses.

A company called CourseSmart creates study aids that allow teachers to see whether their students are skipping pages in their textbooks, failing to highlight passages or take notes, or simply not studying. This information, even when it concerns students' out-of-lesson time, can be fed back to teachers. The dean of one university business school described the service to the New York Times as "Big Brother, sort of, but with a good intent." The idea is to find out exactly which practices produce good students, and then to nudge them toward those practices.

These "nudges" form an increasingly large part of our lives. Previous generations of "dumb" objects nudged us subtly (a disposable plastic cup, for example, starts disintegrating after a few uses and thereby encourages you to throw it away). Today's smart technology, by contrast, can give us constant feedback on everything from our posture, to which route to take to the bathroom for a quicker toilet break, to how best to study. Autonomous technology challenges the autonomy of individuals.

Whether that's a bad thing or not depends a whole lot on your perspective. In Against Autonomy, Sarah Conly argues that we should "save people from themselves." It's part of a larger argument that might begin with technology that modifies how you work, continue to the banning of cigarettes and excessively sized meals, and perhaps even extend to stopping you from spending too much of your paycheck without putting enough into savings.

There are no easy answers here. As with so much of modern technology (news feeds that show us only articles they think will be of interest, smart speakers in the home, user data exchanged for “free” services, etc.), a lot depends on what we gain versus what we lose. We might be very willing to have a smart exoskeleton that tells us how not to damage our backs when lifting heavy bricks.

We may be less so if we feel that our humanity is minimized by the neverending push toward efficiency. What’s not in question is whether the tools now exist to help make this neo-Taylorism a reality. They most certainly do.

Now we need to work out how best to use them.

To paraphrase the chaos theory mathematician Dr. Ian Malcolm (also known as Jeff Goldblum's character in Jurassic Park), we've been so preoccupied with whether or not we could achieve these things, we haven't necessarily thought enough about whether we should.
