
Five high-tech meat thermometers to ensure properly cooked meals

For some people, there’s nothing scarier in the kitchen than the color pink. While you might underwhelm dinner guests by serving a steak that is too rare or too well-done, you can also make people extremely sick if you put undercooked chicken on their plate. That being the case, here are a few meat thermometers that will make sure your meat is perfectly cooked every time.

Sur La Table Dual Sensing Probe Thermometer and Timer (£29.95)

This meat thermometer is a no-frills model that is convenient, reliable, and straightforward enough for anyone to learn quickly.

Thanks to the dual-sensing probe, you can program it to measure both internal food temperature and ambient oven temperature. When the oven reaches the desired temperature, a chime goes off to let you know. Then, another alarm sounds once the meat has reached its target temperature.

With this multitasking tool, you can take the guesswork out of the cooking process to ensure perfectly cooked meats every time. Buy one now from: Sur La Table

The Meater (£69)

Is opening up your grill to check a thermometer like Sur La Table’s a little too much work for you?

Then check out the Meater. This next-gen meat thermometer was one of the hottest items on Indiegogo in 2015 when nearly 10,000 people pledged more than £1 million to make it happen. The Meater is essentially a wireless thermometer that constantly monitors your food’s temperature as it cooks.

The thermometer also connects to a smartphone app, so you can check your food’s progress simply by glancing at your phone. The Meater knows the proper temperature for everything from beef to chicken, and will automatically alert you when your meal is properly cooked. Buy one now from: The Meater

Lynx Smart Grill (£7,000)

While products like the Meater measure the internal temperature of your meat directly, the Lynx Smart Grill ensures perfectly cooked food in a different way: it combines precise cooking temperatures with programmed recipes, and practically does the cooking for you.

Once you tell the grill what you’re cooking via the connected smartphone app, the grill adjusts the temperature, sets the timer, and starts cooking. When something requires human intervention, say, flipping a burger, the Lynx will alert you when it’s time. If you choose the right recipe, select the correct meat size, and flip when the grill tells you to, the Lynx Smart Grill should give you something that is perfectly safe to eat.

Those who are a little paranoid about raw meat, however, might still want to check with a meat thermometer before taking their first bite. Buy one now from: Lynx

Weber iGrill 2 (£100)

The Weber iGrill 2 aims to take some of the stress out of grilling.

Forget the days of jumping in and out of conversations as you hover around the grill to make sure nothing is burning. With this device, you can get real-time temperature updates on your phone or on the magnetic display that attaches to your grill. The Weber iGrill 2 also sports 200 hours of battery life and will keep talking to your phone via Bluetooth as long as you’re within its 150-foot range.

You can even set custom temperature alarms and timers on the iGrill app, and if you’re a true grilling nerd, you can graph your cooking data. Buy one now from: Weber

ThermoWorks ThermoPop (£29+)

The biggest problem with smart thermometers such as the Weber iGrill 2 or the Meater — both of which provide real-time temperatures while you’re cooking — is that the number of pieces of meat you can monitor is limited by the number of thermometers you have on hand.

While the ThermoPop may require a little extra work, this device is relatively inexpensive, reliable, and easy to use. The digital thermometer can determine the internal temperature of your meat within a single degree in just a few seconds. It also features a rotating display, which makes it easy to read at any angle.

ThermoWorks also makes a top-of-the-line meat thermometer, the Thermapen, which boasts 0.1-degree precision, but the ThermoPop is more than enough to keep backyard cooks from serving raw meat.

Buy one now from: ThermoWorks


Strava responds to claims that its app compromised military secrecy

Fitness wearables and apps are very useful when trying to keep in shape, and members of the U.S. military have embraced the technology wholeheartedly. However, easy access to all that information online may have an unexpected downside. Strava is a social networking app geared toward athletes: users upload their fitness data, and the company uses the resulting GPS tracking data in a variety of web projects.

One of the projects of Strava Labs is a “Global Heatmap,” an easily accessible visualization of the network’s data that shows popular running and cycling routes. The heatmap boasts data from more than one billion activities all around the globe. However, military analysts told The Guardian that the level of detail in the maps can also reveal the location of secret military facilities, some of them in conflict areas.
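Strava hasn’t published the internals of its pipeline, but the basic idea behind a heatmap like this is simple: bin GPS points from many activities into a spatial grid and count how often each cell is visited. The Python sketch below illustrates that idea under assumptions of our own; the `Activity` record, the `is_private` flag, and the grid resolution are hypothetical stand-ins, not Strava’s actual data model.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical activity record for illustration; Strava's real data model differs.
@dataclass
class Activity:
    points: list            # list of (latitude, longitude) GPS samples
    is_private: bool = False

def build_heatmap(activities, cell_deg=0.001):
    """Count GPS samples per grid cell (roughly 100 m squares at this resolution).

    Private activities are skipped, mirroring in spirit the exclusions
    Strava describes for its public heatmap.
    """
    counts = Counter()
    for activity in activities:
        if activity.is_private:
            continue
        for lat, lon in activity.points:
            # Snap each point to a coarse grid cell.
            cell = (round(lat / cell_deg), round(lon / cell_deg))
            counts[cell] += 1
    return counts

# With enough uploads, the most-visited cells trace out popular routes --
# including, as analysts noted, jogging trails around otherwise unmapped bases.
demo = build_heatmap([Activity(points=[(34.5553, 69.2075), (34.5554, 69.2076)])])
print(demo.most_common(3))
```

Aggregation like this is anonymized in the sense that no individual is named, yet the concentrated trails themselves can still give locations away, which is precisely the concern analysts raised.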

“Fitness and social media company Strava releases activity heat map. Excellent for locating military bases (h/t to @Nrg8000). https://t.co/n5RWcI7BJF pic.twitter.com/7zzNcYV42e” — Tobias Schneider (@tobiaschneider), January 27, 2018

“If soldiers use the app like normal people do, by turning on tracking when they go to do exercise, it could be especially dangerous,” said analyst Nathan Ruser. “U.S. bases are clearly identifiable and mappable.” Forward operating bases in Afghanistan, for example, can easily be mapped by their jogging trails, even though those installations don’t appear on services like Google Maps.

An Afghanistan veteran posting on the Y Combinator forum Hacker News noted, “A well-established military base, even in a combat zone, has access to Wi-Fi and cellphone network. We are constantly training physically, and we like to keep track of ourselves. We were early adopters of fitness trackers, and I used a couple of them myself also.”

In remote locations, Strava users seem to be mostly U.S. military personnel, making them easily identifiable. “In Syria, known coalition (i.e., U.S.) bases light up the night. Some light markers over known Russian positions, no notable coloring for Iranian bases,” observed analyst Tobias Schneider. “A lot of people are going to have to sit through lectures come Monday morning.” Strava has responded to these reports, defending its products and noting that users have the ability to turn off public sharing of their activity.

In a statement to The Guardian, Strava said: “Our global heatmap represents an aggregated and anonymised view of over a billion activities uploaded to our platform. It excludes activities that have been marked as private and user-defined privacy zones. We are committed to helping people better understand our settings to give them control over what they share.” The company also shared a blog post from 2017, which detailed the ways that users could keep their activities secret.

However, it may be too little, too late.

As The National points out, users of social media have already been posting military base locations and possibly exposing ongoing covert operations in places like Mali and the South China Sea.

Following this debacle, it’s extremely likely that senior figures in militaries around the world are looking at banning activity trackers.


Don’t be fooled by dystopian sci-fi stories: A.I. is becoming a force for good

One of the most famous sayings about technology is the “law” laid out by the late American historian Melvin Kranzberg: “Technology is neither good nor bad; nor is it neutral.” It’s a great saying: brief, but packed with instruction, like a beautifully poetic line of code. If I understand it correctly, it means that technology isn’t inherently good or bad, but that it will certainly impact upon us in some way — which means that its effects are not neutral.

A similarly brilliant quote came from the French cultural theorist Paul Virilio: “the invention of the ship was also the invention of the shipwreck.” To adopt that last image, artificial intelligence (A.I.) is the mother of all ships.

It promises to be as significant a transformation for the world as the arrival of electricity was in the nineteenth and twentieth centuries. But while many of us will coo excitedly over the latest demonstration of DeepMind’s astonishing neural networks, a lot of the discussion surrounding A.I. is decidedly negative. We fret about robots stealing jobs, autonomous weapons threatening the world’s wellbeing, and the creeping privacy issues of data-munching giants.

Heck, once artificial general intelligence arrives, some pessimists seem to think the only debate is whether we’re obliterated by Terminator-style robots or turned into grey goo by nanobots. While some of this technophobia is arguably misplaced, it’s not hard to see the critics’ point. Tech giants like Google and Facebook have hired some of the greatest minds of our generation and put them to work not curing disease or rethinking the economy, but coming up with better ways to target us with ads.

The Human Genome Project, this ain’t! Shouldn’t a world-changing technology like A.I. be doing a bit more… world changing?

A course in moral A.I.?

2018 may be the year when things start to change. While the efforts are still small seeds just beginning to sprout green shoots, there’s growing evidence that the project of making A.I. into a true force for good is gaining momentum.

For example, starting this semester, the School of Computer Science at Carnegie Mellon University (CMU) will be teaching a new class, titled “Artificial Intelligence for Social Good.” It touches on many of the topics you’d expect from a graduate and undergraduate class — optimization, game theory, machine learning, and sequential decision making — and will look at these through the lens of how each will impact society. The course will also challenge students to build their own ethical A.I. projects, giving them real-world experience in creating potentially life-changing A.I.


“A.I. is the blooming field with tremendous commercial success, and most people benefit from the advances of A.I. in their daily lives,” Professor Fei Fang told Digital Trends. “At the same time, people also have various concerns, ranging from potential job loss to privacy and safety issues to ethical issues and biases. However, not enough awareness has been raised regarding how A.I. can help address societal challenges.”

Fang describes this new course as “one of the pioneering courses focusing on this topic,” but CMU isn’t the only institution to offer one. It joins a similar “A.I. for Social Good” course offered at the University of Southern California, which started last year. At CMU, Fang’s course is listed as a core course for a Societal Computing Ph.D. program.

During the new CMU course, Fang and a variety of guest lecturers will discuss a number of ways A.I. can help tackle big social problems: machine learning and game theory used to help protect wildlife from poaching, A.I. used to design efficient matching algorithms for kidney exchange, and A.I. used to help prevent HIV among homeless young people by selecting a set of peer leaders to spread health-related information. “The most important takeaway is that A.I. can be used to address pressing societal challenges, and can benefit society now and in the near future,” Fang said. “And it relies on the students to identify these challenges, to formulate them into clearly defined problems, and to develop A.I. methods to help address them.”

Challenges with modern A.I.

Professor Fang’s class isn’t the first time that the ethics of A.I. has been discussed, but it does represent (and, certainly, coincide with) a renewed interest in the field.

A.I. ethics are going mainstream. This month, Microsoft published a book called “The Future Computed: Artificial intelligence and its role in society.” Like Fang’s class, it runs through some of the scenarios in which A.I. can help people today: letting those with limited vision hear the world described to them by a wearable device, and using smart sensors to let farmers increase their yield and be more productive.


There are plenty more examples of this kind. Here at Digital Trends, we’ve covered A.I. that can help develop new pharmaceutical drugs, A.I. that can help people avoid shelling out for a high-priced lawyer, A.I. that can diagnose disease, and A.I. and robotics projects that can help reduce backbreaking work, either by teaching humans how to perform it more safely or by taking them out of the loop altogether.

All of these are positive examples of how A.I. can be used for social good. But for it to really become a force for positive change in the world, artificial intelligence needs to go beyond simply good applications. It also needs to be created in a way that is considered positive by society.

As Fang says, the possibility of algorithms reflecting bias is a significant problem, and one that’s still not well understood. Several years ago, Latanya Sweeney, an African-American professor at Harvard University, showed that Google’s search algorithms were inadvertently racist, linking names more commonly given to black people with ads relating to arrest records.

Sweeney, who had never been arrested, found that she was nonetheless shown ads asking “Have you been arrested?” that her white colleagues were not. Similar case studies have shown that image recognition systems are more likely to associate a picture of a kitchen with women and a picture of sports coaching with men. In these cases, the bias isn’t necessarily the fault of any one programmer, but rather of discriminatory patterns hidden in the large datasets the algorithms are trained on.
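No engineer wrote a rule saying “kitchens are female”; the association falls out of the statistics of the training data. The toy Python sketch below, built on entirely fabricated numbers rather than Google’s data, shows how a model that simply learns label frequencies from a skewed corpus will reproduce that skew in its predictions.

```python
from collections import Counter, defaultdict

# Entirely fabricated toy corpus of (image_context, person_label) pairs,
# used only to show how skewed data produces skewed predictions.
corpus = (
    [("kitchen", "woman")] * 70
    + [("kitchen", "man")] * 30
    + [("sports_coaching", "man")] * 80
    + [("sports_coaching", "woman")] * 20
)

def conditional_frequencies(pairs):
    """Estimate P(label | context) by simple counting -- the kind of statistic
    many learned models end up encoding implicitly."""
    counts = defaultdict(Counter)
    for context, label in pairs:
        counts[context][label] += 1
    return {
        context: {label: n / sum(c.values()) for label, n in c.items()}
        for context, c in counts.items()
    }

freqs = conditional_frequencies(corpus)
print(freqs["kitchen"])          # {'woman': 0.7, 'man': 0.3}
print(freqs["sports_coaching"])  # {'man': 0.8, 'woman': 0.2}
# A classifier trained to maximize accuracy on this corpus will label an
# ambiguous kitchen photo "woman" -- the bias is inherited from the data,
# not written in by any individual programmer.
```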

The same is true for the “black boxing” of algorithms, which can make them inscrutable even to their own creators. In Microsoft’s new book, its authors suggest that A.I. should be built around an ethical framework, a bit like science fiction writer Isaac Asimov’s “Three Laws of Robotics” for the “woke” generation. The book’s six principles hold that A.I. systems should be fair; reliable and safe; private and secure; inclusive; transparent; and accountable.

“If designed properly, A.I. can help make decisions that are fairer because computers are purely logical and, in theory, are not subject to the conscious and unconscious biases that inevitably influence human decision-making,” Microsoft’s authors write.

More work to be done

Ultimately, this is going to be easier said than done. By most accounts, A.I. research done in the private sector far outstrips work done in the public sector, and that raises a problem of accountability in a world where algorithms are guarded as closely as missile launch codes.

There is also little incentive for companies to solve big societal problems if doing so won’t immediately benefit their bottom line (or score them some brownie points that might help them avoid regulation). It would be naive to think that the motivations of profit-driven companies are all going to be altruistic, no matter how much they might suggest otherwise. For broader discussions about the use of A.I. for the public good to happen, something is going to have to change. Is it recognizing the power of artificial intelligence and putting in place more regulation that allows for scrutiny?

Does it mean companies forming ethics boards, as Google DeepMind has done, as part of their research into cutting-edge A.I.? Is it awaiting a market-driven change, or backlash, that will demand tech giants offer more information about the systems that govern our lives? Is it, as Bill Gates has suggested, implementing a robot tax that would curtail the use of A.I. or robotics in some situations by taxing companies for replacing their workers?

None of these solutions is perfect. And the biggest question of all remains: Who exactly defines ‘good’? Debates about how A.I. can be a force for good in our society will involve users, policy makers, activists, technologists, and other interested parties working out what kind of world we want to create, and how best to use technology to achieve it.

As DeepMind co-founder Mustafa Suleyman told Wired: “Getting these things right is not purely a matter of having good intentions. We need to do the hard, practical and messy work of finding out what ethical A.I. really means. If we manage to get A.I. to work for people and the planet, then the effects could be transformational. Right now, there’s everything to play for.”

Courses like Professor Fang’s aren’t the final destination, by any means. But they are a very good start.


Airport codeword aims to stop X-ray machines blowing marriage proposals

X-ray machines and other security procedures at airports are a necessary nuisance that passengers simply have to accept as part of the modern-day travel experience. But with Valentine’s Day fast approaching, airport security can present a whole new challenge for loved-up folks intending to pop the question when they reach their vacation spot: a routine bag search could result in an awkward moment as the ring box is pulled out for all to see. Every year, the machines and their operators blow the cover of at least a few of these people, culminating in a somewhat underwhelming (though definitely memorable) marriage proposal, with the couple surrounded by flustered passengers putting their belts back on rather than the planned idyllic setting of sun, sea, and sand.

In a bid to help keep the secret safe of anyone planning to propose to their partner, an airport in the U.K. has come up with an ingenious solution.

Here’s what you have to do

Officials at East Midlands Airport, about 100 miles north of London, are telling would-be proposers to email them ahead of their arrival to let them know they’ll have a ring in their carry-on baggage. The airport will then send them a code phrase to say to security personnel if they’re singled out for a bag check. Once staff hear the phrase, they’ll take the passenger to a different lane from their partner, so the partner won’t see the ring if it’s pulled out of the bag.

Matthew Quinney, East Midlands Airport’s head of security, said it would be “a big damper on someone’s meticulously planned romantic trip if their big surprise was revealed even before they’ve boarded the plane.” And so, with an uptick in proposals expected ahead of February 14, the airport decided to implement a system to prevent any awkward situations from occurring.

It’s certainly thoughtful of the airport to consider such matters, and the scheme could save some red faces at the X-ray machine.

“With Valentine’s Day coming up, we wanted to reduce the chances of the marriage proposal being ruined at the airport because, frankly, as much as we like the airport, we don’t think it’s the most romantic place to get engaged,” Ioan Reed-Aspley, a spokesman for East Midlands Airport, told BBC Radio.


Facebook is coming closer to humanizing its chatbots

Over the past several years, Facebook has devoted considerable resources to developing chatbots. It has made several advances in this area, but is now focusing its efforts on improving their conversational abilities. Despite the label, chatbots aren’t very good at making small talk.

In a recent report, Facebook’s researchers pointed to several key areas in which chatbots need improvement. The first problem is that these A.I.s don’t have a consistent personality: they don’t stick to the same set of facts about themselves throughout a conversation, which can make the experience feel unnatural.

Perhaps more frustrating is the fact that the A.I. can’t remember its own past responses or those of the person it is talking to, resulting in conversations that can easily go off the rails. Finally, when asked a question they don’t have an answer to, these bots often fall back on canned, pre-programmed responses. Many modern chatbots are trained on lines taken from movies.

This, predictably, has some issues since even the best-written scripts are not natural conversations. Everything is written with the intent of informing the viewer about the film’s characters, world, or narrative. This can often result in strange or nonsensical responses.

To help remedy these problems, Facebook engineers have constructed their own datasets to train the A.I. The datasets were gathered via Amazon’s Mechanical Turk marketplace and consist of more than 160,000 lines of dialogue. The interesting thing about this data is that it isn’t entirely random.

The Verge reports that, in an effort to give the chatbots consistent personalities, the crowdworkers were instructed to create a short biography for each chatbot. For example, one of the chatbots is based on the following statements: “I am an artist. I have four children. I recently got a cat. I enjoy walking for exercise. I love watching Game of Thrones.”

It’s hardly an award-winning novel, but it does lend a bit of structure and consistency to the chatbots’ conversations, though the approach has some downsides.

While these bots did score well on fluency and maintained a consistent personality, users found them less interesting than A.I. based on movie scripts.
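Facebook’s report doesn’t include code, but the core conditioning idea is easy to sketch: the biography sentences are prepended to the dialogue history, so every reply the model generates is grounded in the same handful of facts. The Python sketch below is a minimal illustration under our own assumptions; `generate_reply` is a hypothetical stand-in for a trained dialogue model, not Facebook’s API.

```python
# Minimal sketch of persona-conditioned dialogue, not Facebook's implementation.
# The persona sentences are prepended to the conversation history so the model
# sees the same self-description on every turn.

persona = [
    "I am an artist.",
    "I have four children.",
    "I recently got a cat.",
    "I enjoy walking for exercise.",
    "I love watching Game of Thrones.",
]

history = []  # alternating user / bot utterances

def generate_reply(model_input: str) -> str:
    """Hypothetical stand-in for a trained dialogue model."""
    return "I mostly paint these days, but the kids and the cat keep me busy."

def respond(user_message: str) -> str:
    history.append("User: " + user_message)
    # Condition generation on persona + full history; the fixed persona is what
    # keeps the bot's facts about itself consistent across turns.
    model_input = "\n".join(persona + history + ["Bot:"])
    reply = generate_reply(model_input)
    history.append("Bot: " + reply)
    return reply

print(respond("What do you do for a living?"))
```

Because the same persona and the growing history are fed in on every turn, the bot has a fixed set of facts to draw on and a record of what has already been said, which is exactly the consistency and memory the researchers found lacking.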

For now, these chatbots have a long way to go before they can truly imitate human speech, but they are improving.

