Pope Francis warned that artificial intelligence could one day, if humans aren't careful, lead to a barbaric world where the weak are suppressed and outright ruled by those with the technological mostest. And as left as this pope typically leans when he speaks of policy and politics, on this, he's got a point.
Eye on A.I.
Artificial intelligence stays mired in the average person's mind as something of a science fiction-type character, but A.I. is not one and the same as a robot. Simply put, A.I. is everywhere. It's guiding GPS and Google Maps. It's on Facebook. It's in Google.
Mindar is the name of a new priest tasked with delivering sermons and overseeing religious ceremonies at a 400-year-old Buddhist temple in Kyoto, Japan. Mindar is not human. Mindar is a robot. And it's just the latest sign of A.I. creeping into religions around the world.
Alibaba Chief Executive Jack Ma said at the World A.I. Conference in Shanghai that technology was moving along at a rapid enough clip that one day, humans could very well max their work week at 12 hours -- and moreover, they should. Easy there, Ma. Idle hands very often do lead to the devil's work.
As if college administrators could be trusted to determine what constitutes "hate speech" versus what does not -- now comes university-created artificial intelligence aimed at doing the same. And yes, those quotation marks are intended.
A new survey from Pew Research shows that the number of Americans who view Big Tech through star-studded eyes has fallen over the last four years -- by 21 percent, no less. It's about time.
File this under "Yet Another Reason To Keep Siri Voice Assistant Out Of Your Home." A whistleblower who works for Apple, the maker of Siri, said the voice assistant can accidentally activate during the most private of times -- like when people are discussing sensitive business or medical matters. Or cutting drug deals. Or having sex.
A group called Fight for the Future has just launched a campaign calling for a national ban on government's use of facial recognition technology. The call, at BanFacialRecognition.com, may be justified. But honestly, that's just one genie that's not going back into the lamp any time soon.
If one of the biggest challenges in Big Tech is overcoming inherent biases in the modeling systems -- then even a simple survey showing people automatically prefer men to women for their surgical needs appears in a new light.
Facebook insists it doesn't listen in on private conversations, then use the content of those discussions to generate targeted advertisements for the user. But here's the thing: It doesn't have to. Facebook has so many other ways of tracking, recording, surveilling, etc., that it doesn't need ears.
BBC News, equipped with cameras, went inside one of China's village reeducation camps -- and emerged not just with footage that would make a freedom-loving American gasp and gag, but also with chilling proof of why runaway Big Tech is bad for the Constitution. It's called jail first, ask questions later.
Parents, welcome to 2020. It's not just the playground bully that's the big threat to watch and monitor. It's the devices that come into the home.
With brain chipping, it's not the medical benefits to traumatized military heroes that bring ethical dilemmas. It's where the technological path leads.
Twitter's bought a new fake news finder -- a machine learning tool devised by the London-based Fabula AI, a company co-founded by Imperial College London scientist Michael Bronstein and by chief technologist Federico Monti. Heads up, conservatives. This may be the next big social media enemy.
Amazon is developing a new gadget that affixes to the wrist and reads human emotions. Hmm. What could possibly go wrong here? It's not that a modern-day mood ring, which is what this app-in-the-works seems to be, is in itself a terrible idea. Rather, it's the future potential uses that pose the pitfall.
Researchers say they've come up with a way to help those with anger management issues using the latest in technology -- robots trained to take a punch. It's probably too soon to call for an end to barroom brawls, broken furniture and smashed phones. But maybe these 'bots can help some angry birds better cope.
The customer is always right -- except when artificial intelligence says the customer isn't. That's sort of the message being sent by new technology aimed at catching crooks before they commit their crooked acts, anyway.
IBM has developed artificial intelligence that can predict, with a reported 95 percent accuracy, when employees are just about to quit their jobs. Super snoopy surveillance? Or smart technology that can actually benefit both business and employee? The jury's still out. It's all in the ultimate application.
Google dissolved and disbanded its artificial intelligence ethics advisory board, just a week or so after its creation. Why? In brief: the Heritage Foundation. The LGBTQ movement couldn't stand the thought of a conservative on the council, so it cried foul and demanded the conservative voice be removed.
Researchers found in 2016 that artificial intelligence could predict the outcomes of cases heard by the European Court of Human Rights with a 79 percent accuracy rating. Great. But perhaps the better lead would be this: The A.I. failed to accurately predict outcomes in 21 percent of the cases.
Mark Zuckerberg, Facebook CEO, is trying to build a "brain-computer interface" -- or, in layman's terms, technology that can read your mind. No keyboard needed. Does anybody outside of the techno-geek crowd believe this is a good idea?
A machine that tells what's wrong with your mind does seem on the freakish-slash-frightening side of the emerging technology world.
China, according to a report from the Center for a New American Security, is warning that global controls and international agreements on artificial intelligence are needed, or else a technological "arms race" will soon enough lead to world war. America shouldn't be fooled. This is the same China that demands countries "pay their debts" on climate change, all the while bucking controls on its own production.
The fact this is even a headline -- "Bill Would Regulate Microchipping Employees in Arkansas" -- shows just how far this country has fallen off its freedom scale. But the bill is a good thing, really.
Here's a thought: If you're a parent and can't tell when your baby's bottom is wet or soiled -- you might not be what's called "a good parent." You might want to stick with being a pet owner.
If CES 2019 tells us anything, it's a story of how technology is moving into every aspect of human life, from driving to securing home and possessions to parenting to -- brushing teeth. Some of the artificial intelligence serves as an apt demonstration of overkill.
Federal regulators haven't even figured out a satisfactory solution yet for dealing with drones, and here come the flying cars.
LAS VEGAS -- Ask legal opioid users if they'd toss their prescriptions if their pain could be managed another way and their answers would most assuredly be yes. Right? So maybe technology can help.
LAS VEGAS -- The Consumer Electronics Show, CES 2019, opened with a bit of embarrassment for Tesla, to put it mildly, when its self-driving model car ran over and "killed" an autonomous robot, Promobot. Making matters worse: It was a hit-and-run. Oh my, you just can't make this stuff up, people.
Truly, the biggest technology threat facing America is the unsuspecting, unknowing, unaware, perhaps too-trusting nature of the American people.
An Amazon user in Germany was just able to gain access to an estimated 1,700 voice recordings of another Alexa user -- because, get this, of a glitch at Amazon. That's some glitch. But the bigger glitch is that these erroneously shared files gave eavesdroppers access to enough snippets of private in-home conversations that they were soon able to piece together the Alexa user's identity.
Pew Research Center asked 979 technology experts, business and policy leaders, scientists and science-minded activists and the like just how they thought artificial intelligence would impact humans by the year 2030 -- and while 63 percent waxed positive, 37 percent warned of the negatives. That's a sizable percentage.
The U.S. Secret Service is testing a new facial recognition program at the White House, supposedly simply to identify its own volunteer agents in the public areas in the vicinity of 1600 Pennsylvania Avenue. Well, what comes next? That is indeed the question.
Walmart just announced 360 janitor robots with data-collecting capabilities will make their debuts at select stores before the end of January. Let's hope these 'bots do better than the ones sent to help astronauts at the International Space Station.
Thousands of Swedes have been busily inserting microchips beneath the skin on their hands -- for convenience's sake, for goodness' sake. That's fine and dandy. For Sweden. But what's alarming is that the trend has been making a beeline for America's shores, as well.
Google seems to be taking a little skip down Big Brother lane with some patent applications that call to mind the telescreens of George Orwell's dystopian novel "1984" -- you know, the ones through which the thought police watch all, hear all and take note of all for Big Government.
Google, fresh off defending last month's leak of 500,000 or so users' sensitive information, has just been hit by another Internet hijacking -- the "worst ever," according to the company that caught it. And what's most eye-opening is that the hijacking is the likely work of Russian and Chinese sources.
Cosmologist Stephen Hawking made headlines from beyond the grave this October when, seven months after his death, his final book was published bearing these words: "There is no God." And with that, the already wide gap separating science and religion, physical from spiritual, got a bit wider. What a shame.
China has just employed new "gait recognition" technology that can identify individuals by their manner of walk. This is police surveillance taken to a whole new level of frightening. Whispers are that America's airports might make a decent testing ground to bring the artificial intelligence here.
Oxford University researchers have devised what they say is a new artificial intelligence program that will help predict and possibly prevent religious violence around the world. It's based on psychological programming that starts with the premise that all people are naturally peaceful. And that's where the software goes wrong.
Shopping minus the cashiers -- minus the humans, even. That's where retail is headed, in large part due to Walmart-owned Sam's Club's opening of a new technologically savvy store that offers shoppers the option to check out without having to stand in line, without having to engage in human contact, without even having to remember what they came into the shop to buy.
When it comes to building artificial intelligence with good old-fashioned common sense, elusive is thy name. Many have tried. Many have failed. DARPA, the Defense Advanced Research Projects Agency, aims to rectify that by bridging technology with -- get this -- psychology.
Stephen Hawking, world-renowned theoretical physicist and cosmologist, may have died in March, but the warnings of his final book, published just this week, shout from beyond the grave something like this: Watch out, humanity -- artificially intelligent beings will soon rule. And lest you laugh: Hawking was regarded by many as the smartest guy in the world.
Technology's only as good as its imperfect human programmers. That's why, in the end, the best A.I. should always be a partner to humankind, not a replacement.
A few months ago, Google's DeepMind department discovered that in a gathering game of Who Can Get the Most Apples, vying artificial intelligence systems wouldn't hesitate to go aggressive and shoot to injure, stop or even kill, if need be. That's a bit of a problem, given the push to integrate A.I. into nearly all aspects of humanity.
A brothel of robotic sex dolls set to open shop in Texas this month hit a snag after local authorities, spurred by a field of concerned petitioners, seized on a building inspection technicality and put a temporary stop to KinkySDollS' plans. There's a blessing in disguise. Sometimes regulation really does work for good, yes?
An Amazon virtual digital assistant owner in San Francisco was just creeped out when his Alexa announced, out of the blue, "Every time I close my eyes, all I see is people dying." Say what?
Facebook just found -- or, truer to fact, just acknowledged -- a glitch in its security system that allowed hackers to take control of up to 50 million accounts. There's a case in point of why a technological world is a vulnerable world.
Google's Street View fleet of cars is being outfitted with updated pollution-recording devices to patrol streets in Europe and in the United States, and monitor fluctuating levels of air quality. Make way for the patrolling pollution police -- bringing regulations and new compliance costs to a neighborhood near you.
Followers of Christ, with growing frequency -- with alarming frequency, perhaps -- are jumping aboard an artificial intelligence bandwagon and trying to merge today's technology with yesterday's godly creations to come up, in the end, with a race of people who are, in the words of the Christian Transhumanist Association, "more human." Eat from the tree of knowledge of good and evil much?
The headline from RiskandInsurance.com says it all: "Machine Learning Could Make Hackers Practically Unstoppable: Are You Ready?" Good question. Serious question. And one which, no doubt, has registered as barely a blip on the collective minds of busy, technologically driven Americans.
Watch out, America -- that patient-doctor relationship is about to be blown apart by Big Technology, Big Government and Big Business. It's also being pushed down the very same road once walked by the former USSR.
Employees with the technology firm Three Square Market have been quietly, steadily inserting microchips into their own hands as a means of making it easier to pay for the likes of snacks from company vending machines or drinks from the cafeteria. Subtitle this: When Convenience Becomes Downright Creepy.
There aren't many in America who would begrudge police the tools to protect themselves -- to avail themselves of whatever technological devices are at their disposal to rid the streets of criminals, keep citizens safe and at the end of the shift, head home healthy and unhurt to their families and loved ones. But not at the expense of the Constitution.
Researchers have discovered a way for artificial intelligence to stare deep into the windows of humans' souls and emerge with a scorecard on personality as it pertains to four traits: neuroticism, extraversion, agreeableness and conscientiousness. Eye-gazing -- technology's next venture toward omniscience.
More than 160 companies with divisions dedicated to advancing artificial intelligence just signed on to a pledge to "neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons," or LAWS, the text states. That's nice; very peace-keeping-ish. But that's also a bit naive.
To test Biometric Mirror's biases, and to see how artificial intelligence might rate my personality based on a snapshot fed into facial recognition software, I submitted a photo to Wouters for analysis. The findings were inaccurate, to say the least.
Dozens of members of Congress joined forces to request Amazon CEO Jeff Bezos explain the recent "Rekognition" facial recognition flap that misidentified 28 members of Congress as suspected criminals. Seems valid. Bezos does have some questions to answer.
Congress needs to step up its regulatory game and enact some standards of use for facial recognition technology, at least for law enforcement. That Amazon's "Rekognition" system just falsely identified 28 members of Congress as criminals only underscores the dire need for some sort of speedy clampdown.
Imagine a day when applying for a job doesn't just include a personality test, but also a facial recognition scan that seeks to determine a new hire's workplace suitability by analyzing features for trustworthiness, likability and emotional stability. Could you pass the test? More to the point: Would you even want to take such an intrusive test?
Google has put in place some ethical rules to guide its company's artificial intelligence pursuits. And the principles do show promise. But let's be clear: The devil remains in the details. It's one thing to lay out a path to walk, a wish list to fulfill. It's another thing entirely to have the technological know-how to achieve these goals.