
Most Americans are recorded 238 TIMES a week by security cameras, study reveals 


The typical American is recorded by security cameras 238 times a week, according to a new report from Safety.com.

That figure includes surveillance video taken at work, on the road, in stores and in the home. 

The study found that Americans are filmed 160 times a week while driving, with an average of about 20 cameras along a 29-mile span of road.

And the average employee is spotted by surveillance cameras 40 times a week.

However, for those who frequently travel or work in highly patrolled areas, the number of times they are captured on film skyrockets to more than 1,000 a week.


Security cameras record the average American 238 times a week, according to a new report, including 14 times a week by wireless doorbell cameras like Amazon's Ring device.


Safety.com, an independent site that reviews safety products and technology, warns that it’s difficult to know how many traffic cameras are just passively filming or permanently storing footage.

Cameras are also frequently installed in stores, transportation hubs, nightclubs and elsewhere. 

The average employee is filmed 40 times a week at or around their job, though it can vary tremendously depending on the environment.

Retail employees might be filmed hundreds of times a week but in an office situation ‘there might be a single camera at the entrance, if at all,’ the researchers said.

The typical driver will pass more than 20 cameras every day, according to Safety.com. The site warns that it's hard to know how many traffic cameras are just passively filming or permanently storing footage


‘We took this into account as best as possible to find the most accurate average,’ the team added.

The team said people drastically underestimate how much they’re being recorded. A 2016 survey from the video-surveillance publication IPVM found that the majority of people assumed they were being recorded less than five times a day.

The publication put the number closer to 50, though it did not include street and traffic cameras.

The growing surveillance state has drawn concern from lawmakers and civil rights advocates alike.

By next year, there will be an estimated one billion security cameras around the globe, CNBC reports, with 10 percent to 18 percent of them in the US alone.

Sales of wireless doorbell cameras are expected to soar from 3.9 million units in 2019 to 5.6 million in 2023. A recent study showed criminals can determine if a homeowner is away just by analyzing the rate at which their camera uploads data to the Internet

In 2019, there were 70 million security cameras in the US, or at least one for every 4.6 Americans.

That’s the second-highest ratio after China, which has a camera for every 4.1 people. 

‘We expect this number to continually increase and normalize the presence of security cameras as technology and facial recognition improves,’ Safety.com researchers said in a statement.  

Doorbell cameras are a fast-growing segment of surveillance technology, with 3.9 million US homes already owning them, according to Statista, and 5.6 million expected to own them by 2023.

The average American is on film in their house or neighborhood 14 times a week. 
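Taken together, the location-by-location figures quoted above cover most, but not all, of the 238-recording weekly total. The short sketch below simply tallies them; attributing the remainder to stores, transport hubs and other public spaces is an assumption, since the report does not break that portion out.

```python
# Rough reconciliation of the weekly figures quoted in this article.
# The per-location numbers are the ones cited above; treating the leftover
# as stores, transit hubs and other public spaces is an assumption.
WEEKLY_TOTAL = 238

by_location = {
    "driving": 160,               # roughly 20 roadside cameras passed per day
    "at or around work": 40,      # varies widely by workplace
    "home and neighborhood": 14,  # doorbell cameras such as Ring
}

accounted = sum(by_location.values())
print(f"Accounted for: {accounted} of {WEEKLY_TOTAL} recordings per week")
print(f"Remainder (stores, transit and other public spaces): {WEEKLY_TOTAL - accounted}")
```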

Not only do smart home security cameras raise privacy issues, they can actually put owners in jeopardy.

According to a recent study, criminals can determine if someone is home just by tracking data from their wireless cameras.

This was done not by watching the footage itself but by looking at the rate at which the cameras uploaded data via the Internet.

Researchers from the Chinese Academy of Science and Queen Mary University of London found that future activity could be predicted based on past patterns.
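To make the idea concrete, the sketch below shows, in highly simplified form, how an observer who can see only a camera's upload volumes, never the footage, might infer when something is happening in front of it: motion-triggered cameras push far more data during activity than when idle. The sample figures and the threshold are illustrative assumptions, not values from the study.

```python
# Minimal sketch of the traffic-analysis idea described above: infer household
# activity purely from how much a camera uploads, without seeing any footage.
# The hourly figures and the idle threshold below are made up for illustration.

hourly_upload_mb = {   # hour of day -> megabytes uploaded by the camera
    7: 52, 8: 61, 9: 4, 10: 3, 11: 2, 12: 3,
    13: 2, 14: 3, 15: 5, 16: 48, 17: 66, 18: 71,
}

IDLE_BASELINE_MB = 10  # motion-triggered cameras upload little when nothing moves

active_hours = sorted(h for h, mb in hourly_upload_mb.items() if mb > IDLE_BASELINE_MB)
quiet_hours = sorted(h for h, mb in hourly_upload_mb.items() if mb <= IDLE_BASELINE_MB)

print(f"Likely activity around the camera: {active_hours}")
print(f"Likely nobody about (camera mostly idle): {quiet_hours}")
```

Repeated over several days, this produces exactly the kind of recurring pattern the researchers warn can be used to predict when a home is likely to be empty.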

‘Once considered a luxury item, these cameras are now commonplace in homes worldwide,’ said co-author Gareth Tyson, a computer science professor at Queen Mary University of London. ‘As they become more ubiquitous, it is important to continue to study their activities and potential privacy risks.’ 

At its annual hardware event on Thursday, Amazon unveiled the Ring Always Home Cam, a security camera stationed atop a flying drone.

Its camera streams a live view from inside the home to the user’s smartphone, following a predetermined flight path, and can capture footage from multiple viewpoints.

HOW DOES FACIAL RECOGNITION TECHNOLOGY WORK?

Facial recognition software works by matching real time images to a previous photograph of a person. 

Each face has approximately 80 unique nodal points across the eyes, nose, cheeks and mouth which distinguish one person from another. 

A digital video camera measures the distance between various points on the human face, such as the width of the nose, depth of the eye sockets, distance between the eyes and shape of the jawline.

A different smart surveillance system (pictured) that can scan 2 billion faces within seconds has been revealed in China. The system connects to millions of CCTV cameras and uses artificial intelligence to pick out targets. The military is working on applying a similar version of this with AI to track people across the country

This produces a unique numerical code that can then be linked with a matching code gleaned from a previous photograph.
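In outline, that matching step amounts to comparing two lists of measurements and checking how close they are. The sketch below illustrates the idea with made-up numbers and an arbitrary threshold; real systems rely on far richer features and learned models rather than a handful of hand-picked distances.

```python
import math

# Toy illustration of the matching step described above: a face is reduced to a
# vector of measurements (distances between nodal points), and two faces match
# when the vectors are sufficiently close. All values here are made up.

def face_signature(eye_distance, nose_width, socket_depth, jaw_width):
    """Pack facial measurements (in millimetres) into a feature vector."""
    return (eye_distance, nose_width, socket_depth, jaw_width)

def signature_distance(a, b):
    """Euclidean distance between two signatures."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

stored = face_signature(63.0, 34.0, 27.0, 118.0)  # from a previous photograph
live = face_signature(62.5, 34.5, 26.8, 117.2)    # from the current video frame

MATCH_THRESHOLD = 2.0  # arbitrary; real systems tune this against error rates
print("Match" if signature_distance(stored, live) < MATCH_THRESHOLD else "No match")
```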

A facial recognition system used by officials in China connects to millions of CCTV cameras and uses artificial intelligence to pick out targets.

Experts believe that facial recognition technology will soon overtake fingerprint technology as the most effective way to identify people. 


This post first appeared on dailymail.co.uk


US election: Experts ‘easily hijack’ the apps of both Biden and Trump


Security experts reported being able to ‘easily hijack’ the 2020 election apps of both US President Donald Trump and his opponent, Democratic Party nominee Joe Biden.

The exposure of the vulnerabilities came just days after Mr Trump falsely claimed that ‘nobody gets hacked’ during a campaign event in Tucson, Arizona, on Monday.

‘To get hacked you need somebody with 197 IQ and he needs about 15 per cent of your password,’ the President had continued. 

Mr Trump’s comments — which experts branded ‘dumb’ — were made despite the fact that both his Twitter account and his hotel chain have previously been hacked.

Norwegian app security firm Promon used a well-known vulnerability in the Android operating system to add fake screens to the two candidates’ election apps.

While their additions were comical — showing Mr Biden in a ‘MAGA’ cap and making Mr Trump’s app fundraise for his opponent — the exploit may be used maliciously.

For example, hackers can easily force vulnerable apps to prompt users into handing over sensitive information — such as usernames, passwords or even credit card details.


Security experts reported being able to 'easily hijack' the 2020 election apps of both US President Donald Trump and his opponent, Democratic Party nominee Joe Biden. The exposure of the vulnerabilities came just days after Mr Trump, pictured, falsely claimed that 'nobody gets hacked' during a campaign event in Tucson, Arizona, on Monday

‘The President’s statement sadly reflects a widely believed sentiment that secure passwords will protect you from hackers and that hacking, in general, doesn’t affect the average citizen,’ said Promon Chief Technology Officer Tom Lysemose Hansen.

‘Sadly, this isn’t the case. Absolutely nothing is “unhackable” and even the most secure, high profile accounts are vulnerable should the user fall victim to a phishing attack which seeks usernames and passwords.’

‘The claim that “nobody gets hacked” is simply untrue and — given the influence of the President — can have dangerous impacts on the behaviour of hundreds of thousands of people,’ he added.

The nature of cybercrime is constantly evolving, Mr Lysemose Hansen warned — adding that malicious attacks often take advantage of current events, such as the US election, or prominent figures to maximise their chances of success.

He added that for ‘security-sensitive apps — such as banking or medical apps — implementing protocols that prevent spyware from spoofing or recording what happens on the app’s screen is crucial if developers are to [stop] hackers.’

Google — and similar operators of other online app and software stores — face ongoing challenges from hackers who aim to sneak their malicious programs onto user devices to secretly harvest sensitive and personally identifiable information.

Promon, however, have some tips to help you remain secure online. 

‘We would advise that users always keep their devices up-to-date and running the latest firmware and that they only ever download apps created by trusted developers,’ Mr Lysemose Hansen said.

‘One way to check this is to see if the developer has created any other apps and check the reviews for any and all apps they have developed.’

Norwegian app security firm Promon used a well-known vulnerability in the Android operating system to add fake screens to the two candidates' election apps. While their additions were comical — showing Mr Biden in a 'MAGA' cap (right) and making Mr Trump's app fundraise for his opponent (left) — the exploit may be used maliciously. For example, hackers can easily force vulnerable apps to prompt users into handing over sensitive information — such as usernames, passwords or even credit card details

The technical expertise required to hack the election apps was minimal, Mr Lysemose Hansen added — and certainly does not call for an IQ of 197.

‘Regardless of whether you are new to the world of hacking or are a world-leading security researcher, it is not difficult to hack these apps,’ he explained.

‘Due to this critical Android vulnerability being so well-known, hackers can easily hijack these apps and overlay fake screens.’

These, he added, ‘can depict anything the attacker wants, including screens that ask the user to hand over sensitive information, such as usernames and passwords.’ 

WHICH SMART HOUSEHOLD GADGETS ARE VULNERABLE TO CYBER ATTACKS?

From devices that order our groceries to smart toys that speak to our children, high-tech home gadgets are no longer the stuff of science fiction.

But even as they transform our lives, they put families at risk from criminal hackers taking advantage of security flaws to gain virtual access to homes.

A June 2017 Which? study tested whether popular smart gadgets and appliances, including wireless cameras, a smart padlock and a children’s Bluetooth toy, could stand up to a possible hack.

The survey of 15 devices found that eight were vulnerable to hacking via the internet, Wi-Fi or Bluetooth connections. 

Scary: Which? said ethical hackers broke into the CloudPets toy and made it play its own voice messages. They said any stranger could use the method to speak to children from outside


The test found that the Fredi Megapix home CCTV camera system operated over the internet using a default administrator account without a password, and Which? found thousands of similar cameras available for anyone to watch the live feed over the internet.

The watchdog said that a hacker could even pan and tilt the cameras to monitor activity in the house.

SureCloud hacked the CloudPets stuffed toy, which allows family and friends to send messages to a child via Bluetooth, and made it play its own voice messages.

Which? said it contacted the manufacturers of eight affected products to alert them to flaws as part of the investigation, with the majority updating their software and security. 


This post first appeared on dailymail.co.uk


Democratic congresswoman gets one of the largest Twitch streams ever


A live-stream of Democratic congresswoman Alexandria Ocasio-Cortez playing the popular game ‘Among Us’ amassed almost half a million viewers last night. 

During the event, which lasted for more than three hours, the politician urged people to vote in the upcoming US election. 

The stream of Ocasio-Cortez, who is known by her initials AOC, has become the third largest on the Amazon-owned online gaming site, Twitch.  

Among Us is a multiplayer social deduction game set in space where players have to uncover murderous impostors who attempt to sabotage a mission. 

Twitch confirmed to Mashable that the broadcast peaked at 439,000 views.

The record of 667,000 is held by popular streamer Ninja, who set it when he teamed up with rapper Drake for a game of Fortnite in 2018.


Alexandria Ocasio-Cortez (AOC) had 439,000 views on video live streaming service Twitch when she played multiplayer game Among Us


AOC had tweeted earlier in the week: ‘Anyone want to play Among Us with me on Twitch to get out the vote? (I’ve never played but it looks like a lot of fun).’ 

For the session, she was joined by popular Twitch streamers Pokimane and HasanAbi, and the stream kicked off around 8:40pm ET on Tuesday (1:40am BST Wednesday).

The New York representative spent the start of the broadcast trying to work out how the game works and fretting that she couldn’t kill anyone. 

‘Oh my gosh guys, I can’t believe I have to kill people in this,’ she said. 

Alexandria Ocasio-Cortez seen here during a hearing before the House Oversight and Reform Committee on August 24, 2020 in Washington, DC


In Among Us, one or two players are randomly selected as ‘impostors’ who must then sabotage a team of crewmates by bloodily murdering them. 

Ocasio-Cortez was initially nervous at the prospect of being the duplicitous impostor during the session.  

‘Guys, I really don’t want to be impostor, please don’t let me be impostor – I’m so nervous,’ she said.

However, she was designated as an impostor in her very first game and rose to the task by slaughtering her teammate and pretending to have found the body. 

Video shows musician Maia being brutally killed by AOC within the first five minutes.

Alexandria Ocasio-Cortez as she appeared during the gameplay. Among Us is an online multiplayer social deduction game where players are tasked with being murderous 'impostors'


AOC was joined by fellow congresswoman Ilhan Omar during the stream and both politicians took time during the broadcast to encourage viewers to vote.

Her Twitch page included a link to iwillvote.com, where Americans can register to vote for the presidential election on November 3.   

‘Make sure that you have your voting plan put together,’ AOC said.

‘Figure out if you want to vote early, mail in, in-person, day of. Make your plan and stick to it… let’s all participate in this election and save our democracy!’

AOC later tweeted: ‘Thank you so much for joining… I had a blast.’  

AOC as she appears following her murder and the discovery of the body on the Twitch game Among Us


Congresswoman Ilhan Omar (pictured) joined AOC and popular Twitch personalities on Tuesday night


For the session, she was joined by popular Twitch streamers Pokimane and HasanAbi and the stream kicked off around 8:40pm ET on Tuesday (1:40am BST Wednesday)

This post first appeared on dailymail.co.uk


Disturbing deepfake tool on popular messaging app Telegram is forging NUDE images of underage girls


Photos that underage girls share on their social media accounts are being faked to appear nude and shared on the messaging app Telegram, a new report has discovered.

The disturbing images are created using a simple ‘deepfake’ bot that can virtually remove clothes using artificial intelligence, according to Sensity, the deepfake-research company behind the report.

More than 100,000 non-consensual sexual images of 10,000 women and girls, created using the bot between July 2019 and 2020, have been shared online.

The majority of the victims were private individuals with photos taken from social media – all were women and some looked ‘visibly underage’, Sensity said.

Sensity says what makes this bot particularly scary is how easy it is to use: the user just uploads an image of a girl and clicks a few buttons, and it then uses its 'neural network' to determine what would be under the clothes and produce a nude

This form of ‘deepfake porn’ isn’t new; the technology behind this bot is suspected to be based on a tool produced last year called DeepNude.

DEEPFAKES USE AI TO CREATE MANIPULATED MEDIA CONTENT

Deepfakes are so named because they are made using deep learning, a form of artificial intelligence, to create fake videos and images.

They are made by feeding a computer an algorithm, or set of instructions, as well as lots of images and audio of the target person.

The computer program then learns how to mimic the person’s facial expressions, mannerisms, voice and inflections.

With enough video and audio of someone, you can combine a fake video of a person with fake audio and get them to say anything you want.

At a simpler level it can also be used to remove clothing from a photo of a person fully dressed or make someone appear to be in a place they shouldn’t.

It has been described as ‘photoshop on steroids’ by experts.


The artificial intelligence service was launched online and was relatively complicated to use, but allowed people to upload a photo of a woman and the AI would determine what that image would look like if the clothes were removed. 

It was removed from the internet within 24 hours, but Sensity suspect that this new bot is based on a cracked version of that technology. 

Sensity says what makes this bot particularly scary is how easy it is to use: the user just uploads an image of a girl and clicks a few buttons, and it then uses its ‘neural network’ to determine what would be under the clothes and produce a nude.

‘The innovation here is not necessarily the AI in any form,’ Giorgio Patrini, CEO of deepfake-research company Sensity and co-author of the report, told CNET.

‘It’s just the fact that it can reach a lot of people, and very easily.’

Deep fakes are computer-generated and often very realistic images and videos that are produced from a real world template. They’ve been used to manipulate elections, for pornography and to promote misinformation. 

Patrini says this new move, to use photos of private individuals and ‘fake them’ to appear nude, is relatively new and puts anyone with a social media account at risk. 

The bot, which hasn’t been named, runs on the Telegram private messaging platform, which heavily promotes the idea of free speech.  

The bot’s administrator, known as ‘P’, told the BBC the service was purely for entertainment and that it ‘does not carry violence’.

‘No one will blackmail anyone with this, since the quality is unrealistic,’ he said, adding that any underage images are removed and the user blocked for good.

The bot network, where the images are produced and shared, was found to have over 100,000 members, mostly based in Russia or Eastern Europe, Sensity found.

About 70 per cent of all of the images used in the app came from social media or private sources – such as pictures of friends or people the users know.

‘As soon as you share images or videos of yourself and maybe you’re not so conscious about the privacy of this content, who can see it, who can steal it, who can download it without you knowing, that actually opens the possibility of you being attacked,’ Patrini told Buzzfeed News.

The bot was primarily advertised on the Russian social networking service VK. The social platform said it doesn’t tolerate that content and removes it when found.

Users upload a photo from social media of a woman or girl, the bot uses artificial intelligence to determine what it 'could' look like under the clothes and creates a fake nude


Users of the service are primarily based in Russia and Eastern Europe and have shared more than 100,000 images of over 10,000 women and girls since July 2019


‘Many of these websites or apps do not hide or operate underground, because they are not strictly outlawed,’ Patrini told the BBC.

‘Until that happens, I am afraid it will only get worse.’

The authors also expressed concern that as the deep fake technology improves, these sort of bots could be used for the extortion of women.

While most deep fakes until now have focused on celebrities or politicians, the users of this app seem more interested in pictures of people they know.

A survey of bot users by Sensity found that 63 per cent were using it to get an idea of what women they know look like without clothes on. 

Two eerily realistic videos featuring Boris Johnson (right) and rival Jeremy Corbyn (left) endorsing each other have been released by a thinktank to highlight the spread of deep-fake technology

It is more common for celebrities and politicians to be targeted with deepfakes. During the last UK general election videos were produced that appeared to show Jeremy Corbyn (left) and Boris Johnson (right) endorsing one another 

Report authors have shared their findings with law enforcement agencies, VK and Telegram but haven’t had any response to their concerns. 

‘Our legal systems are not fit for purpose on this issue,’ said Nina Schick, author of Deep Fakes and the Infocalypse, speaking to the BBC.

‘Society is changing quicker than we can imagine due to these exponential technological advances, and we as a society haven’t decided how to regulate this.

‘It’s devastating, for victims of fake porn. It can completely upend their life because they feel violated and humiliated.’

The full report is available from Sensity.

This post first appeared on dailymail.co.uk
