How to Protect Yourself from Camera and Microphone Hacking

Rumors of camera and microphone hacking on Apple devices have circulated for years, and a vulnerability in Apple's Walkie-Talkie audio chat feature recently led the company to disable it. The video- and audio-chat app Zoom also had a flaw that allowed a video call to be opened on a user's machine even after the app had been deleted. Uncannily well-targeted advertisements have added to the fears of those who believe they are being listened to or recorded. Here are a few tips to stay safe on your devices:

  • When possible, use an app's web version instead of downloading it; Skype and Zoom, for example, both offer their core features in the browser. This decreases your attack surface, in turn reducing the number of vulnerabilities for hackers to latch onto.
  • Check your device's permissions and privacy settings! Many apps request access to the camera, microphone, location, and more; knowing which permissions are enabled and disabling the unnecessary ones reduces the number of direct connections to your device.
  • Keep devices and apps up to date. Manufacturers regularly issue updates that improve privacy protections, especially once flaws are identified.
  • An easy additional way to protect your privacy is to cover your laptop or device camera with a sticker or tape; this is not practical for microphones, so a microphone blocker/plug may be more helpful there.

Reference:

https://www.consumerreports.org/privacy/how-to-protect-yourself-from-camera-and-microphone-hacking/

Bulgaria Detains Cybersecurity Employee for Tax Data Hacking

A 20-year-old employee of a Bulgarian cybersecurity company has been detained as the suspect allegedly behind the hacking of the national revenue agency, an attack that leaked personal and corporate data on 5 million of Bulgaria's 7 million people. Prime Minister Boyko Borissov later said that the suspect had simply been trying to prove his computer skills, but that he should have put them to work for the state instead of causing harm.

Reference:

https://www.voanews.com/europe/bulgaria-detains-cybersecurity-employee-tax-data-hacking/

Microsoft Has Warned 10,000 Victims of State-Sponsored Hacking

Microsoft has revealed that 10,000 of its users, mostly enterprise customers, were either targeted or compromised by hackers working for foreign governments. The disclosure demonstrates the extent to which nation-states continue to rely on cyberattacks to gather intelligence. So far, Microsoft's AccountGuard technology has issued 781 notifications of state-sponsored attacks on organizations, most of them based in the U.S. The biggest suspects are Russia, North Korea, and Iran. Microsoft called out and pursued action against the hacker groups APT33, APT35, and APT28, also known as Holmium, Phosphorus, and Fancy Bear, respectively. Many of the attacks appear to be part of ongoing efforts to undermine the democratic process, and attacks on the 2020 presidential election are expected.

Reference:

https://techcrunch.com/2019/07/17/microsoft-state-sponsored-hacking/

Hackers are Stealing Years of Call Records from Hacked Cell Networks

Boston-based company Cybereason has discovered a breach of multiple phone providers' networks, carried out by an unknown group of hackers who have been conducting targeted surveillance on individuals of interest for seven years. The group hacked ten cell networks around the world to obtain massive amounts of call records, including time, date, and location, on about 20 individuals. The hackers can track the immediate location of victims, who include spies and politicians, and can build detailed pictures of their personal lives through call detail records (CDRs), the highly detailed metadata logs generated by all phone providers. CDRs have also been used by the National Security Agency, which has caused quite a stir regarding the legality of that practice.

The hackers are thought to start from vulnerabilities such as a weak spot on an internet-connected web server, then steal credentials from machines to get progressively deeper access until they eventually reach the domain controller, making it unnecessary to deploy malware on each device. The target's data is then compressed, and a virtual private network connection is created on one of the cell provider's compromised servers, similar to leaving a bookmark in an unfinished book, so the hackers can pick up where they left off without having to find their way back in each time. The hackers' knowledge of similar providers' networks allows for quicker, more efficient attacks on large and small companies alike.

No cases have been detected in North America so far, but Cybereason sounded the alarm to alert other telecom companies. There is a strong belief that the culprit may be a hacker group backed by China, but attribution is a delicate topic amid speculation about Huawei, the Chinese telecoms giant accused by U.S. authorities of being a proxy for China's cyberspies, and U.S. accusations that China has broken its anti-hacking deal with the U.S.
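To make the surveillance value of CDRs concrete, here is a minimal sketch of what such a record contains and how a coarse movement trace falls out of it. The field names are illustrative assumptions; real provider CDRs carry many more fields (cell IDs, routing data, and so on):

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical, heavily simplified call detail record (CDR).
@dataclass
class CallDetailRecord:
    caller: str
    callee: str
    start: datetime
    duration_s: int
    cell_tower: str  # coarse location of the handset during the call

def movement_trace(records, subscriber):
    """Reconstruct a coarse location history for one subscriber
    from the towers their calls were routed through."""
    calls = [r for r in records if subscriber in (r.caller, r.callee)]
    calls.sort(key=lambda r: r.start)
    return [(r.start, r.cell_tower) for r in calls]
```

Even without call content, sorting one subscriber's records by time yields a tower-by-tower trail of where they were, which is exactly why bulk CDR theft amounts to location surveillance.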

Reference:

https://techcrunch.com/2019/06/24/hackers-cell-networks-call-records-theft/

The Gender Gap in Computer Science Research Won’t Close for 100 Years

The gender gap in many fields is becoming increasingly evident, adding to the growing pressure to address gender inequality. Women remain underrepresented in the sciences, and especially in computer science, where the low numbers are reinforced by the biases of the men who run conferences and edit scientific journals. Large technology companies face pressure both to address sexual harassment in the workplace and to fix the lack of representation of women and minorities in a white- and male-dominated workforce. There are also suspicions that the algorithms used for hiring workers build in biases, on top of the ongoing debate over whether A.I. technologies such as facial recognition can themselves become platforms for bias.

In 2018, almost three times as many computer science articles by male authors were published as by female authors, showing the disparity between the sexes in the field. By tracking the year-over-year change in the number of female authors, researchers project that gender parity will not be reached until 2137, far later than in other fields (2048 for biomedicine, for example); they caution that it is also possible parity will never be reached. The issue affects industry as well as academia, where a shortage of female faculty leaves fewer women to teach and mentor the next generation; studies show that this makes women less likely to enter, stay in, and become leaders in science and mathematics. To make matters worse, studies also show that male researchers are increasingly unwilling to collaborate with female colleagues.

Reference:
https://www.nytimes.com/2019/06/21/technology/gender-gap-tech-computer-science.html?ref=oembed

Better Cybersecurity Even in Harsh Environments

Keeping personal and sensitive data safe and secure is of crucial importance. Unfortunately, many – if not most – people don’t realize this until it’s too late. But when it comes to data protection, the ones who need to worry about it the most are military personnel – not only could the military lose enormous amounts of money if some types of information were to get into the wrong hands, but human lives could be in danger too.
Traditional methods for data protection involve installing software, which, although useful, is not an ideal approach: it requires frequent updates and large amounts of computational power, and it is far from 100% secure.
But there is a new, alternative security method based not on software but on hardware: physically unclonable function (PUF) devices. When this hardware is fabricated, random physical variations occur that are impossible to clone or copy, making the technology extremely secure and well suited for military use. The problem is that current PUF devices are sensitive to harsh environments, and since many military installations sit in places with severe weather, PUFs have not seemed like a great choice after all.
The good news is that scientists have now created new PUF hardware that is based on nano-electromechanical switches that can withstand exposure to high temperatures, microwaves, and high-dose radiation. Furthermore, in the case of a data breach, the technology can self-destruct.
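The challenge-response idea behind PUFs can be sketched in a few lines. This is a toy model only: the uncopyable physical variation is stood in for by a random byte string, which a real PUF would never store digitally:

```python
import hashlib
import os

class ToyPUF:
    """Toy model of a physically unclonable function. In real hardware the
    secret is physical variation fixed at fabrication time and cannot be
    read out or copied; here a random byte string stands in for it."""

    def __init__(self):
        self._physical_variation = os.urandom(32)  # set once, unique per device

    def response(self, challenge: bytes) -> bytes:
        # The device maps any challenge to a device-specific response.
        return hashlib.sha256(self._physical_variation + challenge).digest()

# Enrollment: a verifier records challenge/response pairs from the genuine device.
device = ToyPUF()
challenge = b"auth-challenge-001"
enrolled = device.response(challenge)

# Authentication: the same device reproduces the response; a clone cannot.
assert device.response(challenge) == enrolled
assert ToyPUF().response(challenge) != enrolled  # different physical variation
```

The security rests on the response function being bound to physics rather than to stored bits, which is why no software update cycle or key database is needed.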

Source:
American Chemical Society via ScienceDaily (https://www.sciencedaily.com/releases/2017/12/171213095359.htm)

Keeping Data Private With AI

Artificial intelligence (AI) usually takes advantage of machine learning, a field of computer science that enables computers to learn from examples, without explicit programming. So what if we combined the power of AI with data safety? A team of researchers at the University of Helsinki, Finland, has recently done just that. They developed a new privacy-aware machine learning method that keeps data private and secure.
Most sensitive data needs to be private and safe, but this is especially true with data used in applications concerning human behavior and health. However, current security and privacy methods don’t allow for complete privacy – one party with unrestricted access to all data is always needed.
Things may change with the new privacy-aware machine learning methods. These approaches are based on the concept of differential privacy, meaning they can reveal only limited information on each data subject. Here is how Assistant Professor Antti Honkela of the University of Helsinki explains it: “Our new method enables learning accurate models, for example, using data on user devices without the need to reveal private information to an outsider.”
Interestingly, the new method is not only applicable to data protection – the team also used it to predict cancer drug efficacy using gene expression. And unlike other AI-based privacy models, this method is able to learn from smaller data, which makes everything easier.
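The core idea of differential privacy can be illustrated with the classic Laplace mechanism, a generic sketch rather than the Helsinki team's actual method: noise calibrated to the query's sensitivity is added so that any one person's presence in the data changes the answer's distribution only slightly:

```python
import random

def dp_count(values, predicate, epsilon=0.5):
    """Differentially private count: the true count plus Laplace noise
    with scale 1/epsilon (a counting query has sensitivity 1)."""
    true_count = sum(1 for v in values if predicate(v))
    # The difference of two iid Exponential(epsilon) draws is Laplace(1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise
```

A smaller `epsilon` means more noise and stronger privacy; each released answer is accurate on average but reveals only limited information about any individual data subject.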

Reference:
University of Helsinki (https://www.helsinki.fi/en/news/data-science/new-ai-method-keeps-data-private)

Fooling Speaker Recognition Systems Is Not as Hard as It Sounds

Applications that function with voice commands are becoming increasingly popular. Nowadays, pretty much every new smartphone has at least one application that functions using audio signals – users dictate messages, translate words and phrases, do search queries and many other things using only their voice, which explains their popularity. However, although they may appear quite secure, voice applications pose serious security concerns.
A new study conducted by researchers at the University of Eastern Finland (UEF) shows that skillful voice impersonators are able to fool even state-of-the-art speaker recognition systems.
Voice attacks can be done in various ways: using speech synthesis, voice conversion, and replay attacks. Although new techniques and countermeasures against technically generated voice attacks are developed regularly, techniques against voice modifications produced by a human are sorely lacking. In fact, even state-of-the-art speaker recognition systems are not efficient in recognizing voice modifications.
The UEF study found that voice modifications such as impersonation and voice disguise can fool speaker recognition systems quite easily. Analyzing speech from two professional impersonators who mimicked eight Finnish public figures, as well as acted speech from 60 Finnish speakers, the researchers found that the impersonators could fool automatic systems when mimicking some of the speakers. For acted speech, the most effective voice-modification strategy was to sound like a child.
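Why impersonation works can be illustrated with a toy threshold verifier (an illustrative sketch, not the systems under study at UEF): a voice is reduced to a feature vector, and any sample close enough to the enrolled voiceprint is accepted, including a skilled mimic's:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def verify(sample, voiceprint, threshold=0.95):
    """Accept the claimed identity if the sample is similar enough."""
    return cosine(sample, voiceprint) >= threshold

# Toy 3-dimensional "voiceprints" (real systems use far richer features).
target = [0.90, 0.20, 0.40]
impersonator = [0.85, 0.25, 0.42]  # mimicry shifts the features toward the target
stranger = [0.10, 0.90, 0.30]
```

The verifier has no notion of who produced the audio; anything inside the acceptance region passes, which is exactly the gap human impersonators exploit.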

Reference:
University of Eastern Finland via ScienceDaily (https://www.sciencedaily.com/releases/2017/11/171114104831.htm)

Coming Soon: The Ultimate, Atomically Thin Defense Against Hackers

Researchers at New York University Tandon School of Engineering have developed a new technology that could become the next generation of electronic hardware security. The new tech is a novel class of unclonable cybersecurity primitives made of nanomaterial that possesses the highest ability of structural randomness.
The new technology represents the first proof of complete spatial randomness in atomically thin molybdenum disulfide (MoS2). In this kind of security, randomness is extremely desirable because it underpins strong encryption and therefore secure computing.
The team created the material by growing it in extremely thin layers (each about a million times thinner than a human hair). As Assistant Professor of Electrical and Computer Engineering Davood Shahrjerdi explains, by varying the thickness of each of these layers, the researchers tuned the size and type of energy band structure. This tuning is what affects the properties of the material and therefore enables its structural randomness.
Shahrjerdi notes that the new material is unique because at its monolayer thickness, it possesses the optical properties of a semiconductor that emits light, but at multilayer, its properties change and it no longer emits light. So, when exposed to light, the material’s patterns become a one-of-a-kind authentication key that secures hardware components.
But that’s not all – the new material is also inexpensive, requires no metal contacts and can simply be applied to a chip or other hardware component like a stamp to a letter.

Source:
NYU Tandon School of Engineering via ScienceDaily (https://www.sciencedaily.com/releases/2017/11/171129090409.htm)

New Study Shows Websites Track User Activity, Ignoring Privacy Settings

It’s no secret that many websites monitor user activity, their browsing history, and their preferences. This is done so that companies can target their audience and create tailored advertisements. But did you know that some websites also track a user’s mouse movements and their input into a web form even before it’s submitted? According to the results of a new study conducted by Princeton researchers, this is done by some of the world’s top websites.
The research found that hundreds of companies use third-party tracking services so they can monitor exactly how users navigate their websites. For example, a service like this can see a user enter a password into an online form. Websites logging keystrokes is also a thing – services monitor and see what users type even before they submit it and even if they later abandon their input.
This all is a privacy breach because users are not aware that their online behavior is being monitored. But what’s even more worrying is the fact that this information can easily be abused: “Collection of page content by third-party replay scripts may cause sensitive information, such as medical conditions, credit card details, and other personal information displayed on a page, to leak to the third-party as part of the recording,” explains the study’s co-author Steven Englehardt.
To improve online security, it’s important to help users better control how their information is shared online. However, that’s only a part of the solution – companies which use third-party tracking services that ignore privacy settings should be held accountable.

Reference:
Phys.org (https://phys.org/news/2017-11-websites-privacy.html)
