Privacy on social media


A study I recently read focused on teens using social media. The teens tend to set their profiles to ‘private’, but they have a large network of so-called friends with whom they still share their every move. Teens today share far more information than they did ten years ago. A possible cause for this is the evolution of the platforms themselves: if a friend has a profile on, for example, Facebook, you definitely should create one too, right?!

There seems to be little difference between boys and girls when it comes to sharing information, except for sharing a phone number on Facebook. Girls are more hesitant to share their phone number, whereas boys do not seem to care much and share it anyway. The study also showed that Facebook is more popular among teens, with an average of 300 friends; on Twitter, the average number of followers is ‘only’ 79.

The teens indicate that they have no concerns about third parties getting access to their data. They say that the privacy settings on Facebook are “not difficult at all” to use. May I question whether they really know what they are doing? As a Facebook user myself, I find the privacy settings easy to locate. But managing all the parties that have access to my data on Facebook is a whole other story. By logging in to websites or apps with my Facebook account, the third party gets my information. What are they going to do with that information? I don’t know… What is Facebook doing with my information? Probably a lot that I don’t want to know (yes, I should care, but I find it too scary to know what companies are doing with my information). The teens who say they are confident that their information is safe because of the “privacy settings” should look around in the media and listen to what Facebook and other social media websites are doing.
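To make that concrete: a site offering “Log in with Facebook” typically redirects you to an OAuth-style authorization page and lists the data it wants in a scope parameter, and everything in that list is handed to the third party once you click “continue”. The small Python sketch below only illustrates the shape of such a request; the endpoint, app ID and scope names are placeholders, not Facebook's exact API.

# Illustrative only: how a third-party site typically asks for your data when
# you "log in with" a social network. Endpoint, client_id and scopes are
# placeholders, not an exact Facebook API reference.
from urllib.parse import urlencode

AUTHORIZE_ENDPOINT = "https://social-network.example/dialog/oauth"  # placeholder

params = {
    "client_id": "THIRD_PARTY_APP_ID",             # identifies the third-party app
    "redirect_uri": "https://thirdparty.example/callback",
    "response_type": "code",
    "scope": "public_profile,email,user_friends",  # the data being requested
}

print("The login button sends you to:")
print(AUTHORIZE_ENDPOINT + "?" + urlencode(params))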


Smart security camera. Safe or Unsafe?


The world is getting more and more connected, and people buy and bring more smart devices into their homes. These devices provide consumers with many benefits and make their lives more convenient. However, there is a dark side: the security of the internet as a whole does not improve. With the internet of things, we introduce the vulnerabilities of the digital world into our own lives.

In October 2016, a DDoS attack made many large websites inaccessible on the east coast of the United States. It was not the first time such an attack paralyzed websites, but the source of the attack was new: thousands of hacked ‘things’ from the internet of things, such as routers, smart security cameras and hard-disk recorders. Thus, devices that should normally provide protection have become a possible threat to your privacy.

The internet of things includes all kinds of devices that are able to connect to the internet. These are more than smart appliances such as thermostats, fridges and lamps; smart toys exist too, such as a smart Barbie, that can connect to the internet. And precisely these devices are attractive to malicious parties.

According to Robert den Drijver, a manager at security company Symantec, the reason that cyber criminals are increasingly searching for unsecured IoT devices is that they are easier to hack than smartphones and personal computers. Most of the time these are devices with minimal protection, but they do offer a certain amount of bandwidth, and that bandwidth is exactly what is needed to generate the large volumes of traffic in a DDoS attack.

Security is often not a priority in the design of smart devices. You could ask yourself what a hacker could do with your smart lamp and how they could benefit from it; these devices do not hold any valuable information. Or so you think. According to security experts, it is possible to use insecure smart devices to direct a flood of internet traffic at an arbitrary server. Criminals may also use such attacks to infiltrate deeper into the network and capture valuable information after all.

Another threat is ransomware making its entrance into the internet of things. Ransomware is malicious software that holds a device hostage and demands a payment. This has not yet occurred, but it is reasonable to think that criminals will use this method to trick money out of consumers. It is therefore important that the security of smart devices is improved, or that insecure smart devices are banned from the market if they do not live up to certain security standards. Imagine your Tesla being infiltrated by ransomware while you are driving: “Transfer 100 dollars now, otherwise we won’t unblock your brakes.”

Measures to defend

First, it is important to change the standard passwords of routers and other smart devices. Can a device be reached remotely while you never use that functionality? Then disable it. An even better recommendation is to use a separate guest Wi-Fi network to which you connect all smart devices. According to Robert den Drijver, such a network can be kept separate from the main network, which holds a lot of sensitive data. Beyond that, however, there are not many ways for consumers to protect themselves: you can take these steps and be careful when buying smart devices and toys, but the consumer is not responsible for most of these insecurities. Many smart devices, in particular cheap devices developed by smaller companies, are abandoned by their makers once they are sold in stores.
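As a rough illustration of the first piece of advice, here is a hypothetical audit sketch in Python (the subnet is an assumption about a typical home network): it checks which devices on your own network still answer on Telnet, the kind of service that IoT botnets such as Mirai abuse to log in with factory-default passwords. Run it only on a network you own, and remember that such a check only goes so far when devices ship insecure by default.

# Hypothetical audit sketch: find devices on your own home network that still
# expose Telnet (port 23). The subnet below is an assumption; adjust it to yours.
import socket

def telnet_open(host, timeout=0.5):
    """Return True if TCP port 23 accepts a connection on this host."""
    try:
        with socket.create_connection((host, 23), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    subnet = "192.168.1."                      # assumed home subnet
    for i in range(1, 255):                    # serial scan, so this takes a while
        host = subnet + str(i)
        if telnet_open(host):
            print(host, "answers on Telnet - change its default password"
                        " or disable remote access")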

It is therefore necessary that governments take action and set international industry standards for the security of smart devices: minimal security standards that must be met before a device enters the market. Just as there are standards in the car industry (ABS, seat belts), there should be standards for IoT devices, for example requiring that the password can be changed, or introducing reliability labels.

Such regulation requires an international approach. It is of little use if just one country has stricter rules on security while other countries remain vulnerable to hackers; there are no borders for cyber criminals.

Greenberg, A. and Zetter, K. (2015) How the Internet of Things got hacked, available online from: https://www.wired.com/2015/12/2015-the-year-the-internet-of-things-got-hacked/ [7 December 2016].

Kraan, J. (2016) Die slimme deurbel is gevaarlijker dan je denkt, available online from: http://www.nu.nl/weekend/4342541/slimme-deurbel-gevaarlijker-dan-denkt.html [7 December 2016].


Dutch National Police Experiments with Augmented Reality


The Dutch national police are experimenting with and developing augmented reality applications for forensic investigation. This will enable forensic researchers to provide the police force with live support. Through a smartphone application it is possible to add digital information (images and videos) to a crime scene. A special camera system is currently being tested by the Nederlands Forensisch Instituut (NFI) in collaboration with TU Delft.

With augmented reality, our perception of and interaction with the real world can be enhanced. This is different from virtual reality systems, which replace the real world with a simulated one; augmented reality instead combines aspects of the real physical world with computer-generated visual, audio and haptic signals.

This combination of the virtual and the physical world will benefit the police force. It enables a police officer to request and view additional information about the environment through an application or glasses. Moreover, other collaborating parties can watch from a distance and provide the officer with support. The Dutch police force does this by placing a camera on the officer's shoulder, which films what the officer is seeing, while a smartphone worn on the officer's wrist displays the camera images. This enables the officer to circle evidence or make notes on the situation. According to Martin Roos of the NFI, a forensic researcher can, for example, indicate from a distance where samples should be taken or help interpret blood trails. This gives far better access to the crime scene than, say, describing the situation over the telephone.

Augmented reality can be used beyond the crime scene as well. With virtual reality glasses it is possible to experience the situation in another environment, which enables forensic detectives, police officers, judges and lawyers to build and explore 3D reconstructions. A judge or jury can then experience, observe and assess the situation themselves more easily.

Another example being tested by the Dutch national police is Microsoft's augmented reality headset, the HoloLens. These glasses project holograms onto the physical world, enabling police officers, for example, to view and follow arrows leading to the crime scene.

However, this technology is still very young, and there are legal and technological challenges to overcome. For example, the use of augmented reality must be legally watertight before it can be used as evidence in court. Still, within a few years this technology will have many applications that benefit both the police force and the security of Dutch residents.

NU.nl (2016) Politie verkent mogelijkheid van forensisch onderzoek in augmented reality, available online from: http://www.nu.nl/gadgets/4361670/politie-verkent-mogelijkheid-van-forensisch-onderzoek-in-augmented-reality.html [7 December 2016].

Roesner, F., Kohno, T. and Molnar, D. (2014) ‘Security and Privacy for Augmented Reality Systems’, Communications of the ACM, 57(4): pp. 88-96.

Security.nl (2016) Politie experimenteert met augmented reality, available online from: https://www.security.nl/posting/495344/Politie+experimenteert+met+augmented+reality [7 December 2016].



Police use of social media – another controversy

Maybe you knew, or maybe you never thought about it, but police forces are using social media to prevent crime and catch criminals. Several cases have already been solved with the help of social media. This is great, right?

First of all, social media is a very powerful tool for interacting with the public. By posting on Facebook or Twitter, police forces build trust and confidence. Furthermore, social media enables law enforcement to share specifically targeted information quickly, easily and cheaply.

Secondly, social media helps raise engagement with the public by providing the police with a way to connect and build relationships with local communities and “hard to reach” groups. This way, citizens may be more motivated to report a crime by sending a simple message.

Lastly, social media enables police officers to monitor suspects in an environment where they feel free to express themselves. Officers can also take action (such as occupying certain areas) when social network posts point to possible risk situations.

However, police use of social media sits in a grey area regarding privacy. Many police officers create fake accounts on Facebook, some even using pictures of attractive women, in order to befriend suspects or to pose as members of certain communities. A study by LexisNexis of 1,221 federal, state, and local law enforcement agencies that use social media showed that more than 80% of the responding officials consider social media a powerful tool to combat crime and believe that creating fictitious profiles for this purpose is ethical. However, this is a clear violation of Facebook's policies. Police representatives argue that a fake Facebook profile is like an undercover mission, just a means to a greater good, and there are already stories of cases that could only be closed by using this approach. Facebook did not comment on the matter, but stated that every user should be aware of and report fake profiles.

Furthermore, what about searching through personal messages? Although Facebook has not published any documentation on its “crime prevention program”, it reportedly allows certain parties to search through private conversations and set alerts on keywords that may predict aggressive behaviour. For example, a child abuser was caught before a meeting with his victim because the program detected that he had befriended a 12-year-old and used inappropriate language. For crime prevention this could indeed be beneficial, but can software really infer true intentions from conversations? People say a lot of things when upset or angry, and it is usually a long way from thoughts to actions. Also, without proper regulation, privacy can be violated for other purposes.
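To see why that is hard, consider a deliberately naive sketch of keyword-based flagging; the keyword list and the messages are invented for illustration and have nothing to do with Facebook's actual system.

# Naive keyword flagging, purely illustrative.
ALERT_KEYWORDS = {"meet me", "don't tell", "how old are you"}   # hypothetical list

def flag_message(text):
    """Return the alert keywords that occur in a message, ignoring case."""
    lowered = text.lower()
    return [kw for kw in ALERT_KEYWORDS if kw in lowered]

messages = [
    "How old are you? Don't tell your parents about this.",        # genuinely alarming
    "I'm so angry I could scream, don't tell anyone I said that.",  # harmless venting
]
for msg in messages:
    print(flag_message(msg), "<-", msg)

Both messages trigger a flag, yet only the first is genuinely worrying. Real systems are of course more sophisticated than this, but the gap between words and intentions remains.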

All in all, not that great, is it? With the emergence of the Internet, privacy concerns continue to rise, and sooner rather than later there should be stricter regulations and policies to protect online users from privacy violations. That would be ideal, but is it really possible to put limitations on such a giant network? It is a topic well worth thinking about.

References:

http://www.police-foundation.org.uk/uploads/catalogerfiles/police-use-of-social-media/Social_media_briefing_FINAL.pdf

http://www.businessinsider.com/police-make-fake-facebook-profiles-to-arrest-people-2013-10?international=true&r=US&IR=T

http://www.nydailynews.com/new-york/new-york-police-dept-issues-rules-social-media-investigations-article-1.1157122

http://www.reuters.com/article/us-usa-internet-predators-idUSBRE86B05G20120712

https://www.accenture.com/cz-en/~/media/Accenture/Conversion-Assets/DotCom/Documents/Global/PDF/Industries_9/Accenture-Are-Police-Forces-Maximizing-Technology-to-Fight-Crime-and-Engage-Citizens.pdf

i-Spy Toys


I just read about two ‘smart’ toy dolls that have been taken off the shelves due to privacy concerns. The dolls i-Que and My Friend Cayla from Genesis Toys use speech recognition to listen and react to kids. However, a privacy concern arose because the children's voice recordings were stored on servers to be analyzed, without their ‘consent’.

From a legal perspective, Nuance (the organization collecting the data) did not break the law. In their terms and conditions, they vaguely described what the consumer signed up for. However, according to a survey done by the Guardian, just 7% of all people read the full terms and conditions when buying a product or service online.

The 18 privacy organisations complaining about the toys suggested that the data can easily be used for purposes other than voice recognition, such as advertising or making the recordings available to the police. Furthermore, they state that the Bluetooth connection of the toys is not secure, making it easy for people to spy on kids.

Now, the first thing I thought about when reading this article was the link to the Fair Information Policies, but I found it very hard to decide whether Nuance was acting ethically. Going through the Fair Information Policies, the only point on which I could label Nuance as ‘non-ethical’ is that a person is not able to find out exactly what information is on record, since it is stored on the Nuance servers and is not accessible to the person themselves. However, I think the other policies either do not apply or are probably covered by their vaguely described terms and conditions.

So, the point I am trying to make is that I believe terms and conditions as we know them must make room for a simpler version. If only 7% of all people read the terms and conditions, we must accept that they simply do not serve their purpose in their current form, especially given the impact terms and conditions can have on what companies are allowed to do, just because a consumer did not take the time to read through 20 pages to find the ‘juicy’ conditions buried somewhere in the standard (boring) ones.

I’m not sure what ‘new form’ would be most suitable here, but I do think that in this digital age (where apparently even your shoes are getting an IP address), it would be nice to have some more clarity on what data organizations are collecting, and how they are using it.

https://www.theguardian.com/money/2011/may/11/terms-conditions-small-print-big-problems

http://www.nu.nl/gadgets/4361057/intertoys-en-bart-smit-halen-slim-speelgoed-winkels-privacyzorgen.html


Internet connected toys suspected of spying on kids


Privacy is becoming an issue across the internet of things, but a more unexpected battleground is internet-connected toys. Over 18 privacy groups have filed or are filing complaints with the European Union as well as the US Federal Trade Commission against Genesis Toys and speech recognition company Nuance for deceptive practices and violation of privacy laws. They argue that the dolls i-Que and My Friend Cayla not only capture voices without notice or approval, but that it is also unclear what Nuance does with the recordings that are sent to it. The organizations further accuse the companies of not making sure that other Bluetooth-connected devices cannot access the toys.

Moreover, if not properly managed, the speech data that is recorded and sent to Nuance could be sold to third parties, and hackers could gain access to these products and the microphones inside them. Future scenarios could even go as far as “predatory stalking and physical danger”. All in all, the concerns are plenty and the stakes are high. Chances are, however, that speech recognition will be used more and more in future toys, especially in dolls.
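To get a feeling for how exposed such a device is, here is a minimal Python sketch that simply lists the Bluetooth LE devices advertising around you. It assumes the third-party bleak library (pip install bleak); note that the dolls themselves may well use classic Bluetooth rather than BLE, so treat this purely as an illustration of how visible unprotected devices are to anyone nearby.

# Minimal sketch: list nearby advertising Bluetooth LE devices.
# Assumes the third-party 'bleak' library (pip install bleak).
import asyncio
from bleak import BleakScanner

async def list_nearby_devices():
    devices = await BleakScanner.discover(timeout=5.0)
    for device in devices:
        print(device.address, device.name or "<unnamed device>")

if __name__ == "__main__":
    asyncio.run(list_nearby_devices())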

It is not yet clear whether, and to what extent, the European Union and the US Federal Trade Commission will act against these practices. The situation is extra complicated because these products are marketed to kids, who are obviously less able to manage privacy concerns themselves.

I am curious what you think about these toy developments. Do you think we should ban them or develop rules? And if we develop rules, how can we enforce them? In the case of hacking, how should we manage the security of such hardware and the software behind it? Please comment below.


Microsoft predicts that the search bar will disappear by 2027


As a future business architect or consultant, you would be strongly affected by the disappearance of the search bar, and so would the company you work for. Questions you would have to ask yourself as soon as you take such a job include: How does the role of Search Engine Optimization (SEO) change? How should a company be restructured for that future? What will be important instead?

You had better start thinking about this already: Microsoft predicts that the search bar will disappear as soon as 2027. The prediction is fueled by the opinions of 17 Microsoft researchers, which you can find here: http://blogs.microsoft.com/next/2016/12/05/17-17-microsoft-researchers-expect-2017-2027/#sm.0000hm568u146rf2qto6iihe62pv2.

According to one of their scientists, deep learning in information retrieval will already have matured in 2017. Over the last few years there have been breakthroughs in speech, image and natural language recognition, which already fuel the capabilities of search, but by 2027 this will make for real change. Search will become more “ubiquitous, embedded, and contextually sensitive.” It will also be far more relevant to “current location, content, entities, and activities”, replacing the limited output design of a search bar and a results page. It is argued that we are seeing the beginnings of this in homes today, with devices that answer spoken queries such as Google Home and Amazon's Alexa. The capabilities and smartness of those devices will keep increasing, for example by adding video and by getting better at understanding their own context at home.


All in all, the way we consume and create information will change completely. What do you think will be the most important technological changes fueling this transformation? How fast do you think it will happen? How do you think it will impact Search Engine Optimization?

Please comment below with your ideas.


Sources:

17 for ’17: Microsoft researchers on what to expect in 2017 and 2027

http://www.theverge.com/2016/12/5/13841882/microsoft-research-predictions-2027-search-bar-ai-climate-change



AI and Us


In our day-to-day lives we experience that our smartphones have some artificial intelligence (AI) embedded in the form of a personal assistant. This assistant, be it Siri or Google Now, can perform tasks such as looking up information and presenting it verbally. Other features include dictating a text message or creating a calendar entry.

Sci-fi movies gave us the idea of being able to talk to an AI like we would talk to a real person. The AIs in the movies, however, have the processing power and memory of a server farm (or even quantum computers) at their disposal.

To create an AI that we regard as intelligent, we first have to consider what we regard as intelligent behavior. Since we want to create an AI that matches our intellect, we should look at the most intelligent species we know: humans. Humans are particularly good at recognizing patterns; we can train ourselves to recognize certain shapes faster, for example in mathematics or even art. Computers, however, must be taught to categorize patterns according to what we show them. Teaching a computer these patterns, imitating the capabilities of the human brain, is called deep learning, and this is how such an AI is created.
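As a small, concrete illustration of “teaching a computer a pattern”, the sketch below trains a tiny neural network on scikit-learn's built-in handwritten-digit images. It is a toy version of the idea, not deep learning at the scale Facebook or Google operate, but the principle is the same: show the machine labelled examples and let it learn the pattern.

# Toy illustration: a small neural network learns to recognize handwritten digits.
# Assumes scikit-learn (pip install scikit-learn).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                       # 8x8 pixel images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# One hidden layer of 64 units; a "deep" network would simply stack more layers.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("accuracy on unseen digits:", model.score(X_test, y_test))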

Now that we know what the goal is, what does it mean for businesses? Is AI an important part of digital mastery?

Certainly, companies like Facebook or Google are working on this technology with remarkable results in image and speech recognition.

Other markets are also following the trend of having an intelligent bot at your side, for tasks that seem too complicated or intensive for us.

One sector that follows mathematical rules, and whose lifeblood is exactly the kind of data a computer works with best, is finance, with the tremendous amounts of data it processes daily. So far, AIs there are mainly used to serve customers, much like a sophisticated chatbot (e.g. at SEB in Sweden or the Royal Bank of Scotland) that answers customers' questions. PayPal uses the technology to categorize types of fraud, while in Korea an AI delivered a 2 percent return on invested funds. In the automotive industry we are starting to see very sophisticated autopilots, and assembly lines are increasingly staffed with robots that are faster and more reliable than human workers. The possibilities for implementing a powerful AI seem endless.

But what about the other side of the coin? How far do we go when we are not limited by processing power or other resources anymore? Why do leaders in their field such as Elon Musk, Bill Gates, and even Stephen Hawking warn about AI? What does that mean for the concept of “business”?

Recommended readings:

https://www.linkedin.com/pulse/could-regulation-put-brakes-digital-economy-marcel-nickler

Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence

The Fear Index by Robert Harris


Will earphones destroy our privacy?


The digitization of products, processes, and businesses has changed the way we live, and will keep doing so in the future. But besides all the advantages of this new digital age, more and more downsides are emerging. One of the issues always connected to digitization is privacy: companies track our behavior by collecting large amounts of data, and we barely know what is done with all our personal information.


However, there is a new privacy threat that we need to be aware of. First, it was disclosed that hackers could take over your laptop camera; since then, even Facebook CEO Mark Zuckerberg and FBI Director James Comey cover their laptop cameras with a sticker. Now another leak in laptops has been found. Researchers from Israel's Ben-Gurion University have discovered a way to wiretap people through their earphones, even if the earphones have no microphone. Using malware, the researchers were able to pick up the vibrations of the earphones: the output channel is flipped into an input channel, which turns your earphones into an unpowered microphone.


Even when you are not using earphones, it is possible to tap your laptop. Speakers, too, can be flipped to work as a microphone: vibrations are picked up and converted into electrical signals, which is exactly the opposite of what speakers normally do.


The hack is currently known to work on audio chips produced by Realtek, which are used in the vast majority of laptops and desktop PCs. Realtek has not commented on the discovery, but the researchers are convinced that a simple software update will not solve the problem.


So the digital world brings us many advantages, but also downsides that affect our privacy. While you cannot mitigate this vulnerability permanently, you could at least notice it: your headphones would no longer play audio if the jack is reconfigured as an input.
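As a rough way to keep an eye on this, the sketch below lists your audio devices and the number of input and output channels the system currently reports for each; it assumes the third-party sounddevice library (pip install sounddevice) on top of PortAudio. A headphone jack that suddenly shows up with input channels and no output channels would be a reason to look closer.

# Minimal sketch: list audio endpoints and their reported channel counts.
# Assumes the 'sounddevice' package (pip install sounddevice) and PortAudio.
import sounddevice as sd

for index, device in enumerate(sd.query_devices()):
    print(index, device["name"],
          "in =", device["max_input_channels"],
          "out =", device["max_output_channels"])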


http://thehackernews.com/2016/09/hacking-webcam-cover.html

https://www.pcper.com/news/General-Tech/Have-tape-over-your-webcam-Might-want-fill-your-headphones-wax-well

http://www.nu.nl/gadgets/4355021/onderzoekers-kunnen-computergebruikers-afluisteren-via-koptelefoons.html



Deseat.me: a first step towards fewer privacy problems of digitalization?


Privacy is one of the key concerns around digitalization. “If you’ve got nothing to hide, you have nothing to worry about” is a line too often used to defend surveillance overreach. In my opinion this is a narrow way of looking at privacy, especially considering all the problems there have been around government data collection and the use of data beyond surveillance. It seems logical that a solution needs to be created for these privacy problems.

Looking into this more deeply, I found that some developers are already working on applications to reduce privacy issues. Two developers from Sweden have taken the first steps towards reducing the amount of knowledge the world has about you with the website Deseat.me. After you sign in with your Google account, the site makes a list of all the accounts you have. Once the list is created, you can select old and useless memberships and delete those profiles with a single click. This is an easy way to stop the endless newsletters and dormant memberships that pollute your inbox even though you never look at them.
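Deseat.me does not document exactly how it builds that list, but a plausible approach is to scan the mailbox behind your Google account for sign-up and verification emails and collect the sending services. The sketch below is purely hypothetical in that sense: it uses Python's standard imaplib, and the server, credentials and search phrase are placeholders (Gmail in particular would normally require OAuth rather than a plain password).

# Hypothetical sketch of account discovery: search a mailbox for typical
# sign-up confirmation emails and collect the sender domains.
# Server, credentials and search phrase are placeholders.
import imaplib
from email.utils import parseaddr

IMAP_HOST = "imap.example.com"                      # placeholder server
USER, PASSWORD = "you@example.com", "app-password"  # placeholder credentials

with imaplib.IMAP4_SSL(IMAP_HOST) as mail:
    mail.login(USER, PASSWORD)
    mail.select("INBOX", readonly=True)
    # Look for messages whose subject hints at an account being created.
    _, data = mail.search(None, '(SUBJECT "verify your account")')
    senders = set()
    for num in data[0].decode().split():
        _, msg_data = mail.fetch(num, "(BODY.PEEK[HEADER.FIELDS (FROM)])")
        header = msg_data[0][1].decode(errors="ignore")
        from_value = header.partition(":")[2].strip()
        senders.add(parseaddr(from_value)[1].split("@")[-1])

print("Services you may have accounts with:", sorted(senders))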

While this website is mostly focused on enabling people to delete useless accounts, in my opinion it also has some bearing on privacy issues. Nowadays we sign up for an endless number of websites, either because we have to in order to see specific content or because it makes our lives easier. It is impossible to remember every website and application you have signed up for, so there might even be one or two that you no longer want to be associated with. It is only a small step in the right direction, but there is definitely a big opportunity here. I think many people would be interested in a website like this; at least I am.

Sources:

Met deze knop kun je jezelf van het internet wissen [With this button you can erase yourself from the internet]


http://www.computerweekly.com/opinion/Privacy-concerns-in-the-digital-world
