Every Business Can Have Visibility into Advanced and Sophisticated Attacks Mon, 18 Jun 2018 01:15:20 -0500 Years ago, senior managers of large organizations and enterprises were primarily preoccupied with growing their businesses, forming strategic alliances and increasing revenue. Security, mostly left to IT departments, was usually regarded as a set-and-forget solution, in place either for compliance purposes or to prevent permanent damage to the organization’s infrastructure.

Fast forward several years, and organizations have woken up to the cold reality of data breaches, malware outbreaks, and hefty financial penalties, driven by the increased sophistication of threats and the inadequate security measures organizations have in place. Since 2013, hacks and data breaches have not only flooded the mainstream media, but have also shown just how ill-prepared organizations really are to deal with them.

Equifax, Yahoo, the US and French election scandals, WannaCry, NotPetya, BadRabbit, and Uber are among the most memorable events in recent cybersecurity history. Equifax lost over 30 percent of its market value, roughly $5 billion. Verizon knocked $350 million off its purchase price for Yahoo because of the massive data loss scandal. Cyberattacks are bad for business, and their consequences have pushed cyber risk to the top of senior executives’ minds.

Quantifying the impact of cyberattacks

While decision makers and senior executives prefer hard numbers when quantifying the impact of a cyberattack, it’s worth noting that the traditional method of assessing breaches is somewhat flawed. Simply looking at the direct costs associated with the theft of personal information is no longer enough, especially with GDPR threatening heavy penalties for the breach of customer or employee records.

For a complete view on the impact of cyberattacks, organizations need to look beyond the theft of intellectual property, the disruption of core operations, and the destruction of critical infrastructure. They need to start factoring in hidden costs that revolve around insurance premiums, lost value of customer relationships, value of contract revenue, devaluation of brand, and the loss of intellectual property.

Understanding the Change

To understand how things have changed, organizations need to look at the cyberattack kill-chain that most advanced and targeted attacks employ to breach an organization’s infrastructure.

Reconnaissance, the first stage, involves threat actors selecting a target, researching it, and attempting to identify vulnerabilities in its infrastructure. Weaponization is the process in which threat actors create or repurpose malware and exploits to breach the target organization. Delivery and exploitation involve transmitting the cyber weapon to the target, either via email attachments or infected websites, and exploiting a vulnerability in a target program on the victim’s endpoint. The last three stages usually involve installing access tools that connect the malware to a command-and-control (C&C) server so the intruder can gain persistence in the targeted infrastructure, and conclude with data exfiltration, data destruction, or whatever other actions on objectives the threat actors had in mind when targeting the organization.

The obvious goal is to break the attack kill-chain before it reaches the actions on objectives phase. As such, endpoint protection platforms (EPPs) have predominantly focused on disrupting the first four steps of the kill chain, preventing threat actors from installing malware on the targeted endpoint. However, prevention is never 100% bulletproof.

The most radical change companies have made in recent years to address this is implementing solutions that improve their ability to quickly detect and effectively respond to these types of targeted attacks. This is where Endpoint Detection and Response (EDR) solutions come in.
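The division of labor described above (EPPs disrupting the early stages of the kill chain, EDR detecting and responding to what slips through) can be sketched as a simple mapping. The stage names follow the kill chain as summarized earlier; the mapping itself is illustrative only, not a product specification:

```python
from enum import Enum

class KillChainStage(Enum):
    # The seven kill-chain stages as summarized in the text above.
    RECONNAISSANCE = 1
    WEAPONIZATION = 2
    DELIVERY = 3
    EXPLOITATION = 4
    INSTALLATION = 5
    COMMAND_AND_CONTROL = 6
    ACTIONS_ON_OBJECTIVES = 7

def primary_defense(stage):
    """EPPs concentrate on the first four stages; EDR picks up
    detection and response for what slips past prevention."""
    return "EPP (prevention)" if stage.value <= 4 else "EDR (detection/response)"
```

The goal, as the text notes, is to break the chain before the final stage: the earlier a stage is disrupted, the cheaper the incident.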

Breaking the Unbreakable Shield

In recent years, EPPs were commonly regarded much like Captain America’s shield -- one of the Marvel Universe’s most resilient and almost invulnerable objects. On rare occasions, however, the shield, though designed to be indestructible, has failed to protect Captain America. Villains with such powers are few and far between, but it can happen, just as an advanced, targeted cyberattack can break through an EPP.

Similarly, no matter how seriously a company takes security, and regardless of the state-of-the-art tools it uses to prevent cyberattacks, prevention doesn’t work 100% of the time, especially for sensitive industries or high-profile organizations that are targets of very advanced and persistent attacks. The attacks that manage to elude prevention are typically very insidious, incredibly difficult to detect, and highly damaging to organizations.

Companies need to improve their ability to quickly detect and effectively respond to these types of attacks, investigate incidents for scope and impact, limit damages, and fortify themselves with an enhanced security posture against future attacks.

EDR tools help companies achieve these objectives. They focus on detecting security-related events and incidents while providing strong instruments for investigation and the capabilities to respond appropriately. In the context of the increasing number and sophistication of attacks, the importance of EDR solutions is therefore growing quickly.

Building a Security Ecosystem

Building a strong security ecosystem is about having both the shield and the sword working together to increase the overall security posture of the organization. An integrated EPP and EDR platform means security that evolves over time: it enables security teams to feed threat intelligence back into the organization’s security posture, by adapting security policies to block identified threats or by eliminating vulnerabilities through security patching. A platform developed from the ground up as an integrated solution also delivers superior operational effectiveness. It’s faster and cheaper to acquire, easier to deploy, consumes fewer endpoint resources and saves time for the security team.

Having all these built into a single platform can help provide enterprises with prevention, detection, automatic response, threat visibility and one-click resolution capabilities to accurately defend against even the most sophisticated cyber threats and to be prepared even if their virtually invincible shield cracks.

Copyright 2010 Respective Author at Infosec Island
4 Cybersecurity Tips for Staying Safe During the World Cup Wed, 13 Jun 2018 04:20:00 -0500 The World Cup is only days away, and everyone is either on their way to Russia or planning when to stream the games they care about most online.

When it comes to traveling, it is critically important to know how cyber criminals target their victims, what travelers can do to reduce the risk, and ways to make it more challenging for attackers to steal their important company or personal information, identity or money.

As the first games approach, here are four cybersecurity best practices that you can use to stay safe during the 2018 World Cup.

1.  Don’t lose your data, stay protected and relax.

While on vacation or at the World Cup, it is common for things to get lost, misplaced or stolen. It can happen in an instant: simply forgetting your laptop on the bus or in a taxi, or being distracted chasing after your children while someone else walks away with your tablet or laptop. Whether it’s a personal or company laptop, this can lead to major security risks and compromised data. Realistically, this is the last thing you want ruining your trip.

Tip: Back up, update and encrypt. Before you leave for the World Cup, make sure you back up all devices and data. Double check that all security updates are applied, and finally check your security settings. For example, ensure your sensitive data is encrypted.

2. Beware of social logins and limit the use of application passwords.

Almost every service you sign up for on such trips now asks you to connect using your social media accounts to gain access to whatever it is you are trying to do. The problem with using your social media account for these services is that you are providing and sharing personal details about yourself, giving these services the ability to continuously access your location, updates and personal information.

Tip: Use unique accounts rather than social logins. If a social media account is compromised, cybercriminals can cascade to every account that uses it for login.

3. Beware of what you do over public Wi-Fi.

Always assume someone is monitoring your data over public Wi-Fi. Do not access sensitive data, such as financial information, over public Wi-Fi. Do not change your passwords, and beware of entering credentials while using public Wi-Fi. If you have a mobile device with a personal hotspot function, use it instead of public Wi-Fi. Data roaming options from telecommunication companies can be highly expensive during vacations, so if you do use public Wi-Fi, always use it with caution and with the following tips in mind.

Tip: Do not use a public Wi-Fi network without a VPN; use your cell network (3G/4G/LTE) instead when security is important. When you do use public Wi-Fi:

  • Ask the vendor for the correct name of the Wi-Fi access point and whether it is secured. It is common for cybercriminals to publish their own access points with similar-sounding SSIDs.
  • Disable auto-connect Wi-Fi or enable “Ask to Join Networks.” Many cybercriminals use access points with common names like “Airport” or “Cafe” so your device auto-connects without your knowledge. Do not choose to remember the Wi-Fi network.
  • Use the latest web browsers, as they have improved detection of fake websites. This helps prevent someone from hosting a look-alike site, such as a fake Facebook login page, and waiting for you to enter your credentials.
  • Do not click on suspicious links, even in social chats (for example, “videos that contain your photos”), and beware of advertisements that could direct you to compromised websites.
  • Browse as a least-privileged or standard user, as this will significantly reduce the possibility of malware being installed.

4. Before “clicking,” stop, think and check if it is expected, valid and trusted.

We are a society of clickers; we like to click on things (like hyperlinks, for example). Always be cautious when you receive a message with a hyperlink. Before clicking, ask yourself: “Was this expected?” “Do I know the person who is sending this?” When in doubt, check with the sender to confirm they actually sent the email before you aimlessly click on something that might be malware, ransomware, a remote access tool or a virus that could steal or access your data. Nearly 30 percent of people will click on malicious links, so we all need to be more aware and cautious.

Tip: Before clicking, stop and think. Check that the URL uses HTTPS and comes from a legitimate source. Find out where the hyperlink is taking you before you click on it; otherwise you might get a nasty surprise.
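The URL checks in the tip above can be sketched as a minimal pre-click filter. This is illustrative only; `trusted_hosts` is a hypothetical allow-list, and a real check involves much more (certificate validation, reputation feeds, and so on):

```python
from urllib.parse import urlparse

def looks_safe(url, trusted_hosts):
    """Minimal pre-click filter: require HTTPS and an exact hostname
    match against an allow-list of sites you recognize."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in trusted_hosts
```

Note that an exact hostname match matters: a link like `https://www.example.com.evil.test/login` contains a familiar name but points somewhere else entirely.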

The World Cup is a time to relax and enjoy the amazing games. It can be a great experience as long as you stay safe while attending (or watching online). If followed, these best practices will help you avoid becoming the next victim of cybercrime.

About the author: Joseph Carson is a cyber security professional with more than 20 years’ experience in enterprise security & infrastructure. Currently, Carson is the Chief Security Scientist at Thycotic. He is an active member of the cyber security community and a Certified Information Systems Security Professional (CISSP).

Machine Learning vs. Deep Learning in Cybersecurity – Demystifying AI’s Siblings Wed, 13 Jun 2018 00:19:52 -0500 Beginning in the 1950s, artificial intelligence (AI) was used as an umbrella term for all methods and disciplines that result in any form of intelligence exhibited by machines. Today, nearly all software in every industry – especially in security – uses at least some form of AI, even if it is limited to basic manually coded procedures. ESG research found that 12 percent of enterprise organizations have already deployed AI-based security analytics extensively, and 27 percent have deployed it on a limited basis. These implementation trends are expected to gain momentum in 2018.

During the past few years, the major subsets of AI – machine learning and deep learning – have progressed rapidly, transforming nearly every field they touch. Nowadays the terms “artificial intelligence,” “machine learning,” and “deep learning” are used widely, but differentiating between the three, and knowing which is best for your business goals, can be confusing. To fully understand each term, it’s worth taking a look at each subfield’s advantages and limitations.

The Challenges of Machine Learning

For the last 25 years, machine learning has been the leading sub-field within AI. The technology allows computers to learn without being explicitly programmed, and in the 2000s, machine learning methods came to dominate AI by outperforming all non-machine-learning-based results.

Despite its success, the technology comes with obstacles, especially when applied to security. One of the major limitations of traditional machine learning is its reliance on feature extraction, a process through which human experts dictate what the important features (i.e., properties) of each problem are. This means that in order for a machine learning solution to recognize malware, experts need to manually program the various features associated with malware. For the cybersecurity field in particular, this limits solutions’ ability to detect unknown attacks: because humans must define the features in advance, attacks that have not yet been observed and analyzed cannot be detected.

This reliance on human involvement introduces one of the biggest challenges of machine learning: the potential for human error. Because feature engineering requires a human domain expert to define features, features can easily be overlooked. In the malware example above, if certain characteristics are omitted during programming, the system breaks down. For a machine learning system to be accurate, human domain experts must be methodical in defining features, and must keep defining them. Traditional machine learning models are also largely linear, meaning the features selected by a human domain expert can only capture relatively simple, linear properties. Given these confines, companies have been shifting to deep neural networks (DNNs) to better secure their infrastructures and prepare for impending attacks.
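To make the feature-extraction limitation concrete, here is a sketch of the kind of hand-crafted features a human expert might define for a traditional ML malware classifier. The specific features (size, byte entropy, printable ratio) are illustrative choices, not any vendor’s actual feature set; the point is that any property not encoded here is invisible to the downstream model:

```python
import math
from collections import Counter

def extract_features(data):
    """Hand-crafted features for a hypothetical traditional ML malware
    classifier. Anything a human expert forgets to encode here is a
    blind spot for the model."""
    n = len(data)
    counts = Counter(data)
    # Shannon entropy of the byte distribution: packed or encrypted
    # payloads tend to score close to the 8-bit maximum.
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values()) if n else 0.0
    return {
        "size": n,
        "byte_entropy": entropy,
        "printable_ratio": (sum(1 for b in data if 32 <= b < 127) / n) if n else 0.0,
    }
```

A deep learning model, by contrast, would consume the raw bytes directly and learn its own internal representations, which is why it is not constrained to the properties an expert thought to write down.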

Deep Learning Evolves

Deep learning, also known as deep neural networks, is a sub-field of machine learning that takes inspiration from how our brains work. The big conceptual difference from traditional machine learning is that deep learning can train directly on raw data, without the need for feature extraction. For example, when applying traditional machine learning to face recognition, the raw pixels in the image cannot be fed into the machine learning module; they must first be converted into features such as distance between pupils, proportions of the face, texture, color, etc. Deep learning, by contrast, trains directly on the raw pixels. Additionally, deep learning scales to hundreds of millions of training samples, and continuously improves as the training dataset grows.

Over the past few years, deep learning has reached a 20-30 percent improvement in most benchmarks of computer vision, speech recognition, and text understanding – the greatest leap in performance in the history of AI and computer science. This is in part due to deep learning’s ability to detect non-linear correlations between data that are too complex for humans to define. Unlike traditional machine learning, deep learning supports any and new file types and has the ability to detect unknown attacks, a huge benefit to cybersecurity.

While these advantages surpass those of machine learning based solutions, deep learning does face some challenges. Researchers must work with very large datasets of millions of files to train the neural network, and the algorithms involved are highly complex. In many cases, deep learning is an “art” that relies on a scientist’s experience and know-how, and unfortunately there is a scarcity of such experts available.

The Impact of Deep Learning on Security

Deep learning has been implemented across a variety of industries to great effect, especially in cybersecurity. The biggest malware attacks of 2017 – think WannaCry, NotPetya, DDoS incidents – made companies rethink their security strategies and their reactive approach to future attacks. Throughout the cybersecurity industry, there is an ongoing need to respond to cyberattacks in real time with minimal human interaction. As a result, organizations are turning to deep learning-based solutions precisely because they minimize the need for human intervention.

Deep learning’s ability to prevent new, never-before-seen malware in real time without any human involvement, all while maintaining low false positive rates, is a huge benefit to securing endpoints, mobile devices, data and infrastructure. After the malware is prevented, deep learning technology helps companies understand what kind of malware it is (e.g., ransomware, backdoor or spyware) so that any further security actions needed can be taken. In most cases this would require experts to properly analyze the information; deep learning software, however, identifies and analyzes the data automatically, without any need for human involvement.

Similarly, the technology can be leveraged to determine where a specific attack originated. In the past, this has been a difficult task for IT and security teams for a variety of reasons. For example, each nation-state usually has more than one cyber unit developing such advanced malware, rendering traditional authorship attribution algorithms useless. In addition, APTs use state-of-the-art evasion techniques. However, DNNs have the ability to learn high-level feature abstractions of the APT itself.

It will be exciting to observe deep learning’s continued success in security throughout 2018, and it won’t stop there. Beyond security, deep learning is revolutionizing many other industries, from climate mapping to combatting aging and disease – the implications of the technology are far reaching.

About the author: Mr. Caspi is a seasoned CEO and leading global expert in cybersecurity, big data analytics and data science, recognized as a pioneering technologist by the World Economic Forum in Davos.

Building a Strong, Intentional and Sustainable Security Culture Sun, 10 Jun 2018 23:42:00 -0500 Here is the big idea: your security culture is – and will always be – a subcomponent of your larger organizational culture. In other words, your organizational culture will “win out” over your security awareness goals every time unless you are able to weave security-based thinking and values into the fabric of your overarching organizational culture. But how do you achieve this to ultimately build a strong, intentional and sustainable security culture? There are four secrets to success.

1. Take stock of where you are and where you are going

Without a plan and a path, you are sure to get lost! The key to implementing secret #1 is to leverage a framework to help ensure that you are approaching things in a structured manner, rather than simply making it up as you go. Especially in large global organizations, I recommend conducting a series of interviews or quick surveys to understand how different divisions and divisional leaders view security, understand policy and best practices, and what they truly hold important. It also helps you understand if your key executives are in alignment and if there are some political or logistical hurdles that you need to work through as you build your plan.

With this background knowledge, you can begin to create your goals for the year. I like the SMARTER goal-setting framework proposed by several productivity gurus. There are a few different versions of the SMARTER framework; one I recommend is the Michael Hyatt version. (SMARTER = Specific, Measurable, Actionable, Risky, Time-keyed, Exciting, Relevant.)

2. View security awareness through the lens of organizational culture

Organizational culture and security culture are not one and the same. However, they need to be closely knit.

Organizational culture is not the sum of roles, processes and measurements; it is the sum of subconscious human behaviors that people repeat based on prior successes and collectively held beliefs. Similarly, security culture is not (just) related to "awareness" and "training"; it, too, is the sum of subconscious human behaviors that people repeat based on prior experiences and collectively held beliefs.

Culture is shared, learned and adaptive, but it can be influenced. It takes a group working collectively, and it begins with the leaders.

To effect change in behavior, you must be aware of, and work from within, the existing culture. Does your organization have a marketing team that helps with internal communications? If so, understand how they leverage communication methods, formats, and branding. It’s important that *your* communications speak with the established voice and tone of the company so that you aren’t seen as disconnected and (worst of all) irrelevant. You also need to get an idea of where there are divisional, departmental, and regional nuances, and work within the specific cultural frameworks of each of these segments. And always be on the lookout for existing communication channels that you can plug into (e.g., existing meetings, executive videos, etc.) so that your message is interwoven with the other company-centric messages.

3. Leverage behavior management principles to help shape good security hygiene

Let’s start by recognizing that just because you’re aware doesn’t mean that you care!

Security awareness and security behavior are not the same thing. Your security awareness program shouldn’t focus only on information delivery. There are plenty of things that people are aware of but may just not care about – we need to make people care. 

Because of this, if the underlying motivation for your program is to reduce the overall risk of human-related security incidents in your organization, you need to incorporate behavior management practices.

The idea is that we need to create engaging experiences for users to drive specific behaviors. (Check out BJ Fogg’s work for great examples of behavior models and habit creation.)

An example of this is a simulated phishing platform. These distill some of the fundamentals of behavior management into an easy-to-deploy platform that lets you send simulated social engineering attacks to your users and then immediately initiate corrective and rehabilitative action if a user falls victim to the simulated attack. Do this frequently, and you will see dramatic behavior change.
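One way to see whether frequent simulations are actually changing behavior is to track the click rate per campaign and look for a downward trend. A minimal sketch, with hypothetical numbers and no ties to any particular platform:

```python
def click_rate(clicked, sent):
    """Fraction of simulated phishing emails that were clicked."""
    return clicked / sent if sent else 0.0

def shows_improvement(rates):
    """Behavior change should surface as a non-increasing click rate
    across successive campaigns."""
    return all(later <= earlier for earlier, later in zip(rates, rates[1:]))
```

If the trend is flat or rising, that is a signal the corrective content, not just the simulation cadence, needs rework.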

4. Be realistic about what is achievable in the short-term and optimistic about the long-term payoff

Be a realistic optimist within your organization. What can you impact today? Know your place and your scope of influence and remember that culture starts at the top.

Understand the foundation of your culture and then create a customized roadmap for security culture management. To do so, you must evaluate four areas: 

  • "How we make decisions" outlines the general leadership style and how this affects the outcomes of the organizational culture.
  • "How we engage" focuses on how people collaborate internally and with external stakeholders to deliver on their goals.
  • "How we measure" describes organizational performance metrics, and how they affect organizational achievements.
  • "How we work" defines the working style of teams, how solutions are created, and problems are solved, which affects organizational outcomes.

By understanding these four attributes of organizational culture, security leaders and corporate leaders can make informed choices when trying to change cultures and improve an organization’s overall defense.  

Here is where the rubber meets the road. You’ve got all of the planning out of the way, created SMARTER goals, understand the nuances of your organization, and are focusing on creating real, sustainable change. Now it’s time to get started and to commit to perseverance. Many aspects of your program will be spaced throughout the year, and so it is important to commit to being consistent with your efforts. The beginning is just that – the beginning. You are focusing on training an entire organization; and that sometimes means training people how to be trained.

About the author: Perry Carpenter is the Chief Evangelist and Strategy Officer for KnowBe4, the provider of the world’s most popular integrated new school security awareness training and simulated phishing platform.

The 3 Must-Knows of Sandboxing Mon, 04 Jun 2018 05:48:53 -0500 Sandboxes have been touted as a top method for preventing cyberattacks on organizations because they allow you to test everything before it can affect your production environment. But does that come at a cost, and are sandboxes as effective as vendors would like us to believe?

Play Time in the Sandbox?

Most of us know a sandbox as a fun place for children to play at the playground. Similarly, for IT professionals, sandboxes have often been considered a safe place to develop and test code before it’s launched into production environments. For security professionals, though, sandboxing has been seen as a way to spot zero-day threats and stealthy attacks. However, as the “arms race” between invader and defender continues, malware authors have continuously found clever ways to evade sandbox detection.

Many IT security professionals and CISOs continue to rely too heavily on a sandboxing strategy alone to protect their resources. Meanwhile, the bullies of the cyber world are continuously finding new ways to “play” in the sandbox.

Myth vs. Reality

While sandboxes do provide a layer of prevention in your cyber threat prevention strategy, they come with a tax that may be too high for most organizations to pay. The three myths commonly associated with sandbox technology are:

Myth: Sandboxes are Fast

Reality: Sandboxes are slow. By definition of how sandboxes operate, all data that enters your operating system, network or application must pass through the sandbox and be detonated to determine whether any malware is hidden. This can add significant delays in communication, especially in organizations with tens of thousands to millions of emails and files transferred daily.

Myth: Sandboxes are Cost Effective

Reality: Sandboxes are resource intensive (read: expensive). The hardware necessary to create a secure sandbox depends directly on your application environment, as you will have to duplicate every scenario in order to test for the possibility of a breach. This can be expensive from a hardware and software perspective, and the human resources necessary to keep those environments current with the latest updates are also not insignificant.

Myth: Sandboxes Alone are Fool-Proof

Reality: Sandboxes can be spoofed. Sometimes a belief in a fool-proof method of preventing cyberattacks is too good to be true. So much so that hackers even publish methods for cracking sandbox vulnerabilities.

Today’s enterprise networks are no longer defined by their perimeters; they span public and private environments, diverse infrastructure underlays, and a growing number of application options and sources.

The Sandbox Alternative

Businesses truly looking to prevent – not merely remediate – cyberattacks need to consider a platform with an evasion-proof approach that does not require sandboxing. By doing so, customers are empowered with the flexibility to deliver end-to-end security across a changing threat landscape.

Whether on-premises or in the cloud, the platform should operate consistently, totally separating environment variables from security logic. Similarly, the platform should be agnostic to the underlying infrastructure and able to protect hybrid environments, including a mix of virtual, hardware, and XaaS-consumed infrastructure. To provide true end-to-end security, the platform needs to give customers flexibility and consistency that is not restricted to a particular vertical.

In short: sandboxes can provide one layer in a cyber threat prevention strategy, but for most organizations the tax is too high to rely on them alone.

About the author: Boris Vaynberg co-founded Solebit LABS Ltd. in 2014 and serves as its Chief Executive Officer. Mr. Vaynberg has more than a decade of experience in leading large-scale cyber- and network security projects in the civilian and military intelligence sectors.

Valve Patches 10-Year Old Flaw in Steam Client Thu, 31 May 2018 11:42:32 -0500 A remote code execution (RCE) vulnerability that existed in the Steam client for at least 10 years was fully patched only in March this year, according to security firm Context Information Security.

In July last year, Valve added modern exploit protections (Address Space Layout Randomisation – ASLR) to the Steam client, thus partially patching the RCE. According to Context senior researcher Tom Court, exploitation following this patch would have simply crashed the client.

Before that, however, all of the 15 million active Steam clients were vulnerable to RCE, the researcher claims.

The flaw was essentially a remotely triggered heap corruption within the Steam client library. The bug resided in “an area of code that dealt with fragmented datagram reassembly from multiple received UDP packets,” Court explains.

The Steam client communicates using a custom protocol over UDP, and the bug resulted from the lack of a check to ensure that, “for the first packet of a fragmented datagram, the specified packet length was less than or equal to the total datagram length.” The check, however, was present for all subsequent packets carrying fragments of the datagram.
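The missing bounds check can be illustrated with a toy model of fragmented-datagram reassembly. This is an illustrative reconstruction of the class of bug described, not Valve’s actual code:

```python
class Reassembler:
    """Toy model of fragmented-datagram reassembly, with the bounds
    check that Steam's code lacked for the first fragment applied to
    every fragment."""

    def __init__(self, datagram_len):
        self.buf = bytearray(datagram_len)

    def add_fragment(self, offset, payload):
        # The check that was absent for the first fragment: a fragment
        # must fit within the declared total datagram length. Without
        # it, an oversized fragment writes past the end of the
        # reassembly buffer -- the heap corruption described above.
        if offset + len(payload) > len(self.buf):
            raise ValueError("fragment exceeds declared datagram length")
        self.buf[offset:offset + len(payload)] = payload
```

In the vulnerable client, the equivalent of `add_fragment` trusted the first packet’s self-reported length, so an attacker could declare a small datagram and then deliver a larger first fragment.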

Because the Steam client used a custom memory allocator and lacked ASLR on the steamclient.dll binary, the bug could be abused for remote code execution.

An attacker looking to exploit the issue would first have had to learn the client/server IDs of the connection, along with a sequence number. Next, the attacker would have had to spoof the UDP packet source/destination IPs and ports, as well as IDs, and increment the observed sequence number by one.

Steam uses a custom memory allocator that divides large blocks of memory requested from the system allocator and then performs sequential allocations with no metadata separating the in-use chunks. Each large block has its own freelist, implemented as a singly linked list, the researcher explains.

Depending on the size of the packets used to trigger the heap buffer overflow, the corrupted allocation is managed by either the Windows or the Steam allocator, with the latter found to be much easier to exploit.

“Referring back to the section on memory management, it is known that the head of the freelist for blocks of a given size is stored as a member variable in the allocator class, and a pointer to the next free block in the list is stored as the first 4 bytes of each free block in the list,” the researcher explains.

The heap corruption allows an attacker to overwrite the next_free_block pointer if a free block sits next to the block where the overflow occurs. If the heap can be controlled, the attacker can set the overwritten next_free_block pointer to an address to write to, and subsequent allocations will be written to that location.
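A toy model makes the freelist technique concrete. This is not Steam's allocator — chunk sizes and the demo "memory" are invented — but it shows how a free chunk storing its next-free pointer in its first word lets an adjacent overflow redirect a later allocation to an attacker-chosen address:

```python
class ToyAllocator:
    """Toy freelist allocator: free chunks hold the next-free index in word 0."""
    CHUNK = 4  # words per chunk (illustrative)

    def __init__(self, nchunks):
        self.mem = [0] * (nchunks * self.CHUNK)
        # Chain the free chunks: each one's first word points at the next.
        for i in range(nchunks - 1):
            self.mem[i * self.CHUNK] = (i + 1) * self.CHUNK
        self.mem[(nchunks - 1) * self.CHUNK] = -1  # end of list
        self.free_head = 0

    def alloc(self):
        addr = self.free_head
        # The next-free pointer is trusted blindly, like next_free_block.
        self.free_head = self.mem[addr]
        return addr

a = ToyAllocator(4)
victim = a.alloc()   # chunk at address 0; free head now 4
# Heap overflow: writing past chunk 0 clobbers the next-free pointer stored
# at the start of the adjacent free chunk (address 4). Here 2 stands in for
# an attacker-chosen target; in the real exploit it was a predictable
# pointer in the binary's data section.
a.mem[4] = 2
a.alloc()                # returns the free chunk at 4, and...
assert a.alloc() == 2    # ...the NEXT allocation lands at the attacker's address
```

Once an allocation is served from an attacker-chosen address, whatever data fills that allocation becomes an arbitrary write — which is exactly how the CWorkThreadPool pointer described below was overwritten.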

Because packets are expected to be encrypted, “exploitation must be achieved before any decryption is performed on the incoming data,” Court says.

This is achievable by overwriting a pointer to a CWorkThreadPool object stored at a predictable location within the data section of the binary, which allows the attacker to fake a vtable pointer and associated vtable, thus gaining execution; a ROP chain can then be used to execute arbitrary code.

“This was a very simple bug, made relatively straightforward to exploit due to a lack of modern exploit protections. The vulnerable code was probably very old, but as it was otherwise in good working order, the developers likely saw no reason to go near it or update their build scripts,” the researcher notes.

Court also points out that developers should periodically review aging code to ensure it conforms to modern security standards, even if it continues to function.

Valve was alerted to the bug on February 20 this year and addressed it in the beta branch in less than 12 hours, but the patch landed in the stable branch only on March 22.

Related: Vulnerability Allowed Hackers to Hijack Steam Accounts

Related: Details of 34,000 Steam Users Exposed During DDoS Attack

Copyright 2010 Respective Author at Infosec Island
Infrastructure Under Attack Thu, 31 May 2018 01:25:23 -0500 What makes a DDoS attack different from an everyday data breach? The answer is embedded in the term: denial of service. The motive of a DDoS attack is to prevent the delivery of online services that people depend on. Financial institutions, gaming and e-commerce websites are among the top targets of DDoS attacks, as are cloud service providers that host sites or service applications for business customers. Even a brief disruption of service delivery can cost an enterprise millions in lost business, not counting the after-effects of alienated customers and reputational damage.

Because DDoS attacks and data breaches are so different in nature, conventional security infrastructure components used to combat breaches – perimeter firewalls, intrusion detection/prevention systems (IDS/IPS) and the like – are comparatively ineffective at mitigating DDoS attacks. These security products certainly have their place in a layered defense strategy, serving to protect data confidentiality and integrity. However, they fail to address the fundamental issue in DDoS attacks, namely network availability.

In fact, these components themselves are increasingly the target of DDoS attacks aimed at incapacitating them. The 13th annual Worldwide Infrastructure Security Report (WISR), NETSCOUT Arbor’s survey of security professionals in both the service provider and enterprise segments, uncovered a significant increase in DDoS attacks targeting infrastructure over the previous year. Among enterprise respondents, 61% had experienced attacks on network infrastructure, and 52% had firewalls or IPS devices fail or contribute to an outage during a DDoS attack. Attacks on infrastructure are less prevalent among service providers, whose customers are still the primary target of DDoS attacks. Nonetheless, 10% of attacks on service providers targeted network infrastructure and another 15% targeted service infrastructure.

Meanwhile, data center operators reported that 36% of inbound attacks targeted routers, firewalls, load balancers and other data center infrastructure. Some 48% of data center respondents experienced firewall, IDS/IPS device and load-balancer failure contributing to an outage during a DDoS attack, an increase from 43% in 2016.

Infrastructure components are particularly vulnerable to TCP State Exhaustion attacks, which attempt to consume the connection state tables (session records) used by load balancers, firewalls, IPS and application servers to identify legitimate packet traffic. Such attacks can take down even high-capacity devices capable of maintaining state on millions of connections. In the latest WISR, TCP State Exhaustion attacks accounted for nearly 12% of all attacks reported.
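The mechanics of state exhaustion are simple to model. In this minimal, illustrative sketch (the table size and addresses are invented), a stateful device tracks each half-open connection in a fixed-size session table, so a flood of spoofed SYNs that never complete the handshake crowds out legitimate clients:

```python
class StatefulDevice:
    """Toy model of a firewall/load balancer that keeps per-connection state."""

    def __init__(self, max_sessions):
        self.max_sessions = max_sessions
        self.sessions = {}  # (src_ip, src_port) -> connection state

    def syn(self, src_ip, src_port):
        """Handle an incoming SYN; return False if the state table is full."""
        if len(self.sessions) >= self.max_sessions:
            return False  # no slot available: the connection is dropped
        self.sessions[(src_ip, src_port)] = "SYN_RECEIVED"
        return True

fw = StatefulDevice(max_sessions=1000)
# Attacker: 1000 SYNs from spoofed sources that will never complete,
# each consuming a session slot until the table is exhausted.
for i in range(1000):
    fw.syn(f"198.51.100.{i % 250}", 40000 + i)
# A legitimate client now cannot get a session slot.
assert fw.syn("203.0.113.7", 55555) is False
```

Real devices add timeouts and SYN-cookie style defenses, but the underlying constraint is the same: any finite state table can be filled faster than stale entries expire, which is why even high-capacity devices fall to these attacks.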

In spite of their vulnerability, firewalls, IPS and load-balancers remain at the top of the list of security measures organizations say they employ to mitigate DDoS attacks. Among service providers, firewalls were the second most reported DDoS mitigation option, while on the enterprise side, firewalls were the first choice of 82% of respondents. It is somewhat discouraging that some of the most popular DDoS mitigation measures are also the least effective, given the ease with which a state-based attack can overwhelm them.

On a positive note, however, the increased frequency of DDoS attacks reported in our 2016 survey appears to have driven wider adoption of Intelligent DDoS Mitigation Systems (IDMS) in 2017. About half of respondents indicated that an IDMS was now a part of perimeter protection, a sharp increase from the previous year’s 29%.

Any organization that delivers services over the web needs strong, purpose-built DDoS protection. Security experts continue to recommend as best practice a hybrid solution combining on-premise defenses and cloud-based mitigation capabilities. Specifically with regard to attacks on network infrastructure, a dedicated DDoS on-premise appliance should be deployed in front of infrastructure components to protect them from attacks and enable them to do their job unimpeded.

About the author: Tom has worked in the network and security industries for more than 20 years. During this time, he has served as a Network Engineer for large enterprises and has had roles in Sales Engineering /Management, Technical Field Marketing and Product Management at multiple network management and security vendors. Currently, as Director of DDoS Product Marketing at NETSCOUT Arbor he focuses on Arbor’s industry leading DDoS Protection Solutions.

How to Prevent Cloud Configuration Errors Tue, 29 May 2018 23:52:47 -0500 The advent of cloud computing has dramatically altered the technology structure of today’s companies – making it much easier and faster to deploy resources as needed. In the traditional model, application developers had to wait for IT to provision storage and compute resources; meanwhile, security and network teams were needed to make those resources accessible and compliant with company policies. The process often took weeks or even months. By contrast, cloud-based resources can be spun up in minutes, and new applications deployed the same day, without IT or network security involvement.

This newfound business agility introduces a new layer of risk to a company’s environment, as resource misconfiguration may not be discovered until it’s too late. The problem is further compounded by the lack of consistent security controls across competing cloud implementations.

Agility without security will eventually harm your business, as demonstrated by frequent news articles describing new cloud-based application and data breaches.

Misconfigurations Can Cause Security Breaches

When deployments or implementations aren’t configured properly, an easy opening is created for cybercriminals to gain access to sensitive data and resources. In fact, many recent security breaches and instances of exposed customer data can be traced directly back to configuration errors.

Until organizations fully recognize the extent of the problem – and start proactively identifying and fixing possible vulnerabilities in their networks, storage systems and user behaviors – breaches will continue to grow in frequency.

Full Unified Visibility is Key

There is a way, however, to prevent these misconfigurations from happening. The key is visibility. Unified organizational visibility will let IT managers see any potential issues and fix them before they’re a problem. The ability to understand your company’s end-to-end network connectivity across all different architectures is critically important.

IT managers need to be able to see and understand their entire network - physical, virtual, and cloud. They need to be able to quickly answer the following questions, to make sure their implementations are configured correctly – and their organization is protected:

  • What is being deployed, where, and how?
  • How many different implementations are running at the same time?
  • Are they “talking to each other” properly?
  • Are all aspects of the organization configured consistently with security protocols?
  • Has cloud storage been configured and secured correctly?
  • Are end-users complying with policies?

Early visibility into configuration errors enables developers and IT to remediate issues and avoid public exposure.

The Need for Automation

While visibility is a good starting point, modern DevOps practices are driving ever faster change cycles. Managing security at scale is not easy, and the adoption of cloud and container technology is exponentially increasing scale. The traditional security management approach and tools are simply not capable of addressing the new challenges.

IT managers don’t always have the luxury of having a large team – and often, despite company growth, the IT team is one of the last things to grow accordingly. This means that IT managers must find solutions to the age-old problem of doing a lot with a little. Automation is that solution.

By embracing automation, IT managers can remove some of the lower-level, more repetitive tasks from their responsibilities. By setting up automated tools to integrate new implementations with their existing network, time can be freed up for IT managers. Automated tools can also immediately locate and correct implementation errors or flag potential security issues, helping to ensure that your company’s data won’t be compromised.
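A hedged sketch of the kind of automated check described above: compare deployed resource settings against a security policy and flag drift. The resource and policy fields here are invented for illustration; a real tool would pull live configuration from the cloud provider's APIs (for example, boto3 on AWS) rather than from a hard-coded list.

```python
# Security policy every deployed resource must satisfy (illustrative).
POLICY = {
    "public_access": False,      # no resource may be world-readable
    "encryption_at_rest": True,  # storage must be encrypted
}

def find_misconfigurations(resources):
    """Return (resource_name, setting) pairs that violate POLICY."""
    violations = []
    for res in resources:
        for setting, required in POLICY.items():
            if res.get(setting) != required:
                violations.append((res["name"], setting))
    return violations

# Hypothetical snapshot of deployed resources.
deployed = [
    {"name": "billing-db", "public_access": False, "encryption_at_rest": True},
    {"name": "staging-bucket", "public_access": True, "encryption_at_rest": False},
]
print(find_misconfigurations(deployed))  # flags both problems on staging-bucket
```

Run on a schedule (or on every deployment event), a check like this surfaces misconfigurations before they become public exposure — the "early visibility" the article argues for.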


With unified visibility into the whole network, at all levels – and an embracing of automation – IT managers will be able to strike a balance between security and functionality, without worrying about an implementation error creating a lasting problem.

About the author: Reuven Harrison is CTO and Co-Founder of Tufin. He led all development efforts during the company’s initial fast-paced growth period, and is focused on Tufin’s product leadership. Reuven is responsible for the company’s future vision, product innovation and market strategy. Under Reuven’s leadership, Tufin’s products have received numerous technology awards and wide industry recognition.

SOC Automation: Good or Evil? Thu, 24 May 2018 02:26:00 -0500 Many security operations centers (SOCs) face the same recurring problem — too many alerts and too few people to handle them. Over time, the problem worsens because the number of devices generating alerts increases at a much faster rate than the number of people available to analyze them. Consequently, alerts that truly matter can get buried in the noise.

Most companies look at this problem and see only two solutions: decrease the number of alerts, or increase the number of staff. Luckily, there’s a third option: automation, which can greatly improve the efficiency of analysts’ time.

Traditionally, automation has been viewed as an all-or-nothing proposition. But, times change. Companies can implement automation at various points of the incident response process to free analysts from mundane, repetitive tasks, while maintaining human control over how they monitor and react to alerts. Ultimately, the goal should be to strike a balance between low-risk processes that can be automated with minimal impact and the higher-risk ones that need to be handled by analysts.

Before launching into some level of SOC automation, the following should be considered: 1) Is the organization winning or losing the cyber battle? 2) If it is winning, does it have the right tools to continue doing so? 3) If it is losing, what should it do?

Whether an organization is winning or losing, understanding the pros and cons of automation is critical to any project’s success.

Benefits of Automation

Automation has typically been favored in low-impact environments, but it has been frowned upon in high-impact environments such as utilities and healthcare because of the damage false positives can cause.

The main benefits of SOC automation include:

  • More consistent response to alerts and tickets
  • Higher volume of ticket closure and response to incidents
  • Better focus by analysts on higher priority items
  • Improved visibility into what is happening
  • Coverage of a larger area and a larger number of tickets

Downsides of Automation

Nothing is more taxing than dealing with a false positive, which happens when a system misinterprets legitimate activity and flags it as an attack. In some industries, a false positive can disrupt business processes, resulting in lost revenue and downtime for industrial organizations, and can even put lives at risk in hospital settings.

Major downsides include:

  • Shutting down operations
  • Misclassifying an attack so the wrong action is taken
  • Automating tickets that should have been handled manually
  • Missing key information or data
  • Making the wrong or inappropriate decision

Best Practices for Automation

In the past, companies typically looked at automation’s potential downsides and then decided to avoid it because doing so seemed safer. However, today, more companies are realizing that if they do not implement some degree of automation, they increase their chances of missing an attack, which could cause more damage than the negative effects of automation.

Given this scenario, security practitioners should look at adopting the following best practices for automation.

Create a Thorough Strategy

The plan should address the following key questions:

  • What areas generate the most alerts?
  • What alerts take up most of the analysts’ time?
  • Which responses are highly structured, and which alerts do analysts handle in a predictable way?
  • Can an automated playbook be used to handle certain events?
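The questions above can be answered in code as a rule-driven playbook. This is a minimal, hypothetical sketch (the alert types, actions, and criticality field are invented): predictable, structured alert types are handled automatically, while anything touching a critical asset is escalated to an analyst, per the best practices that follow.

```python
# Playbook of alert types whose responses are structured and predictable
# enough to automate (illustrative entries).
PLAYBOOK = {
    "known_malware_hash": ("quarantine_file", True),
    "phishing_url":       ("block_url", True),
    "port_scan":          ("add_firewall_rule", True),
}

def triage(alert):
    """Return (action, automated?) for an alert dict."""
    # Alerts touching critical assets always go to a human analyst.
    if alert.get("asset_criticality") == "critical":
        return ("escalate_to_analyst", False)
    # Unknown alert types are never automated either.
    return PLAYBOOK.get(alert["type"], ("escalate_to_analyst", False))

assert triage({"type": "phishing_url", "asset_criticality": "low"}) == ("block_url", True)
assert triage({"type": "phishing_url", "asset_criticality": "critical"}) == ("escalate_to_analyst", False)
assert triage({"type": "novel_beaconing", "asset_criticality": "low"}) == ("escalate_to_analyst", False)
```

The design choice matters more than the code: automation is opt-in per alert type, so the default path for anything unusual or high-impact remains human review.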

Take a Measured Approach

One of the key rules of security is to always avoid extremes. For example, automating everything can open a can of worms — forcing security executives to justify the approach by claiming analysts could not keep up with the tickets.

A good starting point is to find balance: automate the tasks and tickets that are manually intensive, highly repeatable, and distract analysts from more important functions. Automation should allow the company to improve SOC efficiency while maintaining acceptable levels of risk, both on the operational side and the security side.

The trick is to manage and control false positives, not eliminate them.

Know, and Don’t Automate, Tasks that Require Human Analysis

These include alerts that affect:

  • Critical applications or systems
  • Business process, financial and operational systems
  • Systems that contain large amounts of sensitive data
  • Large-scale compromise indicators


The need for SOC automation is increasing in urgency since adversaries are also harnessing software and hardware to develop and carry out attacks. Consequently, the velocity and sophistication of threats is rising. Keeping pace with programmatic attacks inevitably requires automating certain SOC functions and processes. Following the recommendations outlined above can help determine those that should be automated, and those that shouldn't.

About the author: John Moran is Senior Product Manager for DFLabs and a security operations and incident response expert. He has served as a senior incident response analyst for NTT Security, computer forensic analyst for the Maine State Police Computer Crimes Unit and computer forensics task force officer for the US Department of Homeland Security. John currently holds GCFA, CFCE, EnCE, CEH, CHFI, CCLO, CCPA, A+, Net+, and Security+ certifications.

Can Organisations Turn Back Time after a Cyber-Attack? Wed, 23 May 2018 07:22:00 -0500 In the aftermath of a cyber breach, the costs of disruption, downtime and recovery can soon escalate. As we have seen from recent high profile attacks, these costs can have a serious impact on an organisation’s bottom line. Last year, in the wake of the NotPetya attack, Maersk, Reckitt Benckiser and FedEx all had to issue warnings that the attacks had cost each company hundreds of millions of dollars. Whilst the full extent is not yet known, these figures underline the financial impact that such breaches can have.

The severity of a breach is often linked to the costs associated with responding and remediating the damage. However, there are ways for organisations to minimise one particularly costly part of the process: new approaches to post breach remediation mean that organisations can, in effect, roll back time to a ‘pre-breach’ state.

The costs of a breach

Cyber attacks can cripple a business and take days to clear up. For larger organisations that are affected by an incident, the cost of remediation could include damage to the brand’s reputation, legal costs, setting up response mechanisms to contact breach victims, and more. For smaller organisations, even though the costs of remediation might be smaller, they’ll take up a greater proportion of operating revenue: from lost data to damaged or inoperable equipment, as well as the disruption to normal business. There is also the cost of any fines that are generated because of failures in compliance. In fact, Ponemon now puts the average cost of a breach at $3.62 million.

This clean-up operation can represent a serious drain on an organisation’s time and resources. The process of repairing and recovering data from compromised IT assets is consistently reported as one of the most high-cost elements of a breach. Ransomware attacks, in particular, are likely to become more difficult to remediate by targeting systems that are harder to back up, which means that the costs of cleaning up after a breach are set to get worse. Paying the ransom is no guarantee that files will be recovered: in fact, 20% of ransomware victims that paid never got their files back.

Part of the challenge is that cyber attacks are getting smarter and stealthier, and stopping every cyber attack in its tracks, before it reaches the network and can inflict any damage, is unrealistic. What organisations should aim for is, in all cases, to identify the virus as quickly as possible, halt the executable, and isolate the infected endpoint from the network. During execution, malware often creates, modifies or deletes system files and registry settings, as well as making changes to configuration settings. These changes – or remnants left behind – can cause system malfunction or instability.  

For organisations that are dealing with hundreds of incidents every week, there can be a serious impact to the business from working to re-image or re-build systems, or reinstall files that have been affected. There’s not only the lost work to factor in, but also the downtime while systems are restored as employees are stymied if they can’t access the files and systems they need to.

There are approaches through which these costs can be minimised: a new generation of endpoint protection observes the malware’s behaviour to flag anomalous activity and steps into the line of execution to deflect it completely. Moreover, these solutions have remediation capabilities to reverse any modifications made by malware.

This means that when files are modified or deleted, or when changes are made to configuration settings or systems, the solution can undo the damage without teams having to re-image systems. This ability to automatically roll back compromised systems to their pre-attack state minimises downtime and lost productivity.
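Conceptually, this rollback works by journaling every change and replaying it in reverse. The sketch below is illustrative only (it is not SentinelOne's implementation, and the "filesystem" is a dict standing in for real files and registry keys), but it captures the mechanism:

```python
class ChangeJournal:
    """Record writes malware makes so the endpoint can be restored."""

    def __init__(self, fs):
        self.fs = fs          # toy filesystem: path -> contents
        self.journal = []     # (path, previous contents, or None if created)

    def record_write(self, path, new_contents):
        """Apply a change, remembering what was there before."""
        self.journal.append((path, self.fs.get(path)))
        self.fs[path] = new_contents

    def rollback(self):
        """Undo in reverse order: restore old contents, delete created files."""
        for path, previous in reversed(self.journal):
            if previous is None:
                del self.fs[path]
            else:
                self.fs[path] = previous
        self.journal.clear()

fs = {"config.ini": "safe settings"}
j = ChangeJournal(fs)
j.record_write("config.ini", "malicious settings")  # malware modifies a file...
j.record_write("payload.exe", "dropped binary")     # ...and drops another
j.rollback()
assert fs == {"config.ini": "safe settings"}        # pre-attack state restored
```

Reversing in last-in-first-out order is the key design choice: it guarantees that even chains of changes to the same file unwind back to the original state.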

Assessing the Impact

The work isn’t done yet: an often-overlooked aspect of post-event evaluation of what happened should focus on how to prevent a repetition of a similar incident. Clear visibility of the kill chain and the affected endpoints across an organisation, in a timely manner, is essential for security staff to quickly identify the scope of the problem. In order to assess the impact and potential risk, organisations need to have assurance afterwards to confirm if a particular threat was present on their estate – the ability to search for Indicators of Compromise (IoC) is vital. Real-time forensic data allows organisations to track threats or investigate post-attack to provide insights into exactly which vulnerability the attacker targeted, and how. These can pinpoint the parts of the system that were directly affected and also determine if any further remediation actions are required. 

With the costs of breaches escalating, it’s more important than ever to have the capability to learn from incidents to avoid history repeating itself. Even if it’s not possible to thwart every attack, a full security approach which includes prevention, detection, automatic mitigation and forensics will ensure that the impact of any incident is minimised and that normal operations can be resumed as quickly as possible.  

About the author: Patrice Puichaud is Senior Director for the EMEA region, at SentinelOne.

The AWS Bucket List for Security Wed, 23 May 2018 06:22:39 -0500 With organizations having a seemingly insatiable appetite for the agility, scalability and flexibility offered by the cloud, it’s little surprise that one of the market’s largest providers, Amazon’s AWS, continues to go from strength to strength. In its latest earnings report, AWS reported 45% revenue growth during Q4 2017.

However, AWS has also been in the news recently for the wrong reasons, following a number of breaches of its S3 data object storage service. Over the past 18 months, companies including Uber, Verizon, and Dow Jones have had large volumes of data exposed via misconfigured S3 buckets. Between them, the firms inadvertently made public the digital identities of hundreds of millions of people.

Sub-par security practices

It’s important to note that these breaches were not caused by problems at Amazon itself. Instead, they were the result of users misconfiguring the Amazon S3 service, and failing to ensure proper controls were set up when uploading sensitive data to it. In effect, data was placed in S3 buckets and secured with a weak password – or in some cases, no password at all.

Amazon has made several tools available to make it easier for S3 customers to work out who can access their data, and to help secure it. However, organizations still need to use access controls for S3 that go beyond just passwords, such as two-factor authentication, to control who can log in to their S3 administration console.

But to understand why these basic mistakes are still being made by so many organizations, we need to look at the problem in the wider context of public cloud adoption in many enterprises. When speaking with IT managers that are putting data in the cloud, it is not uncommon to hear statements such as ‘there is no difference between on-premise and cloud servers.’ In other words, all servers are seen as being part of the enterprise IT infrastructure: and they will use whichever environment best suits their needs, operationally and financially.

Old habits die hard

However, that statement overlooks one critical point: cloud servers are much more exposed than physical, on-premise servers. For example, if you make a mistake when configuring the security for an on-premise server storing sensitive data, it is still protected by other security measures by default. The server’s IP address is likely to be protected by the corporate gateway, or other firewalls used to segment the network internally, and other security layers which stand in the way of potential attackers.

In contrast, when you provision a server in the public cloud, it is accessible to any computer in the world. By default, anybody can ping it, try to connect and send packets to it, or try to browse it. Beyond a password, it doesn’t have all those extra protections from its environment that an on-premise server has. And this means you must put controls in place to change that.

These are not issues that the organization’s IT teams, who have become comfortable with having all those extra safeguards of the on-premise network in place, have to regularly think about when provisioning servers in the data centre. There is often an assumption that something or someone will secure the server – and this carries over when putting servers in the cloud.

So when utilizing the cloud, security teams need to step in and establish a perimeter, define policies, implement controls, and put in governance to ensure their data and servers are secured and managed effectively – just as they do with their on-premise network.  

Security 101 for cloud data

This means you will still need to apply all the basics of on-premise network security when utilizing the public cloud: access controls defined by administration rights or access requirements and governed by passwords; filtering capabilities defined by which IP addresses need connectivity to and from one another.

You still need to consider if you should use data encryption, and whether you should segment the AWS environment into multiple virtual private clouds (VPC). Then you will need to define which VPCs can communicate with each other, and place VPC gateways accordingly with access controls in the form of security groups to manage and secure connectivity.

You will also need controls over how to connect your AWS and on-premise environments, for example using a VPN. This requires a logging infrastructure to record actions for forensics and audits, to get a trail of who did what. None of these techniques are new, but they all have to be applied correctly to the AWS deployment, to ensure it can function as expected.
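One of the filtering controls described above can be sketched as a simple audit: flag security-group rules that expose sensitive ports to the whole internet. The rule fields here are simplified and hypothetical; a real audit would read live rules via the cloud provider's API rather than a hand-written list.

```python
# Ports that should never be reachable from 0.0.0.0/0 (illustrative set:
# SSH, RDP, MySQL).
SENSITIVE_PORTS = {22, 3389, 3306}

def overly_permissive(rules):
    """Return the rules that open a sensitive port to the entire internet."""
    return [r for r in rules
            if r["cidr"] == "0.0.0.0/0" and r["port"] in SENSITIVE_PORTS]

rules = [
    {"port": 443, "cidr": "0.0.0.0/0"},   # public HTTPS: fine
    {"port": 22,  "cidr": "10.0.0.0/8"},  # SSH from internal ranges only: fine
    {"port": 22,  "cidr": "0.0.0.0/0"},   # SSH open to the world: flagged
]
print(overly_permissive(rules))  # only the world-open SSH rule is reported
```

The same pattern extends naturally to egress rules, VPC peering, and gateway placement: encode the intended policy once, then diff every deployed rule against it.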

Extending network security to the cloud

In addition to these security basics, IT teams also need to look at how they should extend network security to the cloud. While some security functionality is built into cloud infrastructures, it is less sophisticated than the security offerings from specialist vendors.

As such, organizations that want to use the cloud to store and process sensitive information are well advised to augment the security functionality offered by AWS with virtualized security solutions, which can be deployed within the AWS environment to bring the level of protection closer to what they are used to within on-premise environments.  

Many firewall vendors sell virtualized versions of their products customized for Amazon. While these come at a cost, if you want to be serious about security, you need more than the measures that come as part of the AWS service. Ultimately you need to deploy additional web application firewalls, network firewalls and implement encryption capabilities to mitigate your risks of being attacked and data being breached.

This has the potential to add complexity to security management. However, using a security policy management solution will greatly simplify this, enabling security teams to have visibility of their entire estate and enforce policies consistently across both AWS and the on-premise data centre, while providing a full audit trail of every change.

About the author: Professor Avishai Wool is co-founder and CTO at AlgoSec.

Achieving Effective Application Security in a Cloud Generation Wed, 16 May 2018 02:04:05 -0500 Today’s modern applications are designed for scale and performance. To achieve this performance, many of these deployments are hosted on public cloud platforms like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) for their benefit of elasticity and speed of deployment. The challenge is that effectively securing cloud hosted applications to date has been difficult. There are many high-profile security events involving successful attacks on cloud-hosted applications in the media, and these are only the examples that were disclosed to the public.

In reality, traditional security deployment patterns do not work effectively with applications hosted on public cloud platforms. Organizations should not try to push their previous on-premises application security deployments into cloud environments for several reasons. 

Cloud application security requires new approaches, policies, configurations, and strategies that both allow organizations to address business needs and security risks in unison. Not incorporating these will no doubt deliver an insufficient security posture and cost unnecessary time and money. 

The balance of performance and security  

Whether your organization is a one-person startup, a global enterprise, or anything in between, you depend on applications to operate effectively. You cannot afford downtime with these applications, and for many the cloud is still a confusing space when it comes to who is responsible for security. Unfortunately, a single unpatched vulnerability in an application can let an attacker penetrate your network and steal or compromise your data along with that of your customers—causing significant disruption to your operations. According to a recent report, “Unlocking the Public Cloud,” 74 percent of respondents stated that security concerns restrict their organization’s migration to public cloud. Public cloud adoption is rapidly growing, yet security is the largest area of resistance when moving to the cloud.

Many organizations still rank performance well over security, but they should be in a balance with equal importance given the risks. For example, in a May 2018 report from Ponemon Institute, 48 percent of the 1,400 IT professionals who responded said they value application performance and speed over security.

While deploying layer 7 protections is paramount to securing applications, it’s also essential that any security technology integrates deeply with existing cloud platforms and licensing models.

Security measures should be deeply coupled with the dynamic scalability of public cloud providers such as AWS, Azure and GCP, ensuring that performance handling requirements are addressed in real time without manual intervention. Organizations should also have direct access to the native logging and reporting features available on cloud platforms.

Fixing application vulnerabilities in the cloud

You wouldn’t necessarily think this, but application vulnerabilities are pervasive and often untouched until it is too late. Unfortunately, fixes or patches are a reactive process that leaves vulnerabilities exposed for far too long (months isn’t uncommon). The problem is clear: vulnerability remediation on an automated and continuous basis is paramount to ensuring application security, both on-premises and in the cloud.

According to the same Ponemon research, 75 percent of organizations experienced a material cyberattack or data breach within the last year due to a compromised application. Interestingly, only 25 percent of these IT professionals say their organization is making a significant investment in solutions to prevent application attacks, despite their awareness of the negative impact of malicious activity.

Statistics like these make it essential to implement a set of policies that provide continued protection of applications through regular vulnerability management and remediation practices, which can even be automated to ensure that application changes don’t open up new vulnerabilities.

Security aligned with the cloud

Here are some best practices for effective application security in a cloud generation:

  1. Application security must satisfy the most demanding use cases specific to cloud-hosted applications, without carrying the management overhead of legacy on-premises architectures.
  2. It should expose a fully featured API that provides complete control via the orchestration tools DevOps teams already use.
  3. It needs to be deployable in high-availability clusters, auto-scaled with the use of cloud templates, and managed and monitored from a single-pane-of-glass user interface.
  4. It should integrate directly with native public cloud services, including Elastic Load Balancing, AWS CloudWatch, Azure ExpressRoute, Azure OMS and more.
  5. It should provide complete licensing flexibility, including pure consumption-based billing, which lets you deploy as many instances as needed and pay only for the traffic secured through those applications.
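As an illustration of how the API-driven orchestration and consumption-based billing points above might fit together, here is a minimal sketch in Python. Everything in it is hypothetical (the `SecurityFleet` class, the per-gigabyte rate, the instance names); a real deployment would go through a vendor's actual API or cloud templates:

```python
from dataclasses import dataclass, field

@dataclass
class SecurityFleet:
    """Toy model of consumption-based WAF licensing: deploy as many
    instances as needed, pay only for the traffic they secure."""
    price_per_gb: float                      # hypothetical metered rate
    instances: dict = field(default_factory=dict)
    secured_gb: float = 0.0

    def deploy(self, name: str) -> None:
        # In a real setup this call would go through the vendor's API
        # or a cloud template (CloudFormation, ARM, Deployment Manager).
        self.instances[name] = "running"

    def record_traffic(self, gb: float) -> None:
        self.secured_gb += gb

    def monthly_bill(self) -> float:
        # Billing follows traffic secured, not instance count.
        return round(self.secured_gb * self.price_per_gb, 2)

fleet = SecurityFleet(price_per_gb=0.05)
fleet.deploy("waf-us-east-1a")
fleet.deploy("waf-us-east-1b")   # extra instance adds no cost by itself
fleet.record_traffic(120.0)
print(fleet.monthly_bill())      # 6.0
```

The design point is that adding instances for availability or scale carries no cost by itself; the bill tracks the traffic actually secured.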

Ultimately, securing applications effectively in the cloud means adopting new ways of thinking about security, starting with a critical look at the security technology stack you have deployed today. Assess what is lacking and adopt what is required for regular monitoring and vulnerability remediation of those applications. Focus on protecting each application with the right level of security: deploy security that is aligned with your current cloud consumption, and leverage tools designed for those cloud environments that allow you to build effective security controls.

About the author: Jonathan Bregman has global responsibility for leading Barracuda's web application security product marketing strategy. He joins Barracuda from Seattle, WA, where he worked with Microsoft, Amazon and their ISV partners to build innovative marketing programs focused on driving awareness and demand for emerging products in enterprise software, cloud services and cybersecurity.

Copyright 2010 Respective Author at Infosec Island]]>
Understanding the Role of Multi-Stage Detection in a Layered Defense Tue, 08 May 2018 03:12:25 -0500 The cybersecurity landscape has changed dramatically during the past decade, with threat actors constantly changing tactics to breach businesses’ perimeter defenses, cause data breaches, or spread malware. New threats, new tools, and new techniques are regularly chained together to pull off advanced and sophisticated attacks that span across multiple deployment stages, in an effort to be as stealthy, as pervasive, and as effective as possible without triggering any alarm bells from traditional security solutions.

Security solutions have also evolved, encompassing multi-stage and multi-layered defensive technologies aimed at covering all potential attack vectors and detecting threats at pre-execution, on-execution, or even throughout execution.

Multi-Stage Detection

All malware is basically code that’s stored (on disk or in memory) and executed, just like any other application. Because malware is delivered as a file or binary, security technologies refer to these states of malware detection as pre-execution and on-execution. Basically, it boils down to detecting malware before, or after, it gets executed on the victim’s endpoint.

Layered security solutions often cover these detection stages with multiple security technologies specifically designed to detect and prevent zero-day threats, APTs, fileless attacks and obfuscated malware from reaching or executing on the endpoint.

For example, pre-execution detection technologies often include signatures and file fingerprints matched against cloud lookups, local and cloud-based machine learning models aimed at ascertaining the likelihood that an unknown file is malicious based on its similarity to known malicious files, and hyper detection technologies, which are basically machine learning algorithms on steroids.
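A minimal sketch of the fingerprint-matching step, with a stand-in local blocklist in place of a real signature database or cloud reputation service, might hash the file and check the digest before any deeper analysis runs (the harmless EICAR test file serves as the known-bad sample):

```python
import hashlib

# Stand-in for a vendor signature database / cloud reputation service.
KNOWN_BAD_SHA256 = {
    # SHA-256 of the standard EICAR antivirus test file
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
}

def fingerprint(data: bytes) -> str:
    """SHA-256 file fingerprint, the kind matched against lookups."""
    return hashlib.sha256(data).hexdigest()

def pre_execution_verdict(data: bytes) -> str:
    # Cheap hash match first; unknown files would fall through to
    # machine learning models and heavier analysis stages.
    if fingerprint(data) in KNOWN_BAD_SHA256:
        return "block"
    return "continue-analysis"

eicar = (b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-"
         b"FILE!$H+H*")
print(pre_execution_verdict(eicar))       # block
print(pre_execution_verdict(b"hello"))    # continue-analysis
```

Exact-hash matching is fast but brittle, which is why it is only the first layer: a single changed byte produces a new fingerprint, and that is exactly the gap the similarity-based machine learning models are meant to cover.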

It helps to think of hyper detection technologies as paranoid machine learning algorithms for detecting advanced and sophisticated threats at pre-execution, without taking any chances. This is particularly useful for organizations in detecting potentially advanced attacks, as they can inspect and detect malicious commands and scripts - including VBScript, JavaScript, PowerShell, and WMI scripts - that are usually associated with sophisticated fileless attacks.
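As a toy illustration of that kind of script and command-line inspection, consider a detector that flags PowerShell invocations exhibiting traits common to fileless attacks. The patterns below are a deliberately tiny, hypothetical rule set; real engines combine hundreds of such signals with machine learning scoring:

```python
import re

# A few telltale traits of fileless-attack tradecraft.
SUSPICIOUS_PATTERNS = [
    re.compile(r"-enc(odedcommand)?\b", re.IGNORECASE),   # base64 payloads
    re.compile(r"downloadstring|invoke-expression|\biex\b", re.IGNORECASE),
    re.compile(r"-windowstyle\s+hidden", re.IGNORECASE),  # hidden console
]

def score_command_line(cmd: str) -> int:
    """Count how many suspicious traits a command line exhibits."""
    return sum(1 for pattern in SUSPICIOUS_PATTERNS if pattern.search(cmd))

benign = "powershell.exe Get-ChildItem C:\\Logs"
fileless = ("powershell.exe -WindowStyle Hidden -Enc "
            "SQBFAFgAIAAoAE4AZQB3AC0ATwBiAGoAZQBjAHQA...")
print(score_command_line(benign))    # 0
print(score_command_line(fileless))  # 2
```

In practice a score like this would only be one feature among many fed into the classifier, never a verdict on its own, since attackers routinely rename and re-encode their tooling.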

On-execution security technologies sometimes involve detonating the binary inside a sandboxed environment, letting it execute for a specific amount of time, then analyzing all system changes the binary made, the internet connections it attempted, and pretty much any other behavior it exhibited after execution. A sandbox analyzer is highly effective because there’s no risk of infecting a production endpoint, so the security tools used to analyze the binary can be set to a highly paranoid mode. Running the same analysis directly on a production endpoint would typically incur performance penalties, and could even risk compromising the organization’s network should the threat actually breach containment.
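The core of that detonate-and-diff workflow can be sketched in a few lines. This toy version only watches for files the program creates in a throwaway directory; a real sandbox analyzer would also monitor processes, registry changes, and network connections, and would run inside an isolated virtual machine rather than a temporary folder:

```python
import os
import subprocess
import sys
import tempfile

def snapshot(root: str) -> set:
    """Record every file path currently under root."""
    found = set()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            found.add(os.path.join(dirpath, name))
    return found

def detonate(command: list, timeout: int = 10) -> list:
    """Run a command in a throwaway directory and report the files it
    created - a toy stand-in for the system-change analysis a real
    sandbox performs after letting a binary execute."""
    with tempfile.TemporaryDirectory() as sandbox:
        before = snapshot(sandbox)
        try:
            subprocess.run(command, cwd=sandbox, timeout=timeout)
        except subprocess.TimeoutExpired:
            pass  # a real sandbox would treat a hang as a signal too
        created = snapshot(sandbox) - before
        return sorted(os.path.basename(path) for path in created)

# "Detonate" a harmless script that drops a file, as malware often does.
dropped = detonate([sys.executable, "-c",
                    "open('payload.bin', 'wb').write(b'x')"])
print(dropped)   # ['payload.bin']
```

The time limit matters: sandbox-aware malware often sleeps or stalls to outlast the observation window, which is one reason sandboxing is combined with the other detection layers rather than relied on alone.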

Of course, there are on-execution technologies that are deployed on endpoints to specifically detect and prevent exploits from occurring or for monitoring the behavior of running applications and processes throughout their entire lifetime. These technologies are designed to constantly assess the security status of all running applications, and prevent any malicious behavior from compromising the endpoint.

Layered Security Defenses

Multi-stage detection using layered security technologies gives security teams the unique ability to stop the attack kill chain at almost any stage of attack, regardless of the threat’s complexity. For instance, while a tampered document that contains a malicious Visual Basic script might bypass an email filtering solution, it will definitely be picked up by a sandbox analyzer technology as soon as the script starts to execute malicious instructions or commands, or starts to connect to and download additional components on the endpoint.

It’s important to understand that the increased sophistication of threats requires security technologies capable of covering multiple stages of attack, creating a security mesh that acts as a safety net to protect your infrastructure and data. However, it’s equally important that all these security layers be managed from a centralized console that offers a single pane of glass visibility into the overall security posture of the organization. This makes managing security aspects less cumbersome, while also helping security and IT teams focus on implementing prevention measures rather than fighting alert fatigue.

About the author: Liviu Arsene is a Senior E-Threat analyst for Bitdefender, with a strong background in security and technology. Reporting on global trends and developments in computer security, he writes about malware outbreaks and security incidents while coordinating with technical and research departments.

Copyright 2010 Respective Author at Infosec Island]]>
VirusTotal Browser Extension Now Firefox Quantum-Compatible Sat, 05 May 2018 10:23:52 -0500 VirusTotal released an updated VTZilla browser extension this week to offer support for Firefox Quantum, the new and improved Web browser from Mozilla.

The browser extension was designed with a simple goal in mind: allow users to send files for scanning via an option in the Download window and to submit URLs via an input box.

The VTZilla extension already proved highly popular among users, but version 1.0, which had not received an update since 2012, no longer worked with Mozilla’s browser after Firefox Quantum discontinued support for old extensions.

Starting toward the end of last year, Mozilla required all developers to update their browser extensions to WebExtensions APIs, a new standard in browser extensions, and VirusTotal is now complying with the requirement.

The newly released VTZilla version 2.0 builds on the success of the previous version and brings along increased ease-of-use, more customization options, and transparency.

Once the updated browser extension has been installed, the VirusTotal icon appears in the Firefox Quantum’s toolbar, allowing quick access to various configuration options.

Clicking on the icon enables users to customize how files and URLs are sent to VirusTotal, as well as to choose a level of contribution to the security community they want.

“Users can then navigate as usual. When the extension detects a download it will show a bubble where you can see the upload progress and the links to file or URL reports,” VirusTotal’s Camilo Benito explains.

“These reports will help users to determine if the file or URL in use is safe, allowing them to complement their risk assessment of the resource,” Benito continues.

Previously, only the pertinent URL tied to the file download was scanned, and access to the file report was available only via the URL report and only if VirusTotal servers had been able to download the pertinent file.

VTZilla also allows users to send any other URL or hash to VirusTotal, and other features are only a right-click away.

VirusTotal is determined to improve the extension and add functionality to it, and is open to feedback and suggestions. The Google-owned service can now also make the extension compatible with other browsers that support the WebExtensions standard.

The extension revamp will soon be followed by VTZilla features that should allow users to further help the security industry fight malware. “Even non-techies will be able to contribute,” Benito says.

Related: VirusTotal Launches New Android Sandbox

Related: VirusTotal Launches Visualization Tool

Copyright 2010 Respective Author at Infosec Island]]>
PyRoMine Malware Sets Security Industry on Fire Thu, 03 May 2018 09:50:58 -0500 It’s happened once again...

Recent headlines heralded the latest in cryptomining hacks to leverage stolen NSA exploits, this time in the form of PyRoMine, Python-based malware that uses an NSA exploit to spread to Windows machines while also disabling security software and allowing the exfiltration of unencrypted data. By also configuring the Windows Remote Management Service, it leaves the machine susceptible to future attacks.

Despite all the investments in cyber protection and prevention technology, it seems that attackers’ best tool is nothing more than a variation on a previous exploit, because most security products simply can’t detect every variation of zero-day malware in time to prevent the ensuing damage.

Cryptomining Beats Out Ransomware

Ransomware was the threat that wreaked havoc across organizations for years and sent most IT security professionals into a panic at the mere mention of a new exploit hitting the headlines. Now, however, it seems that ransomware is taking a back seat to cryptominers. A recent article by Jon Martindale, titled “Cryptojacking is the new ransomware. Is that a good thing?”, puts it this way:

“In our history of malware feature, we looked at how malware tends to come in waves. While the latest and most dangerous in recent memory has been ransomware, it’s been pushed far from the top spot of common attacks in recent months by the advent of cryptominers, which look to force infected systems to mine cryptocurrency directly.”

The article goes further with this quote from a Senior E-Threat analyst on the expected growth of this type of threat:

“Since cybercriminals are always financially motivated, cryptojacking is yet another method for them to generate revenue,” said Liviu Arsene, senior E-Threat analyst at BitDefender. “Currently, it’s outpacing ransomware reports by a factor of 1 to 100, and these numbers will continue to increase for as long as virtual currencies remain popular and the market demands it.”

Variations on Old Hacks

Everything old is new again, or so the adage goes, and it seems to apply to cyber threats as well. Fortinet researchers spotted malware dubbed ‘PyRoMine’ that uses the ETERNALROMANCE exploit to spread to vulnerable Windows machines, according to an April 24 blog post.

“This malware is a real threat as it not only uses the machine for cryptocurrency mining, but it also opens the machine for possible future attacks since it starts RDP services and disables security services," the blog said. "FortiGuardLabs is expecting that commodity malware will continue to use the NSA exploits to accelerate its ability to target vulnerable systems and to earn more profit.”

PyRoMine isn't the first cryptocurrency miner to use previously leaked NSA exploits, but it is still a threat, as it leaves machines vulnerable to future attacks by starting RDP services and disabling security services.

The odds are high that we will see other variations on this NSA exploit before the year is up. Now is clearly the time to start evaluating technologies that take more preventative steps to protect your IT infrastructure.

About the author: Boris Vaynberg co-founded Solebit LABS Ltd. in 2014 and serves as its Chief Executive Officer. Mr. Vaynberg has more than a decade of experience in leading large-scale cyber- and network security projects in the civilian and military intelligence sectors.

Copyright 2010 Respective Author at Infosec Island]]>
GDPR Is Coming. Is Your Organization Ready? Tue, 01 May 2018 06:15:00 -0500 On May 25th of 2018, the General Data Protection Regulation (GDPR) goes into effect. This is a law passed in 2016 by the member states of the European Union that requires compliance with regard to how organizations store and process the personal data of individual residents of the EU. Now maybe you are thinking that this regulation does not apply to your organization because it is not based in the EU. Don’t stop reading just yet.

This regulation applies to any organization that offers goods or services to EU residents and/or processes the personal information of EU residents, regardless of whether the organization is based in the EU or not. And the law does not apply only to the huge multinational companies of the world. It applies to small businesses as well. For example, consider an e-commerce business that sells T-shirts online to people in the EU. Or perhaps an email marketing company that sends out periodic emails to EU citizens. Or even a message board website that allows users to create profiles and gathers personal information during the registration process. The GDPR would apply to all these businesses, no matter how big or small.

This regulation is the biggest change to the protection of individual personal data in over twenty years and is far-reaching in its scope. It is important to understand if and how it applies to your organization.

What Type Of Data Is Protected?

The GDPR is meant to protect the personal data and fundamental rights and freedoms of natural persons in the EU. It does this by requiring organizations to implement strict policies, procedures and technical controls when processing the personal data of EU citizens. The regulation defines the term “personal data” very broadly. According to the regulation, personal data means “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.” Examples of personal data would include name, email address, IP address, physical address, photos, gender, health information and national identification number.

The term processing is also defined very broadly. According to the GDPR, processing means “any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organization, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction.” Examples of processing would include simple storage of the data, sending out marketing emails, collecting personal data when a visitor places an order, processing a credit card transaction, and any other type of storage, processing or manipulation of personal data that occurs during the normal course of business.

Finally, the regulation applies to both the automated processing of data as well as the processing of data by non-automated means. In short, the regulation applies to both digital and non-digital forms of data. Examples of non-digital forms of data would include hard copies of contracts, health records, marketing information and any other type of medium containing the personal data of EU citizens.

Which Organizations Are Affected?

According to Article 3 of the GDPR, the regulation “applies to the processing of personal data in the context of the activities of an establishment of a controller or a processor in the Union, regardless of whether the processing takes place in the Union or not.” Furthermore, it applies “to the processing of personal data of data subjects who are in the Union by a controller or processor not established in the Union, where the processing activities are related to: 1) the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the Union; or 2) the monitoring of their behaviour as far as their behaviour takes place within the Union.” Finally, the regulation states that it “applies to the processing of personal data by a controller not established in the Union, but in a place where Member State law applies by virtue of public international law.”

So what does all this mean? First, if your organization collects personal data or behavioral information from someone who was residing in an EU country at the time the data was collected, your company is subject to the requirements of the GDPR, regardless of whether or not your organization is based in the EU, or even has a presence in the EU. Second, the law does not require that a financial transaction take place for the scope of the law to kick in. If an organization simply collects the personal data of EU persons, then the requirements of the GDPR apply to the organization, even if the organization is based outside the EU. In sum, if your organization sells or markets goods or services to EU countries, or if your organization collects the personal data of people living in the EU, then the GDPR applies to your organization regardless of whether the organization has a presence in the EU or not.

What Are the Requirements?

The overarching goal of the GDPR is the protection of the personal data of EU citizens. As such, the GDPR requires that organizations take measures to ensure that they are implementing policies and controls that will reduce the risk of potential data breaches and will also provide transparency to the data subjects. Below is a list of the most prominent provisions of the GDPR:

  • Lawful Basis for Processing – Before an organization can begin processing the personal data of EU citizens, it must first determine if it has a lawful basis to do so. The GDPR outlines six reasons for lawfully processing personal data such as legal obligations, contracts or vital interests. The most common lawful basis that most businesses will rely on is consent from the data subject. The manner for obtaining consent must be clear, concise and transparent. It also must require subjects to explicitly opt-in, not opt-in by default. It is extremely important for each organization to determine the basis on which it may lawfully process the personal data of the subjects.
  • Privacy and Security – Organizations that collect the personal data of EU citizens may only store and process data when it’s absolutely necessary. Data protection and privacy must be integrated into an organization’s data processing activities (privacy by design). Furthermore, organizations must provide protection against unauthorized or unlawful processing and against accidental loss, destruction or damage. This requires that appropriate technical and/or organizational measures are used, including a method to anonymize data so that it cannot be tied back to a specific individual (e.g. data encryption). Organizations must also perform a data protection impact assessment (DPIA) for certain types of processing that are likely to result in a high risk to individuals’ interests. Finally, depending on the scale of personal information an organization processes, a data protection officer (DPO) must be assigned within the organization to ensure compliance with the GDPR.
  • Individual Rights – Data subjects have a number of individual rights according to the GDPR. Most importantly, individuals have the right to be informed about the collection and use of their personal data. This includes informing them of the reason for processing their data, the retention policy for storing the data, and who it will be shared with. Organizations must provide an individual residing in the EU with access to the personal data gathered about them upon request. Data subjects have the right to request that their data be erased (known as the “right to be forgotten”). Organizations have one month to respond to such requests. Finally, organizations must provide a way for individuals to transmit or move data collected on them from one data collector or data processor to another.
  • Breach Notification – The GDPR requires organizations to report data breaches to the relevant supervisory authority within 72 hours of becoming aware of the breach. If the breach is likely to result in a high risk of adversely affecting individuals’ rights and freedoms, the organization must also inform those individuals of the breach “without undue delay”. As a result of the requirement, organizations will need to ensure that they have a robust breach detection, investigation and internal reporting procedure in place. Finally, organizations must keep a record of all data breaches regardless of whether or not notification of any particular breach is required.
  • Minors – Children are provided additional protections under the GDPR and organizations that collect the personal data of minors must take special care when doing so. When offering an online service directly to a child, only children aged 16 or over (a threshold that member states may lower to as low as 13) are able to provide their own consent. For younger children, an organization must obtain the consent of the child’s parent or legal guardian. Children merit specific protection when an organization uses their personal data for marketing purposes or creating personality or user profiles. Organizations must write clear privacy notices for children so that they are able to understand what will happen to their personal data, and what rights they have.
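As one concrete (and deliberately simplified) illustration of the "appropriate technical and/or organizational measures" mentioned above, personal identifiers can be pseudonymized with a keyed hash, so records remain linkable for processing but cannot be tied back to a person without the secret key. The key source here is purely hypothetical; in practice it would live in a secrets manager:

```python
import hashlib
import hmac

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a personal identifier (name, email, ID number) with a
    keyed HMAC-SHA256 token. The same input always yields the same
    token, so records stay linkable, but without the key the token
    cannot be reversed to recover the original identifier."""
    return hmac.new(secret_key, identifier.lower().encode("utf-8"),
                    hashlib.sha256).hexdigest()

key = b"example-key-kept-in-a-secrets-manager"   # hypothetical key
token_a = pseudonymize("Jane.Doe@example.com", key)
token_b = pseudonymize("jane.doe@example.com", key)
print(token_a == token_b)   # True: stable token per data subject
print("jane" in token_a)    # False: no personal data leaks into it
```

Note that pseudonymized data is still personal data under the GDPR (only truly anonymized data falls outside its scope), so a measure like this reduces risk rather than removing obligations, and the key itself must be stored and managed securely.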

What Are the Penalties for Noncompliance?

The fines associated with noncompliance with the GDPR can be quite substantial. The regulation has a two-tiered system for determining fines based on the severity of the infraction(s). Before assessing fines, the supervisory authority may take into account the nature, gravity and duration of the infringement. It may also determine whether an organization was willfully negligent. Cooperation with the supervisory authority may also be taken into account when assessing fines. Below are the guidelines stated in the GDPR with regard to the assessment of financial penalties for noncompliance:

  1. Infringements that may be subject to administrative fines of up to 10,000,000 EUR or 2% of the total worldwide annual turnover of the preceding financial year, whichever is higher:
    • Violations of the provisions regarding data security obligations and privacy-by-default measures that need to be taken to protect data from unauthorized access
    • Not having an assigned DPO or the DPO not fulfilling her obligations
    • Violations of the DPIA requirement
    • Violations of the requirement to conclude a processing agreement with all data processors that are engaged by an organization
    • Violations of the requirement to keep a record of the processing activities carried out
  2. Infringements that may be subject to administrative fines of up to 20,000,000 EUR or 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher:
    • Violations of the basic principles for processing personal data (e.g. lawful basis for processing)
    • Violations of provisions regarding a data subject’s rights such as the right to erasure, access to personal data and the right to receive information regarding the processing of personal data
    • Violation of the provisions regarding the transfer of personal data to third countries
    • Noncompliance with an order by a supervisory authority

In addition to the fines outlined above, each EU member state shall also have the right to implement its own fines with regards to noncompliance. Moreover, they may also implement criminal penalties for violations.

How Is the GDPR Enforced?

For those organizations that are based in the EU or who have a legal presence in the EU (e.g. a multinational corporation with an office in an EU member state), the GDPR will be enforced directly by the EU member states’ authorities and their court systems. For organizations that are not based in the EU and also do not have a physical presence in the EU, the GDPR requires them to appoint a “representative” who is located in the EU if the organization is actively doing business in the EU. Presumably this representative will allow the EU to enforce the regulation on such entities.

Finally, the GDPR can be enforced through international law. Written into the GDPR itself is a clause stating that any action against a company from outside the EU must be issued in accordance with international law. There has been long-term and increasing enforcement cooperation between the United States and EU data protection authorities. For example, the EU-U.S. Privacy Shield data sharing agreement puts systems in place for the EU to issue complaints and fines against U.S. companies. In sum, there are a variety of mechanisms in place for the EU to enforce the GDPR against organizations based outside the EU.

What to Do?

If you are an organization that falls under the scope of the GDPR, then it is in your best interest to comply with the regulation, even if you are not based in the EU and do not have a physical presence there. If you are already processing the data of EU citizens, or plan to in the future, making sure your organization is compliant is good business. Fines aside, residents of the EU will want to make sure that any company they do business with is in compliance. Moreover, the required privacy and security policies and controls will help reduce risk to your organization. There are also potential cost savings from reducing ROT (redundant, outdated or trivial) data in terms of storage and backup costs. Being compliant may also give you a business advantage over competitors who are not.

One of the things that will likely come out of this regulation is a GDPR certification. Businesses who obtain such a certification may be able to display a certification seal on their website and other marketing material which will provide confidence to potential customers. Finally, expect your business partners to start requiring GDPR compliance even if you are not directly impacted. GDPR compliance is here to stay. Given the current events around online privacy in the United States (e.g. Facebook data disclosure), it is not inconceivable that the U.S. could also pass a similar regulation to protect individual privacy. Embracing the GDPR will only help your organization in the long run.

About the Author: Mark Baldwin is the owner and principal consultant at Tectonic Security. He has nearly 20 years of experience in the information security field and holds numerous certifications including CISSP and CISM.

Copyright 2010 Respective Author at Infosec Island]]>