
For the following techniques:

1.6 Behavioral, Contextual ID and Biometrics & 7.4 User and Entity Behavior Analytics - focus monitoring/auditing on accounts that suddenly transfer 10GB of data when they usually transfer only 100MB/day, or on the employee who is asked, for that one time of the year, to log in on a weekend at an office they don't usually visit.

5.1 Data Flow Mapping - detect unexpected egress of data by defining ahead of time the volumes of data being transferred between systems (e.g. a 2AM backup transfers 100GB to systemX and between 9AM-5PM there is a usual transfer rate of 1MB/s, therefore a 100MB/s transfer rate at 1PM would raise an alert).
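The per-window threshold scheme described above could be sketched roughly like this (the system name, baseline figures and alert factor are all hypothetical, taken from the example):

```python
from datetime import time

# Hypothetical per-system baselines: (window start, window end, expected bytes/sec).
BASELINES = {
    "systemX": [
        (time(2, 0), time(3, 0), 100 * 1024**3 / 3600),  # 2AM backup: ~100GB over an hour
        (time(9, 0), time(17, 0), 1 * 1024**2),          # business hours: ~1MB/s
    ],
}

def is_anomalous(system: str, now: time, rate_bps: float, factor: float = 10.0) -> bool:
    """Alert when the observed rate exceeds the baseline for the current window by `factor`."""
    for start, end, expected in BASELINES.get(system, []):
        if start <= now < end:
            return rate_bps > expected * factor
    # No baseline defined for this window: treat any non-trivial transfer as anomalous.
    return rate_bps > 1 * 1024**2
```

With these numbers, a 100MB/s transfer at 1PM trips the business-hours baseline, while the nightly backup at 2AM does not.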

How well do these techniques work in practice, particularly in a huge organisation? I would have thought the number of false positives would be very high and the people monitoring the anomalous behaviour wouldn't have much or any context to know whether something is legitimate or not.

A more feasible approach may be to require system owners installing a new system to specify rate limits (per time of day, per API call and/or per user), and to lodge, as part of a change request, whether those limits need to be temporarily increased to cater for a one-off or rare event such as a major system upgrade. But given that some of the other techniques listed indicate a lack of awareness of what software is installed and in use, it seems unlikely that specification of rate limits would happen any time soon.
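The owner-declared limits plus change-request overrides suggested above might look something like this (all names and figures are hypothetical illustrations):

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical declaration a system owner might lodge when installing a new system.
@dataclass
class RateLimit:
    scope: str                 # e.g. "per_user", "per_api_call", "per_system"
    limit_mb_per_hour: float

# A temporary increase tied to an authorised change request.
@dataclass
class TemporaryOverride:
    change_request: str        # e.g. "CHG-1234: major system upgrade"
    starts: datetime
    ends: datetime
    limit_mb_per_hour: float

def effective_limit(base: RateLimit, overrides: list[TemporaryOverride],
                    now: datetime) -> float:
    """The limit in force right now: a lodged override wins during its window."""
    for o in overrides:
        if o.starts <= now < o.ends:
            return o.limit_mb_per_hour
    return base.limit_mb_per_hour
```

Anything above the effective limit would then be alert-worthy by definition, with no behavioural modelling required.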



> How well do these techniques work in practice, particularly in a huge organisation?

At $dayjob I have access to the Azure Active Directory "unusual user activity" security alerts in an environment with about 15K staff.

In my experience, the reports are accurate, in the sense that they get triggered as advertised by misuse of IT resources. However, 95% of the time it's just staff being "naughty" rather than actual hackers.

For example, the "impossible travel" one gets triggered regularly. About 80% of the time it's because someone forgot to turn off the VPN they use to watch foreign Netflix. The other 20% of the time is because they were sharing credentials, which is against every IT security policy ever.
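The core idea behind an "impossible travel" check can be sketched as follows (a hypothetical illustration, not how Azure AD actually implements it): flag any pair of logins whose implied speed between geolocations exceeds what a commercial flight could manage.

```python
import math
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(login_a, login_b, max_speed_kmh=900.0):
    """Each login is (timestamp, lat, lon); flag if the implied speed beats a jet."""
    (t1, la1, lo1), (t2, la2, lo2) = sorted([login_a, login_b])
    hours = (t2 - t1).total_seconds() / 3600
    if hours == 0:
        return (la1, lo1) != (la2, lo2)
    return haversine_km(la1, lo1, la2, lo2) / hours > max_speed_kmh
```

A VPN exit node in another country makes the second login appear thousands of kilometres away, which is exactly why the Netflix-VPN case trips this check so often.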

Even just the use of a VPN by itself is a red flag. VPN providers are notoriously untrustworthy, many of them teetering on the edge of being outright malware. Certainly they all collect far too much metadata and sell it to the highest bidder. No such VPN product has any legitimate use on a corporate device.

Not to mention that corporate traffic is now looping out of the country into another country and back for no good reason...


>they were sharing credentials, which is against every IT security policy ever

You’d like to think so, but...


My experience is that no matter how much training one has taken in that regard, any IT slowness in providing credentials on time for critical project activities will eventually result in that behaviour.


This might sound silly (as I have no experience in the field), but if this problem is due to the "largeness" of an organization (say "large" being N = 300, for the sake of argument), then presumably it's because certain false positives become more frequent as they have more employees than N, which causes the security monitors to tune out similar behavior, correct?

So instead of hiring the same constant number of security monitors/auditors independently of the organization size, can't you "solve" this problem by hiring one per every N employees, and having them monitor solely that group?

It sounds to me like either this solution should work (a "large" organization should be able to afford this!), or the problem isn't really related to the size of the organization.


Org size can cause some amount of fatigue, sure. But reliable security engineers are few and far between. Not necessarily scalable to org size.

Additionally, security is one of those things it's hard to get C-suite execs to pay attention to and spend money on. When it's working, you don't notice it. And if your 5 person security team has worked fine for the past 5 years as the org has grown, why should you suddenly scale it up to 10? Everything is fine! That is...until it isn't. But rarely do people think ahead like this.


My thoughts on complexity are more in relation to large organisations having more areas of business, each with their own set of software in use. For a multinational company which frequently acquires other businesses, perhaps they end up with 10 of everything--10 HR systems, 10 finance systems, etc. And then on top of that a number of other systems for generating reports from the 10 finance systems, polling and searching data from the 10 logistics systems, etc. The DOD would be a nightmare of complexity and most likely the global leader in complexity.

The Defense Logistics Agency in 2018 reported 264 applications in use, down from 1,200 in 2013[1]. DLA represents only about 0.9% of the total DOD workforce[2], and it appears the reduction could be due to consolidation of applications, making application counts look better without doing much to reduce true complexity.

Given that many systems have a single subject matter expert (or sometimes none), how is a cyber security analyst responsible for hundreds of applications going to reason about an event raised by a system with a name as vague as "DLA Action Item Tickler Report"? Was Bob OK to send the XY12 report via e-mail to Oscar? Who knows? It's likely no one has asked that type of question before, and the only person who knows enough about the system to answer it is the person who caused the event to be generated.

Some organisations are quite simple (despite having similar finances and employee counts) because they largely consist of simple repeatable processes performed by thousands of employees using a single software product. Some organisations (particularly the DOD) are very complex because every employee is doing something unique across hundreds of different software products.

[1] https://www.dla.mil/About-DLA/News/News-Article-View/Article...

[2] https://en.wikipedia.org/wiki/United_States_Department_of_De...


One aspect that changes with the scale is the distance (both in org-chart and physically) between the monitor and the monitored - in larger organizations the monitor has less useful context (and less ability to get it) to evaluate the circumstances of some alert.


Ideally in a system like this, you don't just look at a single user and a single point (e.g. 100x data transfer in a day), you build out scoring based on a multitude of factors and across related agents/assets, and hopefully you have enough training data to account for irregular or seasonal occurrences.

For one part of the org, occasionally doing 100x data transfers may not be out of the ordinary, while for others it may be exceptional. But it might be anomalous if it's a 100x data transfer that is also hitting services no one on your team has ever hit (perhaps indicating scanning/scraping and data exfiltration, but you don't need to explicitly specify that as an alert).
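Combining factors like this is often done with something along the lines of per-feature z-scores against a team baseline, summed into one score (a minimal sketch with made-up feature names and history; real UEBA products use far richer models):

```python
import statistics

def zscore(value, history):
    """How many standard deviations `value` sits above the team's historical mean."""
    mu = statistics.mean(history)
    sd = statistics.pstdev(history) or 1.0  # avoid division by zero on flat history
    return (value - mu) / sd

def anomaly_score(observation, baselines, weights=None):
    """Sum positive per-feature deviations; only scores high when features deviate upward."""
    weights = weights or {f: 1.0 for f in observation}
    return sum(weights[f] * max(0.0, zscore(observation[f], baselines[f]))
               for f in observation)

# Hypothetical baselines for one team.
baselines = {
    "bytes_out_gb": [0.1, 0.12, 0.09, 0.11, 0.1],  # typical daily egress
    "new_services_hit": [0, 1, 0, 0, 1],           # services the team has never used
}
```

A 100x transfer alone raises one feature's contribution; a 100x transfer that also hits novel services raises both, pushing the combined score well past either signal on its own.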


Let's remember the level of classification here though - in a usual org an occasional large data transfer might not be worth investigating, but in some contexts within the DoD it's certainly worth investigating.


And although size of transfer may be a valid indicator, not all valuable data is large in size.


I work as an Information Security Officer in a corporation with about 100,000 employees. The reasons outlined in this post are what make these capabilities effective in our org as well.


A big problem is insiders selling information or ransoming it under the guise of a breach. Employees are the single greatest threat to any organization, so behavior analytics is starting to become really big.

As I tell people, "We don't care if you browse Reddit, we only care if you start doing things an employee shouldn't".

But to answer your questions: we would just ingest those alerts into Splunk, build a KB on how to handle the alerts when they trigger, and then begin the process of filtering out the noise. The SOC analysts who work these alerts will get numb to them but still pick out the unusual ones to investigate.
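The KB-driven noise filtering described above amounts to a suppression list applied before an analyst ever sees the alert. A toy sketch (rule names, users and the KB entries are all hypothetical; in practice this would live as lookups inside the SIEM):

```python
# Hypothetical knowledge base of known-benign alert patterns, built up over time
# as the SOC triages recurring false positives.
KNOWN_BENIGN = [
    {"rule": "impossible_travel", "reason": "user on approved VPN", "users": {"alice"}},
    {"rule": "bulk_transfer", "reason": "nightly backup service acct", "users": {"svc-backup"}},
]

def triage(alerts):
    """Drop alerts matching a KB entry; everything else goes to an analyst."""
    to_investigate = []
    for alert in alerts:
        suppressed = any(
            alert["rule"] == entry["rule"] and alert["user"] in entry["users"]
            for entry in KNOWN_BENIGN
        )
        if not suppressed:
            to_investigate.append(alert)
    return to_investigate
```

The trade-off is exactly the one described: each KB entry reduces fatigue but also widens the blind spot if an attacker happens to match a suppressed pattern.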


I'm curious are there any open source "behavior analytics" projects that are gaining traction?


No, you need too much data and it’s too specific to the environment.


> just ingest those alerts into Splunk

Aah yes, Splunk....

Good tool, but realistically only viable for those working for a corporate-sized employer with corporate-sized pockets filled with $$$$$$$.

Most people can't afford to simply dump it all into Splunk and (if they are using Splunk at all) have to pre-filter first. Which kind of defeats the point of Splunk if they're already doing the hard work outside, and so they might as well use some cheaper (F)OSS tool.


Working on a startup to deliver basically 75% of the features of Splunk at 50% of the cost: https://log-store.com/

Right now it's 100% free, because I'm just looking for user feedback. I think/hope there is an opening in the market for folks looking for an easy-to-use but powerful tool like Splunk who can't afford its hefty price tag. All feedback welcomed!

Also created an open-source ETL tool: https://log-ship.com


That sounds interesting, I'll certainly give it a go when I get a chance in the near future!


You can do the same type of work with Spark, which is free.


That is something right there. I am intrigued by this.


> How well do these techniques work in practice, particularly in a huge organisation?

Like this:

I log into system foo-bar-baz, which one person accesses once or twice a year. Maybe it's Christmas setup, maybe it's daylight savings time, maybe it's a new release of Ubuntu, whatever. I have all the relevant credentials, I am responsible for foo-bar-baz, and I have an authorised change request.

Three days later, the security team sends me a slack message, asking if I accessed system foo-bar-baz.

I tell them yes.


Because they want to know if someone stole your creds.



