Oktane19: How USA TODAY NETWORK Uses Monitoring and Reporting to Enhance Overall Security
Transcript
Maggie Adams: To introduce myself, my name is Maggie Adams. I'm on our business development team here at Okta, and I help manage our Security Partner Ecosystem. And you all are here today to listen and hear more about USA TODAY NETWORK, and how they use identity and security analytics within their environment to better monitor it.
Maggie Adams: And I'm really here to set the stage for how identity plays into security. So, here at Okta we talk about identity-driven security. You may have even heard it on the main stage today. What do we mean when we say that? We mean a couple things. We mean eliminating passwords and centralizing identity and access, protecting against credential based attacks through strong authentication, reducing the attack surface area with automated provisioning, and then last but not least, and the focus of today's session, enabling the visibility and response.
Maggie Adams: So, within our product today, we have basic reporting capabilities. But we also have really advanced and rich APIs that help deliver the identity context that Okta has, and pass that along. As well as APIs that help enforce response actions against a user. So through these APIs, we have very robust integrations with security analytics tools to deliver that identity-driven security.
Maggie Adams: So, because you're here in the room, you know that data breaches make headlines. And thanks to our friends at USA TODAY NETWORK, we know that often credentials are the cause of those data breaches. Starting first with Uber, a company that likes to be in the news: they very recently had to pay hundreds of millions of dollars in fines due to a data breach. Why? Well, their AWS credentials were posted publicly on GitHub. Going a couple years back, eBay, another tech company, had to have the majority of their users change their passwords because just a couple of their employees were breached. A couple of their accounts.
Maggie Adams: And it's not just the tech industry. Even universities have to think about preventing credential-based attacks. And this time through phishing. And, it's not just the headlines actually. If we look at industry reports, they show that three out of the top five successful attack types all involve identity. Hackers, they're smart. They're going after the weakest link in the security chain, they're going after the end user. Why find zero day network vulnerabilities when you can just use a little bit of social engineering and just log in?
Maggie Adams: So clearly, identity is something that needs to be protected. It's something that needs to be secured. But here at Okta we think that identity can be a key security advantage within your organization. So here's how.
Maggie Adams: Within organizations, there's a lot of disparate infrastructure, disparate technology, that security teams have to manage. This is everything from firewalls, user identities, servers, routers, databases, endpoints, you name it. So all of this disparate technology, security teams have to stay on top of. Right? And, in order to enforce visibility, detection, and response, they use security analytics tools to do just that.
Maggie Adams: So these tools aggregate logs and data feeds from across an organization's environment, centralizing that into one place, where they apply a lot of sophisticated analytics and machine learning to gain valuable insights. And those insights can take the form of alerts when there are potentially some issues that need to be resolved.
Maggie Adams: So, often these tools show up in the form of dashboards, much like this one. This is actually a Sumo Logic dashboard, and you'll hear a lot more about some cool custom dashboards that the USA TODAY team has built. And these dashboards help security teams perform searches, look up past activity, and then even respond to threats. So, these are very critical tools within security environments. And really, I want to explain how Okta fits in.
Maggie Adams: So, as we know, and as I've hinted here, identity is a key part of your security stack. Okta is the common denominator across all of your users. So whether that be your employees and your partners, and your customers, all of those resources that those people are accessing. Whether it be applications, cloud applications. As well as all of those devices. Mobile phones, laptops, et cetera. Identity is the denominator here.
Maggie Adams: And Okta, being the closest to the user, we have all of that rich identity context on that user. We know who they are, we know the applications that they have access to, as well as all of their authentication activity. And again, through those APIs, we can pass that along to whatever existing security analytics tools you have in your environment. They can aggregate this with other feeds across your environment. And then, based off of whatever actions and alerts that they detect, they can then use Okta as the control point against a credential-based attack. So this is enforcing remediation actions like performing step up authentication, killing sessions, even forcing new credentials. So Okta here is really the control point, and working together with security analytics solutions helps close the security loop. And really deliver identity-driven security.
Maggie Adams: So, through the power of the Okta Integration Network, we work with all of the leading security analytics vendors. Everyone from AlienVault to Sumo Logic, Splunk, everyone in between. And, we have a lot of great customers that are leveraging and taking advantage of these integrations. One of, I think, the most interesting stories is how USA TODAY NETWORK is doing that. So with that, I'll introduce the team.
David Snyder: So we're here to talk about monitoring your Okta environment, and how we at USA TODAY NETWORK have done this to enhance our overall security.
David Snyder: To give you an idea about who we are, the USA TODAY NETWORK is not just the iconic USA TODAY. We're also more than a hundred local media organizations, reaching over 125 million people every month wherever they are. Mobile, digital, print, and we're consistently ranked in the top four of Comscore's news and information category.
David Snyder: Specifically here today are myself, David Snyder of our identity management team, my colleagues Mike Shanahan, also of the identity management team, and Larry Gillam of our security team.
David Snyder: To give you an idea about our environment: we have approximately 16 thousand users and over 350 apps in Okta, with more every week, representing 136 news and media businesses and 174 sites in seven countries. Basically, wherever the news is. Within our organization, legacy has kind of a near and dear meaning. Our parent company was established in 1923, or 1906, or 1982, it all depends on how they're writing the trivia question as to what the real answer is.
David Snyder: So, to talk a little bit about why we're protecting, and what our plan is. To give you an idea of our value, how we perceive our value: there's a test for legal ethics that says you should never do anything that you wouldn't want printed on the front page of a major newspaper or news outlet like USA TODAY. The core of this is the idea of public trust in news and news reporting.
David Snyder: As guardians of the First Amendment, USA TODAY is in the bullseye of media bias. We're basically in the middle, between left and right. And we ride on a very critical edge between news reporting and fair analysis. This is a position we hold and guard, and work very hard to maintain.
David Snyder: Why our protection matters. News is public information, but how we present it is something that we need to protect. The front page headlines of USA TODAY are our second most valuable asset. We're actively attacked routinely by people who want to exploit public trust, either presenting their own version of a headline, or just reducing the trust in what we do. I'm not saying that people would travel through time to affect our headlines, but it makes for a good story.
David Snyder: Our most valuable asset is our staff, and obviously their identities; we do utilize Okta. Our reporters and our editors especially are targeted routinely. And then, associated with this, are also our financial resources. And if I have to explain to you why we protect those, you're in the wrong room.
David Snyder: Challenges. It's not always a challenge, it's a little bit of what we're actually striving for. We have a global and diverse work force. As I said, we're in a bunch of countries, we're in a bunch of cities. We don't just do news gathering. We also do advertising, community involvement is very central to our brand and what we do, and we do everything that is necessary to deliver those services to the communities we serve.
David Snyder: Bring Your Own Device, Internet of Things, and various other three letter acronyms have shown us that device-based policies are not some silver bullet that we can use to protect what we do. And of course, we consider that incidents happen. It's important to us to improve our incident response life cycle, and reduce workforce disruptions and outages to our customers ... Well, zero outages to our customers of course.
David Snyder: Our general strategy, when it comes to identity, is to maximize the use of Okta. Okta's a very good product for a zero trust methodology, regarding identity as the new frontier. We require a hundred percent MFA for our workforce. We try to bring in as many apps as possible; as I said, we have 350 as of last week and counting. We secure those apps, utilizing single sign on to the maximum extent we can. We eliminate local and shared accounts. We do have certain risk-based authentication rules that we manage, where we demand additional authenticators, on top of our one hundred percent MFA policy, when we're outside of our accepted risks. Basically, bring it into Okta, have good policies, enforce good rules, say no to people who say, "Oh, but I have to."
David Snyder: One of the ways that we accomplish this is, as I've described, two of us are from the identity management team, and one from our security team. These are separate teams, but we have a common goal. So, it is important to collaborate early and often when we're going down the path of writing policies. Try to overcome our silos. Our identity management is a security operation. Identity management is the first, last, and only line of defense in a lot of situations. That whole thing about compromised credentials in the opening slides, anything that a compromised credential can do, the bad actor can do.
David Snyder: Our security team works very closely with identity, asking us routinely how we can support their efforts and how we can better make their efforts work. They sort of have to because we control whether or not they can log in to their email.
David Snyder: For our Okta methodology, while we are maximizing the use of Okta, as I said, we write our policies with a security-first mindset. That third bullet on that list of how credentials are used in compromise was privilege elevation. When you write your Okta policies, you're giving authorization. If you write your policy in a way that allows more privileges than the identity merits, you're just waiting to let somebody in.
David Snyder: As we talked about, Okta's zero trust model treats identity as that new frontier. That's going to be covered a lot, I'm sure, during Oktane, as it has been before. Consider the idea of your Okta deployment as a jewelry store. Your policies allow identified users to come in and do certain things. They can look around, try on various pieces of jewelry, check out the toussaints for whatever ball they're going to. The apps are the jewelry. Your cases are locked by policies, but you let people into them according to your policy.
David Snyder: So those sign on rules, they all work perfectly all the time, right? You don't have to check anything, you don't have to know. No, of course not. The policies might be badly written, by yourselves, honestly. And so, you have to monitor. You have to look at what's going on in your environment. Monitoring in security means doing things about what you see, but you also have to collect that data to review it.
David Snyder: Our strategy for monitoring is to detect bad actions and collect forensics about what they did so that we can respond. But we also have a two-pronged approach; we're not just doing event-based monitoring. We're also looking at our environment in a state-based way. We're collecting metrics so that we know what a normal environment looks like, and we know how much it changes. This also allows us to do good planning, so that we can see what will happen if we make a change, and have a good idea of who we're going to affect and how we're going to affect them. And of course, all that combined gives us a really good posture to react to either bad policies, when we find them, or bad actors, when we detect them, so that we can fix it.
David Snyder: Our friends at IBM did a survey last year. The result indicates that not only are data breaches expensive, they persist for a long time: a mean of 197 days to detect a data breach. That's a failure in monitoring. And then of course, the mean time to contain the breach being 69 days, to have it be that long suggests a failure of understanding your environment and being able to react and modify quickly.
David Snyder: When it came to product selection, there are a lot of products that do log aggregation. Sumo Logic is not the only way to do this. Other products do work; we have used some of these other products in the past. We recommend that you do your product selection based on what matters to you as a business or an agency. Consider the function of the product over the brand name. We are going to be talking about Sumo Logic because that is what we're using for our log aggregation right now, and we do have some really cool dashboards that come with that. We're also doing API reporting, using tools that we've developed in-house.
David Snyder: So to talk about how we did it, with our Sumo Logic integration, I'm going to hand this over to my colleague Mike Shanahan to talk about log collection analysis with Sumo Logic integration.
Mike Shanahan: Cool. Thanks, David.
Mike Shanahan: So when monitoring your environment, Sumo Logic's going to give you an event-based view. It shows you who did what and when. API-based reporting's going to give you a state-based view. If, for instance, you want to know how many of your users are using Okta Verify for MFA, you use an API-based report. We're going to talk about both approaches.
Mike Shanahan: In this section of the talk, we're going to talk about how to get the data into Sumo Logic, and what the default installation gives you before you do any custom development. Later we'll talk about custom development.
Mike Shanahan: In order to get your logs into Sumo Logic, you'll need an intermediate server. The intermediate server logs into your Okta with an API token, and it ships events into Sumo Logic. Your API token's stored on this internal server, it's not stored up in the cloud.
Mike Shanahan: In order to get up and running, there are about four steps that are required. These are, of course, for Sumo Logic, but if you're using another product, the steps will be basically similar. First, create a read-only API token in Okta. The best practice is to set up a separate account for each token. This lets you scope permissions and monitor accordingly. If an admin leaves the company and their account is deleted, all the tokens they issued break, so all the apps that were relying on them will break. So, use separate accounts.
Mike Shanahan: Next, you download and install the SumoJanus packages from Sumo Logic. This is where you put your Okta org name and the API token. The third step is to set up a collector. The collector is the part that sends the data to Sumo from the intermediate server. You'll need an API token from your Sumo Logic admins. And then fourth, you set up a script source. The script source controls how often Okta gets polled for events, and what index in Sumo they then get put into.
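For orientation, the sketch below shows, in minimal Python, roughly what that intermediate collection pipeline does: poll the Okta System Log API with a read-only token and forward the events to a Sumo Logic hosted HTTP source. The SumoJanus packages handle this for you in practice; the org URL, token, and collector endpoint here are placeholders, and pagination and error handling are omitted.

```python
import json
import time
import requests

# Placeholders -- substitute your own org URL, read-only token, and collector endpoint.
OKTA_ORG = "https://your-org.okta.com"
OKTA_TOKEN = "<read-only API token>"
SUMO_HTTP_SOURCE = "https://collectors.sumologic.com/receiver/v1/http/<unique-code>"

def poll_okta_events(since_iso):
    """Pull Okta System Log events published after `since_iso` (an ISO 8601 timestamp)."""
    resp = requests.get(
        f"{OKTA_ORG}/api/v1/logs",
        headers={"Authorization": f"SSWS {OKTA_TOKEN}", "Accept": "application/json"},
        params={"since": since_iso, "limit": 100},
    )
    resp.raise_for_status()
    return resp.json()

def ship_to_sumo(events):
    """Forward events, one JSON object per line, to a Sumo Logic hosted HTTP source."""
    if events:
        body = "\n".join(json.dumps(e) for e in events)
        requests.post(SUMO_HTTP_SOURCE, data=body).raise_for_status()

if __name__ == "__main__":
    watermark = "2019-04-01T00:00:00Z"
    while True:
        events = poll_okta_events(watermark)
        ship_to_sumo(events)
        if events:
            watermark = events[-1]["published"]  # advance past the newest event we shipped
        time.sleep(60)  # the script source controls the real polling interval
```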
Mike Shanahan: So, once installed and events are flowing in, you can add some of the default dashboards from the Okta app in the app catalog within Sumo Logic. These dashboards show you the capabilities of Sumo Logic, but they're likely not really going to meet your needs too well. They're most useful as a starting point for custom development. And for each widget in the dashboards, you can click the "Open in Search" option and view the source for how the data was queried and presented.
Mike Shanahan: So we're going to go through a couple of the default dashboards pretty quickly, and see kind of what you get out of the box. So, the administrator actions dashboard lets you see what your admins are up to. From it you have panels showing logins to the admin console, creation of user accounts, and creation and deletion of applications. The user activity dashboard's going to give you an overview of what's happening with your users. The most useful things to watch here are password resets, password updates, and user lockouts. The geotag heat map is pretty useful for quickly seeing unusual logins as well.
Mike Shanahan: The failed login dashboard is useful for detecting attempts by unauthorized users. Again, this has geolocation data, which is valuable if you know where your users are coming from. The app login panel shows which applications are getting the bad login attempts. In my experience, top ten lists tend not to be the most useful way to present data, but in this case, knowing which applications are getting hit is pretty helpful.
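If you want to reproduce that "which apps are getting bad login attempts" view outside the canned dashboard, the underlying aggregation is simple. The rough Python sketch below groups failed events by target application; the field names follow the Okta System Log schema (outcome.result, target type AppInstance), but treat the exact event filtering as an assumption to verify against your own logs.

```python
from collections import Counter

def failed_logins_by_app(events, top_n=10):
    """Count failed authentication attempts per target application.

    `events` is a list of Okta System Log entries already parsed into dicts.
    Any event whose outcome.result is FAILURE and whose target list includes
    an AppInstance is counted against that application.
    """
    counts = Counter()
    for e in events:
        if e.get("outcome", {}).get("result") != "FAILURE":
            continue
        for target in e.get("target") or []:
            if target.get("type") == "AppInstance":
                counts[target.get("displayName", "unknown app")] += 1
    return counts.most_common(top_n)  # the "top ten" style panel
```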
Mike Shanahan: The application access dashboard gives you insight into what applications your users are using. It also has information on failed access to apps. This dashboard's a good event-based view of what's happening. However, API-based reporting will give you a better view of your applications for purposes of monitoring licensed accounts and knowing if the right people are assigned to the application. Those are really the things you're going to get asked about more.
Mike Shanahan: There's a couple of other dashboards there by default. But now I'm going to turn over to Larry Gillam on the security team, who's going to talk about the incident response dashboard he built for them.
Larry Gillam: Thanks.
Mike Shanahan: Cool.
Larry Gillam: Hello everyone. Today I'd like to share one of the custom dashboards that we built at USA TODAY NETWORK. It's our incident response account activity dashboard, or IRAD for short. Now, what originally caused us to need to build this dashboard was an incident that we investigated related to a compromised email account. Something that we hadn't seen since implementing MFA throughout a hundred percent of our organization. But one of those sneaky, smart bad guys figured out a way to bypass our MFA policy by utilizing email clients that don't support modern authentication. One area where you can still see email clients like this in wide use today is on many mobile devices, although mobile device vendors are beginning to improve in this area by updating their default mail clients.
Larry Gillam: So, as you can see, we found ourselves in a position where we needed to quickly respond to a new threat. We needed to improve upon our detection capabilities to allow for a faster response, as well as minimizing our analysis times to provide quicker containment. We identified two core pieces of functionality that we thought were required in order to achieve this. The first being the ability to coalesce multiple logs into a single data source, or a single data stream. And the second being robust data filtering capabilities, so we could drill down into that stream based off of IP addresses, account names, event results, or any combination of those.
Larry Gillam: So here you can see our IRAD dashboard. We used a minimalistic design approach to reduce development time, and I was able to build this in roughly an hour. But let's drill down a little deeper into some of the components.
Larry Gillam: Here you can see our account activity over time component. Now this represents the total number of events from a certain log source at a specific point in time. Where I found the most value using this is in conjunction with the data filtering capabilities when I drill down into a specific account, or when I'm searching for a specific event across all accounts, like maybe failed MFA attempts.
Larry Gillam: And here we have our IP address metrics component. Now this is really one of my favorite components because of all the time it's saved me during analysis. We create a distinct list of the IP addresses that we've witnessed, the number of times we've witnessed them, their geolocation, and we also append additional metadata, which is the calculation of that first and last seen time. And that's really where the majority of the value in this component comes from, that first and last seen time. If we're able to determine that credentials were harvested at noon, and we review the dashboard and we can see that all of the IP addresses associated with that account were in use several hours, several days, several weeks before that time, then with a reasonably high level of certainty, we can deduce that it's not an IP address being used by a malicious actor.
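That first-and-last-seen aggregation is easy to reproduce outside the dashboard as well. The sketch below is a rough Python equivalent of what the widget computes, assuming Okta System Log fields (client.ipAddress, client.geographicalContext, published); events from other log sources would need their own extraction.

```python
from collections import defaultdict

def ip_metrics(events):
    """Distinct source IPs with hit count, geolocation, and first/last seen timestamps.

    Assumes each event is an Okta System Log entry carrying client.ipAddress,
    client.geographicalContext, and a `published` ISO 8601 timestamp.
    """
    metrics = defaultdict(lambda: {"count": 0, "first_seen": None, "last_seen": None, "geo": None})
    for e in events:
        client = e.get("client", {})
        ip = client.get("ipAddress")
        if not ip:
            continue
        ts = e["published"]
        m = metrics[ip]
        m["count"] += 1
        m["first_seen"] = ts if m["first_seen"] is None else min(m["first_seen"], ts)
        m["last_seen"] = ts if m["last_seen"] is None else max(m["last_seen"], ts)
        m["geo"] = m["geo"] or client.get("geographicalContext")
    return dict(metrics)
```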
Larry Gillam: And finally, we have our account activity log events component. Now this really represents that single data stream concept that I was talking about earlier. And the way that we achieved this was by reviewing the different log sources, and determining the relationships between some of the data. All of the log sources have IP addresses, account names, and geolocations, but the fields within those different log sources use different names. So we created a new, unified list of field names, and we mapped the current fields from each of these log sources to that new, unified list. Then when we return our data set and sort it chronologically, it provides this coalesced effect.
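The field unification itself is essentially a mapping exercise. Here is a minimal sketch of the idea, assuming two sources: Okta System Log events and a hypothetical second source whose field names stand in for whatever your mail or firewall logs actually use.

```python
# Map source-specific field names onto one unified schema so events from
# different log sources can be merged into a single chronological stream.
FIELD_MAP = {
    "okta": {"ip": "client.ipAddress", "account": "actor.alternateId",
             "time": "published", "result": "outcome.result"},
    # Hypothetical second source -- adjust to your own mail or firewall log fields.
    "mail": {"ip": "ClientIP", "account": "UserId",
             "time": "CreationTime", "result": "ResultStatus"},
}

def get_path(event, dotted):
    """Fetch a possibly nested field like 'client.ipAddress' from a parsed event."""
    value = event
    for key in dotted.split("."):
        value = value.get(key, {}) if isinstance(value, dict) else {}
    return value or None

def unify(source, event):
    """Translate one event into the unified field names, tagging its origin."""
    row = {name: get_path(event, raw) for name, raw in FIELD_MAP[source].items()}
    row["source"] = source
    return row

def coalesce(streams):
    """streams: dict of source name -> list of parsed events. Returns one time-sorted stream."""
    merged = [unify(src, e) for src, events in streams.items() for e in events]
    return sorted(merged, key=lambda row: row["time"] or "")
```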
Larry Gillam: So, what are some of the benefits to building your own custom dashboards? I think one of the first benefits that I noticed was just getting an overall better understanding of all the data that we were aggregating into our Sumo Logic platform. I began to see information within the data that I didn't previously know was there. As well as relationships between some of the different log sources. And these two things, in combination, started to give me new ideas about better ways that we might be able to utilize this data. Particularly from a security context.
Larry Gillam: One of the other benefits is just the ability to combine and utilize multiple log sources in the same dashboard. A lot of the default dashboards that you'll see, that are developed by vendors, focus on one specific log source. And the reason for this is because they really have no way to know which log sources you're aggregating. They don't know if you're an Office 365 shop, or Google G Suite. They don't know if you use Cisco hardware or Palo Alto. And even if they did, they would have to build different dashboards for each of those scenarios.
Larry Gillam: But you, on the other hand, are very aware of the different log sources that you're aggregating. And by adding additional log sources into your dashboards, you can provide additional detail and context, which a lot of times in the security arena can equate to faster analysis with fewer false positives. But probably one of the biggest benefits is being able to focus on analytics and data visualizations that are most relevant to your organization. All of our organizations differ in the geography of our attack surface, or in where our most critical assets reside, whether that's in the cloud or elsewhere. And it's these types of differences that can provide unique challenges, challenges that are sometimes best solved by custom solutions.
Larry Gillam: So what are some of the lessons I've learned? I think one of the lessons I learned was, whenever I start a new dashboarding project, to begin by defining my objective so I can maintain a clear focus on what I need to accomplish. I've noticed when I do that, the purpose of my dashboard is clearer and more obvious to my colleagues, which has generally led to wider adoption and a greater appreciation. But when I fail to do that, a lot of times my dashboards become data-driven, and can present needless information, or an overabundance of what I like to refer to as eye candy. And, although they look awesome when you walk past my desk and see them up on one of my monitors, they're rarely used by anyone other than myself.
Larry Gillam: But, probably one of the bigger lessons I've learned is to always search for opportunities to improve the value of your dashboard through data enrichment. Now, this can be just adding an additional log source, or appending additional meta data, like we did in our IP address metrics widget. But you don't have to just stop there. A lot of the log aggregation platforms that I've worked with have the ability to map to external data sources. And a great way to use this capability is to pull in external open source, or commercial, third party threat intelligence. Or, even better, some of the threat intelligence that you've been able to build within your own organization.
Larry Gillam: So to summarize, we identified a hole in one of our security policies. Even though we're a hundred percent MFA, it was possible to bypass that by using email clients and legacy authentication. Using our dashboard, we were able to provide proof of exploitability and show, in a very clear fashion, step by step, the actions of a malicious actor. We disseminated this information upstream to our management team, who had all the information they required to be able to approve a sweeping policy change enforcing that all email clients utilize modern authentication, thereby plugging the hole. And we were able to accomplish this, and measure our detection time in hours, not weeks, and our containment time in days, not months.
Larry Gillam: And with that, I will hand it back over to my colleague, Dave. Thanks.
David Snyder: So, pretty good information about the dashboards and log aggregation. But I also said we were going to talk about state-based monitoring and the Okta API, and how we use it for metrics, planning, and a sense of scale. The requirements for using the Okta API for routine monitoring are similar to log aggregation. You have to have an API token; read-only administrator is sufficient. We recommend you put this on a service account whose only purpose is this kind of monitoring. As an Okta best practice, you want to have separate service accounts with their own API tokens for separate purposes, so should you lose one, you don't lose them all. If anything happens to this account, the service fails for a while and its API token gets revoked, but we don't lose our Sumo Logic feed, because that's a separate service account with a separate API token.
David Snyder: Also, its own server. Why its own server? Same reason. If something happens in the hardware, or in the infrastructure, or in the VM that is actually running it, we only lose this one. We don't lose them both.
David Snyder: You also have to have somebody consuming the data. It's got to be somebody's job to receive any kind of routine reporting that you do, and they've got to be interested in the data. And aware of changes and what they mean. We also very highly recommend instructor-led training in the Okta API. If you invest in your people, you improve your employees' skill set, you're going to be able to do things that you wouldn't be able to do otherwise.
David Snyder: Yeah, outsourcing is one of the things you can do to accomplish this. But somebody who knows how to do this on your team, who understands your business, is going to produce better results for you. We routinely get asked, "Can I find this out from Okta?" And we routinely answer, "Sure. Give me a couple minutes." Or a couple hours, actually, in the case of the last one.
David Snyder: We do some metrics reporting. One of the reasons that we do this is we want to continuously improve our policies. We want to change something. How big of a problem is it to change it? How many apps is this going to affect? How many users is this going to affect? What users is this going to affect? These are the things that you can only get from state-based monitoring. Queries to the API.
David Snyder: Our Okta rep says we're using this many licenses. Are they telling us the truth? Can we verify that number? Yes, yes we can verify that number. And I will tell you from experience, your Okta reps are not lying to you. Every week we run a user MFA registration report, as one of our examples. What methods are popular? What users have fewer than two methods registered, and what methods are they using? We also run an app metrics report. How many apps did we add this week? How many are active? How many are not active? You can actually see samples of these reports right here, and you see a steady climb in our configured apps, and also a sample of our how-many-methods report.
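A weekly MFA registration report like this can be driven entirely by the Okta API. The minimal sketch below, which assumes the standard /api/v1/users and /api/v1/users/{id}/factors endpoints and omits pagination and rate-limit handling, tallies which factor types are popular and which users have fewer than two methods registered.

```python
from collections import Counter
import requests

OKTA_ORG = "https://your-org.okta.com"                      # placeholder org URL
HEADERS = {"Authorization": "SSWS <read-only token>", "Accept": "application/json"}

def list_users():
    """Active users. Real code should follow the Link: rel="next" pagination headers."""
    r = requests.get(f"{OKTA_ORG}/api/v1/users", headers=HEADERS, params={"limit": 200})
    r.raise_for_status()
    return r.json()

def enrolled_factors(user_id):
    """The MFA factors a given user has enrolled."""
    r = requests.get(f"{OKTA_ORG}/api/v1/users/{user_id}/factors", headers=HEADERS)
    r.raise_for_status()
    return r.json()

def mfa_registration_report():
    method_popularity = Counter()
    under_enrolled = []                      # users with fewer than two methods registered
    for user in list_users():
        factors = enrolled_factors(user["id"])
        for f in factors:
            method_popularity[f["factorType"]] += 1
        if len(factors) < 2:
            under_enrolled.append((user["profile"]["login"],
                                   [f["factorType"] for f in factors]))
    return method_popularity, under_enrolled
```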
David Snyder: What happened here? This was actually a good example of the difference between event-based monitoring and state-based monitoring. Around that time, we changed our method. I used to generate this report by pulling in the events of MFA logins and counting unique values. You see that that was off by a factor of 50 percent. At that time, we switched this to a state-based report, and we were able to get information about the real registrations, not just the usages. We also gained, at that time, the ability to detect users with zero methods. If you're only doing events, you can't see negatives, the scientific negative, the absence.
David Snyder: Another report that we do is an on-demand report: when was the last time an app was used, and by whom? I've got all kinds of ways we can target with that. And the same thing, the Sumo dashboard got us a good snapshot of the users who had used the app. What it was incapable of telling us was the users that had never used the app. Even there, that state-based report also queries the events. It's another example of how you can do log reading with something other than just your log integration tool. This was actually purely an API thing. It pulls the users who are assigned the app, and then it looks for those users and that app in the logs, and reports the latest time for each. It gives us a good sample of how many users are using it, when was the last time they used it, and more importantly, which users are assigned this app and never use it.
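A rough sketch of that report follows: pull the assignments from the Apps API, then look for each assigned user in System Log events that target that application. The endpoints are the standard /api/v1/apps/{id}/users and /api/v1/logs calls, but the exact filter expression and field handling here are assumptions to verify against your own org.

```python
import requests

OKTA_ORG = "https://your-org.okta.com"                      # placeholder org URL
HEADERS = {"Authorization": "SSWS <read-only token>", "Accept": "application/json"}

def assigned_users(app_id):
    """Users currently assigned to the application (pagination omitted for brevity)."""
    r = requests.get(f"{OKTA_ORG}/api/v1/apps/{app_id}/users",
                     headers=HEADERS, params={"limit": 200})
    r.raise_for_status()
    return {u["id"]: u.get("credentials", {}).get("userName", u["id"]) for u in r.json()}

def latest_use_by_user(app_id, since_iso):
    """Latest event time per actor, from System Log events targeting this app."""
    r = requests.get(
        f"{OKTA_ORG}/api/v1/logs",
        headers=HEADERS,
        params={"since": since_iso, "limit": 1000,
                "filter": f'target.id eq "{app_id}"'},      # assumed filter expression
    )
    r.raise_for_status()
    latest = {}
    for event in r.json():
        actor = event.get("actor", {}).get("id")
        latest[actor] = max(latest.get(actor, ""), event["published"])
    return latest

def app_usage_report(app_id, since_iso):
    assigned = assigned_users(app_id)
    used = latest_use_by_user(app_id, since_iso)
    never_used = {uid: name for uid, name in assigned.items() if uid not in used}
    return used, never_used              # last-seen map, plus the assigned-but-never-used set
```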
David Snyder: Okay, so we've got all this data coming in from different ways, what do we do? Well, we've talked about this a little bit: we build new dashboards, we develop new reports so that we can better understand our environment. Which is kind of data for data's sake. But the other thing that we do is we instill confidence in our leadership. We answer every question they ask, at worst with, "I don't have a method for that yet." Yet is the important word, because actually, more often, we're now saying, "Sure, here you go. Oh, it's going to take 20 minutes to run." Or, "An hour to run," depending on what it is.
David Snyder: This is important because when we request a change that's going to impact our users, our executive leadership has confidence in what we tell them the impact's going to be, and that we have the ability to mitigate it. And the people actually doing the changes have considered how to mitigate any impact that's going to happen. Information-based decision making, that's the concept.
David Snyder: Back at our metaphorical jewelry store, an example of policies: did you leave that display case open? We can examine our policies, look for potential flaws, and repair them. We can make policy changes based on threats we actually receive, or based on threats we perceive, and we have a good idea of what that impact's going to be. We can inform the precise users changes will affect, and when we make changes, we can know whether or not they were effective.
David Snyder: What's next for us? Continue to answer the problems we encounter with good information collected from original sources. I don't like to read slides straight, but this is an important concept. It's not just what we do with our monitoring. It's also what we do as a business. News gathering is collecting good information from original sources.
David Snyder: Part of that is we put in Okta feature requests. This is one of our favorites: adding a not operator to the system log. The system log in Okta is another place that you can go and look at logging that happened within Okta; it is only one source. I'm going to leave this slide up so that all you admins in the audience can go to that link and upvote this. We'd really like to see this one. And of course, the important thing as an admin is to answer questions that start with "Yeah, but" and "How did?", so that we can change, we can improve integration, and we can just improve everything that we're doing.
David Snyder: And with that, that concludes the formal portion of our presentation, and we are now open to questions. Oh, I do have one other slide, now that everybody's taken the feature request. These are the links to Sumo Logic for when you do want to do your Sumo Logic integration. This is where you go to get the packages that you need. And I'll leave this one up while we open the floor to questions. And we just have a few minutes.
Speaker 6: So, if for some crazy reason your leadership asked you to pull out Sumo Logic and replace it with another log aggregation platform, how much work would that be?
David Snyder: Depends on the platform. We swapped from Splunk, actually, to Sumo Logic. The actual setup for Sumo Logic to Okta took about an hour, I think, of actual hands-on time. As for the actual rollover from one product to another, it usually takes longer to sign the contract to get the product than it would to actually start feeding it information. How much you want to aggregate, and all the other systems involved, those are the ones that you actually have to go to and configure to send their logs. But the Okta portion of it, depending on the product, I mean, Okta integrates with everything. So it's very quick to move the Okta logs from one product to another, assuming that it's got some kind of reasonable integration.
Speaker 7: So for 350 plus apps, do you leverage Okta for authorization as well? Or just with MFA?
David Snyder: We are leveraging Okta for authentication and authorization to the fullest extent possible. It depends on the app whether or not we can authorize with Okta past the login stage. MFA in our app-based policies is set on the sign on policies for apps, of course, and we are doing that to the maximum extent possible. We are leveraging desktop SSO and services in certain environments. But past that, we're always looking to expand our footprint. So any possibility to extend our MFA use, even to server logins, is actually one of the things that we're looking at with the product announcements this week. So, yes.
Speaker 8: In our organization, we're debating how much work to put into failures. In terms of people who try to log in to applications, whether they're running scripts, or a bot, or something like that. How have you used the information that you gained in your dashboards to work through what's happening with your failed logins?
David Snyder: Mostly we evaluate it, as Larry was showing, with the incident response dashboard, and also with the default dashboards. That does give us login failures. So when we see high quantities there, we can examine those things forensically to determine basically what we believe the cause is. We have utilized Okta's blacklist feature as a result of failed login attacks in the past. I mean, I can ... Do you want to speak to how exactly we react?
Larry Gillam: It's a common problem, it's difficult. I have invested a fair amount of time going through and doing analysis on login failures, like you're mentioning, to discover maybe an administrator that had scheduled a task and changed his password, but didn't change it within his script. So it is a difficult process. One of the ways that we try to address it, I guess, is through the custom dashboard I showed, with that IP address metrics. That allows us to really quickly determine whether it's an IP address that they've been using regularly, or whether it's something new. And that kind of determines the level of investigation that we do. So in a lot of instances, within seconds, we're able to determine that it was just a script like you mentioned.
Speaker 9: Brilliant. Thank you Larry, Mike, and David with USA TODAY.
Larry Gillam: Thank you everyone.
With more than 100 local media organizations across the nation and USA TODAY, USA TODAY NETWORK can be a target for bad actors attempting to exploit its iconic brands. Security is never easy with a diverse, global workforce of reporters and other employees operating on a multitude of devices. However, an identity-first security posture, backed by Okta and Sumo Logic, is focused on in-depth monitoring and reporting of detailed metrics around usage and activities and helps USA TODAY NETWORK carefully assess incoming threats and respond accordingly. It’s important to keep bad actors out—but it’s equally important to understand how they’re trying to get in, and USA TODAY NETWORK is leading the way, leveraging its core discipline of careful reporting to help future-proof their security strategy.