Intro to Okta API Access Management with AWS API Gateway + Lambda
Transcript
Details
Tom Smith: Good afternoon, everyone. Thank you for coming today, I appreciate it. We're here today to talk, as Michael said, about Okta's API Access Management as well as AWS serverless technologies. Again, my name is Tom Smith, I am a Partner Solutions Architect here with Okta, and I'm joined by Patrick McDowell, Partner Solutions Architect with Amazon Web Services.
Got to start with our standard disclaimer, you guys have seen this a million times already. You could probably have it memorized at this point, so I'm not going to spend too much time on it.
Well we're going to talk about a few things today. I'm going to start with an overview of API Access Management from an Okta perspective. I'll then go into a live demo showing API access management in action. Actually, authenticating, getting an authorization code, getting an access token. That demo is going to include not only Okta services, but some components of the AWS serverless stack as well. Then I'm going to turn it over to Patrick, who's going to talk in more detail about the AWS serverless stack.
Okay. I want to start with a very, very simple use-case. All right? Diana Nyad has an iPhone. She bought it from BigWireless.com about a year ago and she bumped into Rainn Wilson in the lobby. Rainn bet her that he could beat her in a swimming contest in the hotel pool. Don't know why he did that, but this is Vegas and that's what happens. The stakes for the contest were that if Rainn lost, he would pay off Diana's iPhone. Spoiler alert, he lost.
Now, Diana needs to log into the Big Wireless.com website to find out what the remaining balance is on her iPhone. What does that look like? When Diana goes to the Big Wireless.com website she's welcomed, first, anonymously, because Big Wireless.com doesn't know who she is. She clicks on "remaining balance" and Big Wireless lets her know that she is not signed in. She clicks on the login button. She authenticates, and now Big Wireless knows who she is, so it welcomes her by name and lets her know that the remaining balance on her iPhone is $249.99. Again, very simple. Very basic.
Let's go a layer deeper than that. What's actually going on there? Again, Diana comes to the Big Wireless.com website. She's a guest, so she's welcomed as a guest, she's anonymous. When she clicks on that "remaining balance" button, what she's really trying to do is get to an API endpoint. Big Wireless.com has developed and maintains their own API. The API endpoint that's going to deliver the data that Diana is looking for is /users/userid/balance. Okay? That's kind of the destination that she's trying to get to. That's where the $249.99 is coming from.
Again, Big Wireless has developed and maintains their own API; they have a team of developers. Generally those developers, as most developers do, like to focus on creating new functionality, creating new technology. Focusing on greenfield things, adding value to the product, adding new capabilities for end users, making the end user's experience better.
Some of the things that they don't necessarily like to spend time on are some of that infrastructure layer stuff. That would be things like throttling. You know, how much load can the API handle? How much load do we want it to handle? Logging: we need to log all of these API calls so that we can go back and do troubleshooting and do performance testing. Caching: do we want to cache some of these results from the API calls so performance is better?
Again, those are some of the infrastructure layer things that are really kind of a headache for developers, and they don't want to focus on it because, again, they want to add value to the product for customers.
Similarly, in terms of Access Management, the developers don't necessarily want to care about who it is that's logging into the website. Who is this person? Are they authenticated, and what API endpoints do they have access to? That layer of security should be abstracted from the developer's experience, so that it's all managed outside of their purview.
In Big Wireless.com's case, they're going to use Okta for API Access Management and AWS for API management. What does that look like? Now when Diana clicks on her remaining balance, she goes to Okta for authentication. When she authenticates against Okta, she's going to get an ID token, and she's going to get an access token.
Embedded in that access token is going to be a scope. Scope is an OAuth term; it basically just means permission, so a scope is permission to do something. On the API side, the security team has decided that a scope of API read is required to hit that balance endpoint. Now when Diana authenticates against Okta, the authorization piece is that Okta is going to mint an access token, and embedded in that access token is going to be a scope of API read. The reason that she's getting that scope of API read is that she is a member of a particular group. I'll show a little bit more about how that happens in a minute.
She has the access token, or rather the application has the access token with the scope embedded. Now the application can send that access token to AWS API Gateway. API Gateway then in turn takes that token and gives it to Lambda. Lambda is an AWS serverless technology. Patrick will talk a little bit more about that in a few minutes, but the idea behind Lambda is that it's a standalone function.
This Lambda function in this case has one job, and that's to validate that access token. Is the access token still active? When was it issued? Most importantly, does it have the proper scopes for this particular API endpoint? In this case the Lambda function gives the thumbs up to API Gateway. API Gateway then turns to the API itself and says, "It's okay to let this user access this API endpoint, so go ahead and send the payload back to the application." That's how Diana gets greeted by name and gets the payload from that API endpoint. Okay, so that's just one layer deeper.
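(For readers following along, here is a minimal sketch of what a token-validating Lambda authorizer could look like in Python. It is not the code from the demo: the issuer, audience, environment variable names, and the api:read scope are placeholders, and it assumes the PyJWT library is packaged with the function so the token can be verified against the Okta authorization server's published signing keys.)

```python
import os
import jwt  # PyJWT, assumed to be bundled in the Lambda deployment package

ISSUER = os.environ["OKTA_ISSUER"]        # e.g. your Okta authorization server URL (placeholder)
AUDIENCE = os.environ["OKTA_AUDIENCE"]    # audience configured on that authorization server
REQUIRED_SCOPE = "api:read"               # hypothetical scope name used in this talk

# Okta publishes its signing keys at {issuer}/v1/keys
jwks_client = jwt.PyJWKClient(f"{ISSUER}/v1/keys")


def lambda_handler(event, context):
    # API Gateway passes the bearer token to a TOKEN authorizer as event["authorizationToken"]
    token = event["authorizationToken"].replace("Bearer ", "")
    try:
        signing_key = jwks_client.get_signing_key_from_jwt(token)
        claims = jwt.decode(token, signing_key.key, algorithms=["RS256"],
                            audience=AUDIENCE, issuer=ISSUER)
    except jwt.PyJWTError:
        return build_policy("anonymous", "Deny", event["methodArn"])

    # Okta puts granted scopes in the "scp" claim; require the one this endpoint needs
    effect = "Allow" if REQUIRED_SCOPE in claims.get("scp", []) else "Deny"
    return build_policy(claims.get("sub", "user"), effect, event["methodArn"])


def build_policy(principal_id, effect, resource):
    # The authorizer hands API Gateway an IAM policy: Allow is the "thumbs up", Deny blocks the call
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{"Action": "execute-api:Invoke",
                           "Effect": effect,
                           "Resource": resource}],
        },
    }
```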
I'm going to go into a live demo in a couple of minutes and we'll see that from a couple of different perspectives. Then after that I'll actually go into a web sequence diagram that'll show you all of these steps in a little bit more detail. I'm just trying to give you a few different perspectives on how API Access Management works. Let's talk a little bit more about that from the Okta perspective.
Okta is fully OAuth 2.0 compliant. One of our strengths is that we have a flexible, policy-driven engine, so using the Okta administrative interface you can implement these policies. It will allow you to mint custom access tokens with custom claims and custom scopes, and you can do all of that through the easy-to-use Okta Admin UI.
What do we mean by an identity-driven policy engine? You can use many different aspects of the user profile and the user context to make decisions about what should be in that access token. You can decide based on IP address, you can decide based on what client the user is using, and most commonly based on what groups the user is a member of; that's what's happening in the case of Diana Nyad, and I'll show you that in just a minute. You can use all of those aspects through Okta to customize that access token, so it's very flexible and very powerful. Perhaps most importantly, it's all done through the Admin UI, so you don't need to write any code and you don't need to dive deep into any of that. It's all done through our Admin UI.
For example, in this case I have an Okta tenant set up where Diana Nyad is a user. She's a member of the group 'phone owners,' and in my authorization server, which lives on my Okta tenant, I've set up a rule that says, "Anyone who's a member of the group 'phone owners' should have a scope of API read." Any time that scope is requested, Okta will mint an access token with that scope embedded in it, and similarly for OpenID Connect (OIDC) flows.
Through the Admin UI you can also modify other aspects of the access token, including whether a refresh token is included and how long that access token should be active.
OAuth grant types, I'm not going to go too deep into these. I do want to talk about one in particular a little bit more, because I'm going to dive into that in the demo and in my sequence diagram flow, but there are four OAuth grant types. Client credentials is generally for server-to-server and machine-to-machine communications. Resource owner password credentials is for trusted applications and environments. I want to draw a contrast between the implicit flow and the authorization code grant flow. In the implicit flow, an access token is sent to the browser, and an access token, as we've discussed, really unlocks the API for the end user. In the implicit flow that gets sent all the way to the browser. Contrast that with the authorization code grant flow, in which case only an authorization code, a one-time-use authorization code, is sent to the browser, and the access token just lives in the application. It's more secure in that respect, but it depends a little bit on your context which flow you want to use.
Authorization code grant flow, you may have heard the term three-legged OAuth; that's the authorization code grant flow, and again I'll go into that in a little bit more detail here in a minute. Okay, so let's see API Access Management in action. Here is Diana at the Big Wireless.com website. What I'm going to do here is do a realtime flow first, so you're going to see Diana authenticating against Okta and then getting the result in her virtual browser here, which is going to be a realtime transaction between Okta and the AWS serverless stack.
She clicks to get her remaining balance, and the application redirects her to Okta. She authenticates against Okta. She gets welcomed by name, and we're validating her access token: Lambda is taking the access token and validating that it's a valid token that's been issued by the proper issuer. Now she can see that her balance is $249.99. That's realtime.
Let's take a step back and see what that looks like more from a step-by-step perspective, and dive into that a little bit. Going step by step: Diana, again, is trying to get to the /user/userID/balance endpoint. The scope that's required for that endpoint is API read. When she clicks on her remaining balance, she gets a red error message, which probably wouldn't happen in the real world, but the application is going to generate an authorization URL. That authorization URL is going to hit an OAuth endpoint on the Okta side.
You can see that it's hitting an Okta tenant. A few of the important parameters there are the scope, so in this case we're going to ask for a scope of API read plus openid to make it an OIDC flow, and we are going to request a grant type of code. She's going to authenticate against Okta and she gets an authorization code. I'm showing it there in the browser, because the authorization code does get sent all the way down to the browser.
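(As an illustration, not shown on the slide: an application could construct that authorization URL like this. The tenant domain, client ID, redirect URI, and the api:read scope name are placeholders.)

```python
from urllib.parse import urlencode

# Placeholder values standing in for the demo's Okta tenant and application
issuer = "https://example.oktapreview.com/oauth2/default"
params = {
    "client_id": "YOUR_CLIENT_ID",
    "response_type": "code",            # ask for an authorization code, not a token
    "scope": "openid api:read",         # openid makes it an OIDC flow; api:read unlocks the balance endpoint
    "redirect_uri": "https://bigwireless.example.com/callback",
    "state": "some-random-state",       # opaque value to protect against CSRF
}
authorize_url = f"{issuer}/v1/authorize?{urlencode(params)}"
print(authorize_url)
```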
Now, the next step is for the browser to send that authorization code to the application, which in turn sends it to Okta to redeem it for an access token. We're hitting a different URL here. This is still my OAuth authorization server, but in this case I'm hitting the token endpoint rather than the authorize endpoint. Some of the parameters in this call include the grant type. Of course, I'm going to include the authorization code itself, and then I need to authorize that call with an HTTP Basic authentication header, which is going to be my client ID and client secret for my application.
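(A sketch of that token request in Python using the requests library; the tenant domain, client credentials, redirect URI, and the code value are placeholders.)

```python
import requests

token_url = "https://example.oktapreview.com/oauth2/default/v1/token"  # placeholder tenant
auth_code = "CODE_FROM_THE_BROWSER_REDIRECT"                           # the one-time-use code

resp = requests.post(
    token_url,
    auth=("YOUR_CLIENT_ID", "YOUR_CLIENT_SECRET"),   # sent as the HTTP Basic Authorization header
    data={
        "grant_type": "authorization_code",
        "code": auth_code,
        "redirect_uri": "https://bigwireless.example.com/callback",
    },
)
tokens = resp.json()
access_token = tokens["access_token"]   # the long JWT described below
id_token = tokens.get("id_token")       # present because we asked for the openid scope
```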
Okay, so I've got my access token back, success. OAuth access token, I don't show all of it here, but in this case it's 929 characters. Some of the fields in that access token include: is it active? Most importantly, does it have the scope that we're looking for? In this case it does. It's got API read and openid. That's great, but you can also see when it expires, when it was issued, the username, lots of good stuff in the access token. Again, most importantly, it has that scope that we're looking for to hit that API endpoint.
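(One common way to inspect those fields is Okta's token introspection endpoint. The sketch below is illustrative only; the tenant domain, client credentials, and scope name are placeholders.)

```python
import requests

introspect_url = "https://example.oktapreview.com/oauth2/default/v1/introspect"  # placeholder tenant
access_token = "eyJraWQiOi..."   # the (truncated) token minted above

resp = requests.post(
    introspect_url,
    auth=("YOUR_CLIENT_ID", "YOUR_CLIENT_SECRET"),
    data={"token": access_token, "token_type_hint": "access_token"},
)
info = resp.json()

# Typical fields: active, scope, exp, iat, username, sub, ...
if info.get("active") and "api:read" in info.get("scope", "").split():
    print("Token is active and carries the scope the balance endpoint requires")
```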
Next, the application is going to take that access token and send it to the API endpoint through AWS API Gateway and hopefully get a data payload at the end. This will happen quickly, but I'm going to dive into it in a little bit more detail in the next section. Okay, so Diana has got her balance of $249.99, so that call went from the application through API Gateway, through Lambda, and back to the browser.
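(That call from the application to the protected endpoint is just an HTTPS request with the access token in the Authorization header. A sketch, with a made-up API Gateway URL and response shape:)

```python
import requests

# Placeholder API Gateway invoke URL and response shape
api_url = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/users/diana/balance"
access_token = "eyJraWQiOi..."   # truncated placeholder

resp = requests.get(api_url, headers={"Authorization": f"Bearer {access_token}"})
if resp.ok:
    print("Remaining balance:", resp.json().get("balance"))   # e.g. 249.99
else:
    print("Rejected by API Gateway / the authorizer:", resp.status_code)
```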
I want to give you a slightly different perspective on this as well. This is a sequence diagram. It's basically going to illustrate what I just showed in realtime and in that step-by-step process. Diana started from the browser and she asked for a protected resource from the API. Okay? I'll throw in a little OAuth terminology here, in terms of the client, authorization server, and resource server, just so you can see what some of the players are doing.
The application checked to see if Diana had a session or not. Does she have a local session in the application? No. The application redirects the browser to the authorization URL, which is on Okta. Okta asks the user to log in. Diana logs in with user name and password and of course, through Okta you can also add an MFA layer onto that as well, if you want to have that policy applied. Then Okta sends the authorization code down to the browser.
Now that the browser has the authorization code, it sends that authorization code back up to the application, the OAuth client in this case. The application sends it to Okta, which is acting as the authorization server. Okta sends back an access token and an ID token, in this case. Now the application has the access token, that’s where the power is.
The application sends the access token to API Gateway. API Gateway has been set up with Lambda, so it's going to use Lambda to validate that access token. Is the access token valid? Yes, the access token is valid according to Lambda. Lambda gives API Gateway the thumbs up, and then API Gateway tells the API that it's okay to send the payload down to the application and down to the browser.
That's Okta API Access Management, as well as a little bit of a deeper dive into the OAuth authorization code grant flow. Hopefully that gives you a little bit more perspective about how Okta API Access Management along with AWS can add more value to your technical ecosystem. That being said, I'm going to turn it over to Patrick, who is going to talk a little bit more about Amazon Web Services. Thank you.
Patrick: Are we on? Hey everyone. I'll reintroduce myself, my name is Pat McDowell, so I'm also a partner solutions architect like Tom. I'm at Amazon Web Services though, and my focus is on our security partner ecosystem. I work with security partners of all shapes and sizes to help them bring their products to AWS. Before I dive into this, who has used any of AWS's serverless products here? I mean either AWS Lambda or API Gateway? Great. Who has not used them? Maybe that's a better question, okay. Who's integrated with Okta? Okay, hopefully we get more hands up after this.
When we talk about serverless at AWS, it's kind of a suite of products: basically API Gateway and our serverless compute product, AWS Lambda. But what does serverless mean? I want to talk about what I consider the serverless manifesto. First and foremost, as the name implies, there's no such thing as provisioning VMs or containers or machines anymore. I'm going to say this a lot today, but we want to empower developers to write code and just have it run. They shouldn't have to worry about infrastructure; they shouldn't have to worry about whether they have a VM available.
Serverless also means you scale with usage. It doesn't matter if you're doing 10 transactions per second or 1,000 transactions per second. Once again, you shouldn't have to think about that. With serverless, capacity planning does not exist. You always have enough capacity; you're never over-provisioned or under-provisioned.
You're also never paying for idle, right? You're typically waiting for an event, and you get these new paradigms called event-driven computing with serverless. We literally bill by the millisecond for AWS Lambda. If you came to Amazon and said, "I need a half second of compute, will you sell me some?" Yes, we will, because we're Amazon, we'll sell you anything, even compute by the second.
As always with everything AWS, availability and fault tolerance are built into the platform. This is a fully managed service; it's built across multiple geographic regions, or rather availability zones, and it can also be multi-region. Serverless is also real. We released AWS Lambda in 2014, and this is not just for dev and test, right?
Large companies and small companies are using this across any industry vertical you can think of. You see some household names like Coca-Cola, Major League Baseball, and Comcast all using this in production. You also see some startups like Airbnb and Instacart who are really driving forward this event-driven compute paradigm that you're hearing more and more about.
You know, people also think this is for dev and test or for little apps that don't really have mission criticality to them, and that couldn't be further from the truth. Thomson Reuters needed the horsepower to scale to 4,000 transactions per second, and serverless was able to do that for them. FINRA, the Financial Industry Regulatory Authority, which is responsible for analyzing every single stock trade on every single market at every single brokerage every day, uses serverless to look for fraud, like insider trading, on every single stock transaction. That's 1.2 trillion stock transactions today. It's hard to get more mission critical than that.
They have some media sites like Vevo. When they upload new video content, it's not a 2X demand increase, it's not a 3X demand increase, it's an 80X increase in a very short period of time, and serverless was able to scale for that. Once again, they just want to write code and have it run and not worry about anything else.
What does a serverless web application look like? I keep talking about API Gateway and Lambda, and there are a couple of other pieces to that. Fronting API Gateway is Amazon CloudFront, which is our CDN. Our CDN is distributed across 79 different points of presence in the world right now.
When you deploy an API in API Gateway, it's actually close to your users all over the world, in those roughly 80 geographic metro areas. Then when you build your application, one part of the serverless manifesto is: don't store persistent data in a serverless app, because it is stateless. You use something like Amazon S3 and have that fronted by Amazon CloudFront, and of course right behind that is API Gateway, which is the central authority that routes traffic to Lambda or to another back end. That is where Okta hooks in, right? You can use a custom authorizer on Amazon API Gateway to do that.
Now, last but not least in the serverless stack is Amazon DynamoDB, which is our non-relational database service. One of the great things about it, and why it works with serverless so well, is that it auto-scales for you. You don't have to know how much throughput you need; as your compute scales and your API Gateway requests scale, so will your database back end. Of course, you don't have to call DynamoDB; the compute is arbitrary, so you can call a third-party API or a different data back end as well.
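(To make that concrete, here is a hypothetical Lambda handler for the balance endpoint, backed by DynamoDB through a Lambda proxy integration. The table name, key schema, and response shape are assumptions for this sketch, not the demo's actual code.)

```python
import json
import boto3

# Hypothetical table holding per-user device balances
table = boto3.resource("dynamodb").Table("PhoneBalances")


def lambda_handler(event, context):
    # With a Lambda proxy integration, path parameters arrive under event["pathParameters"]
    user_id = event["pathParameters"]["userId"]
    item = table.get_item(Key={"userId": user_id}).get("Item")
    if not item:
        return {"statusCode": 404, "body": json.dumps({"message": "user not found"})}
    return {
        "statusCode": 200,
        # DynamoDB returns numbers as Decimal, so cast to str before JSON-encoding
        "body": json.dumps({"userId": user_id, "balance": str(item["balance"])}),
    }
```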
What does the API Gateway infrastructure look like? As you can see on your left, we have mobile users, web users, and service endpoints. They're all hitting Amazon CloudFront in front. Then they hit API Gateway, and if it's a cached request, it will send that back to the end client from the nearest point. With APIs, you also want to know about success rates and failure rates, so you want to see your 200s, your 400s, and hopefully not your 500 errors in your logs. It integrates with Amazon CloudWatch for monitoring.
We talk about Lambda a lot, right, and it can call Lambda, but it can also call many other different services like classic vanilla EC2, container services, almost any Amazon API you can think of, or any third-party API as well.
Let's peel back the onion on API Gateway a little bit more. It is a fully managed service with a unified front end. People ask, "Well, I already have an API, how difficult is it to convert to API Gateway?" If you're using Swagger, you can export your Swagger file, upload that into Amazon API Gateway, and you are basically good to go. It will manage all the scaling for you across the globe, and what we also do for you is provide DDoS protection by default. Not only do you not have to worry about compute and storage, you don't have to worry about DDoS anymore.
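(For example, with the AWS SDK for Python you can import an exported Swagger file directly; the file name below is a placeholder.)

```python
import boto3

apigw = boto3.client("apigateway")

# "bigwireless-api-swagger.json" is a placeholder for your exported Swagger/OpenAPI file
with open("bigwireless-api-swagger.json", "rb") as f:
    result = apigw.import_rest_api(body=f.read(), failOnWarnings=True)

print("Created REST API with id:", result["id"])
```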
The AWS Shield service is a free service that protects your API endpoints for you, so we don't charge anything more for that. DDoS protection is inherent in the serverless platform when using API Gateway. For authentication and authorization, we have some small building blocks there called Amazon Cognito, and we can do that for you, but the hook is right there. We allow third-party custom authorizers, and that's where Okta's Identity Cloud comes in; that's where Okta's API Access Management comes in.
The policy engine behind Okta's Identity Cloud is amazing. It can do things that our platform can't do, and it really extends your application to do all those things like MFA or reading out of the Universal Directory. Last but not least, you probably want to monetize your API. We have built-in monetization for that, built-in throttling and metering, so that you can bill your clients, built right into our API Gateway.
How hard is it to use AWS Lambda? Do you have to change the way you program? Do you have to change the way you've always written code? The answer is no. You bring your own code. Once again, you write your code and you run it. We currently support four languages: we have Node.js, Java, Python, and C#/.NET. All your standard libraries still work.
The only thing you have to do as a developer is choose how much RAM you want, and the one caveat with serverless computing is that a function runs for up to five minutes and then stops. If you have a long-running CPU job, it might not be the best choice for you, but maybe you want to start talking about asynchronous, loosely coupled apps and remodeling your applications so you can take advantage of the cost savings and efficiency you get with it.
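(Those two knobs, memory and timeout, are just function configuration. A sketch using boto3, with a hypothetical function name:)

```python
import boto3

lambda_client = boto3.client("lambda")

# Memory is the main knob you turn; CPU is allocated in proportion to it.
# Timeout is capped (five minutes at the time of this talk).
lambda_client.update_function_configuration(
    FunctionName="validate-okta-token",   # hypothetical function name
    MemorySize=256,                       # MB
    Timeout=30,                           # seconds
)
```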
It's integrated with other AWS services natively, like DynamoDB, almost anything you can think of. It also has super flexible authorization. I don't mean the third-party integration with Okta; I mean that everything in AWS is an API call which requires permission, so maybe this Lambda function that you're using can only run in dev and not test, and you only want certain users to be able to execute that compute function. Or you have very granular permissions saying, "DB admins can only run this backup Lambda cron job during these hours or in this environment, because if they tried to back up the prod database at peak time that might cause a performance issue." You can use our IAM platform for extremely granular permissions.
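(As an illustration of that granularity, here is a hypothetical IAM policy, built as JSON from Python, that only allows invoking one specific backup function; the account ID and function name are made up.)

```python
import json

# Hypothetical policy: the holder may invoke only this one backup function, nothing else.
backup_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:nightly-db-backup",
        }
    ],
}
print(json.dumps(backup_only_policy, indent=2))
```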
All your third-party tools still work: Visual Studio, Eclipse, Atom.io, Jenkins; you code as usual. You don't have to change anything about that. For monitoring and logging, you can use CloudWatch Logs, and you can use other third-party apps. We recently released AWS X-Ray, which is our distributed cloud tracing service.
This cloud tracing service will actually map out your entire distributed application and show you how long each Lambda function is taking, how long each step is taking. Last but not least, serverless sounds amazing, but it's still just compute. You might not be able to see the computer, but threads still exist, processes still exist, sockets still exist, the file system still exists. All those things are still at your disposal; you just don't have to worry about managing them anymore.
Finally, serverless is stateless. You don't really have to worry about storage things, but when I think about this, I think about what a security gain it is. With my background in security: you're not SSH-ing into boxes anymore, so how can an attacker get into your environment when it only lives for half a second? It's impossible to target; your code just runs and disappears, runs and disappears. It does not stay there. There's no public endpoint for them to target. It's just going to run and execute.
I started talking about event-driven computing before, and what does that actually mean? Well, since one of the tenets of the serverless manifesto is no idle time, you're going to wait for an event to happen. That event could be a database update, maybe a field that gets updated, and you want to launch some sort of function to start some sort of HR process. It could be a request to an API endpoint; we've talked about API Gateway a lot, that's very common. Or it could be a change in resource state. There are resource states in AWS, for instance EC2 health; maybe a compute instance fails a health check, so you want to cut a ticket or send a message to Slack. Lambda can just listen for that event constantly, 24/7, and respond when that event actually occurs. Finally, it can call any AWS back-end service or third-party API.
What are some of the real-world models you'll see? Well, we have async events like with Amazon S3, which is event notification. Maybe you have a service where you're uploading a photo to S3, and when that photo gets uploaded you want to automatically invoke a function to remove geo-tagging metadata, right? You don't actually have to do anything; Lambda just knows as soon as that photo gets uploaded, a notification comes in, it will run that job for you, and then it goes dormant again.
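(A sketch of what that S3-triggered function could look like; the actual metadata-stripping step is left as a comment, and the cleaned/ output prefix is an assumption to avoid re-triggering the function on its own output.)

```python
import urllib.parse
import boto3

s3 = boto3.client("s3")


def lambda_handler(event, context):
    # S3 event notifications deliver one or more records describing the uploaded objects
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        photo = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        # ... strip EXIF / geo-tagging metadata here with an image library
        #     packaged alongside the function ...

        # Write to a different prefix so the cleaned object doesn't re-trigger this function
        s3.put_object(Bucket=bucket, Key=f"cleaned/{key}", Body=photo)
```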
Synchronous invocation also works: maybe you're using Amazon Lex to make a voice bot and you're having a back-and-forth conversation. There's also the streaming pull model. We have Amazon Kinesis, which is all about streaming data; Lambda will just continually listen to that stream, look for something new in it, and then perform those processes for you.
Now, AWS has so many services these days I can't even keep track, and there are many events in them. We have things like CloudWatch Events, but what else can call AWS Lambda natively in AWS? Well, we talked about S3 and DynamoDB already. We have things like AWS CloudFormation: maybe once you deploy that new stack with CloudFormation, when it's completed, you want to use Lambda to update your back-end CRM, like ServiceNow or anything like that. Or maybe once you commit new code to CodeCommit, which is like our managed Git, you want to fire a function there.
Then we also have things like good old-fashioned cron. Cron was usually dependent on a single server, but now that it's highly distributed you don't have to worry about a single server going down. Maybe every day you run a cron job to take a snapshot of all your EBS volumes so you have a very easy-to-use backup solution. Of course, with Amazon Lex, you can also just make a chat or voice bot, and we're going to keep adding more and more events. These are the most popular ones.
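(A sketch of such a scheduled snapshot function; the backup=true tag filter is an assumption made for this example.)

```python
import datetime
import boto3

ec2 = boto3.client("ec2")


def lambda_handler(event, context):
    # Invoked on a schedule (for example, a daily CloudWatch Events rule).
    # Snapshots every volume tagged backup=true; that tag convention is assumed here.
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "tag:backup", "Values": ["true"]}]
    )["Volumes"]
    for vol in volumes:
        ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"Nightly backup {datetime.date.today().isoformat()}",
        )
    return {"snapshots_created": len(volumes)}
```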
I've talked about all this great stuff you can do with AWS Lambda. I keep saying, hey, it's easy, let's write some code and run it, but then I talked about 80 different geographic POPs, I talked about DynamoDB, I talked about Amazon S3. That sounds like it takes a long time to set something up, and yes, it could take a long time if you're going to constantly do it manually. It makes zero sense if your elegant Lambda function is 50 or 100 lines of code and you have to spend hours deploying the infrastructure, or the CloudFormation template to deploy it is thousands of lines, right? That's not really a gain in developer efficiency.
We open sourced something called SAM, the Serverless Application Model, and this is meant to support the community at large. Honestly, it's just AWS Lambda and Amazon API Gateway, but anyone can use this model in their serverless stacks; we want to be pretty universal about it. It's a very concise, short template language that includes just your compute, your storage, and, say, a DynamoDB back end; you just describe how you want it. What you essentially do is upload that along with your code into the Lambda service, and it deploys it for you, manages it for you, and you can run it that way.
It really takes the pain out of deployment and lets you focus on your code, lets you be part of a DevOps team, and means you don't have to have a very large supporting infrastructure to make writing code super easy. We always get asked, "What's the best way to build a serverless application or a serverless API endpoint? How many Lambda functions should I have? Do I need one big one, or lots of little ones?" Using a monolithic application, never mind serverless, is probably a bad idea. You don't want one monolith behind every API HTTP method. You kind of want to break it out into what we call nanoservices, right? You have Amazon API Gateway right there, and as you can see, each HTTP method, your GETs, PUTs, and DELETEs, has its own Lambda function behind it. What value do you get from that?
Well, first of all, each one can scale independently and can be billed independently, right? Perhaps you want to charge more for DELETEs than you do for PUTs, and you've got the functions broken out to do that. Or maybe you want to keep this always in production, but you only need to update one part of your API; you can do that without affecting anything else. You don't have to take everything down, you don't have to make a sweeping change. Or maybe you have one large API and you split it out, and a whole other team owns one part of the API; they want to own their own code and don't want to share it with your team, right? Your team owns your code, and that way many different teams can work on the same API, own their code, and own the scaling for it. That's really what we like to call nanoservices, and it's a really good paradigm to use with serverless functions.
The one thing I haven't talked about yet, and you don't see it in this diagram, is identity or the user on the other side. If you really want to build a feature-rich app using identity, you have Okta's Identity Cloud, which hooks into API Gateway right there. When you take our serverless infrastructure and combine it with Okta's Identity Cloud and API Access Management, as a developer you really have a strong suite of tools where you can just focus on your code and run it. I think that's all we have right now, but we're going to be taking questions outside. Tom?
Tom Smith: Sure, yeah. We're going to wrap it there, but if you have questions Patrick and I will be over in the developer lounge here for the next few minutes. Thank you everybody for coming today, I really appreciate it.
Okta's API Access Management helps developers and IT leaders build, maintain, and scale seamless, personal, and secure experiences across on-prem and cloud services. Your users arrive at your portal from a variety of environments, and with a variety of contextual information: native mobile, authenticated to Active Directory, desktop web browser, federated, social login. See how using Okta’s API Access Management along with Amazon Web Services’ API Gateway and Serverless solutions ensures a zero-friction authentication & authorization experience for your users, while securing your API. Session will include an overview of Okta’s API Access Management, an architectural overview, a live demo illustrating a step-by-step walkthrough of the end-user experience, and an overview of Amazon Web Services' Serverless architecture.