Oktane20: What's New With OAuth and OIDC?
Transcript
Aaron Parecki: Hi, everybody. Thanks so much for coming. I am Aaron Parecki, Senior Security Architect at Okta, and I am super happy to be here today and talk to you all about what is new in OAuth and OpenID Connect. First, I want to give you a little bit of background on where this work happens and how these standards organisations work. You may not know this, but OAuth and OpenID Connect are actually made in two different standards organisations. OAuth is under the IETF, which is the Internet Engineering Task Force, which is responsible for things like HTTP, TLS, all the sort of internet stack side of things, and the OAuth group is one of the groups under that organisation. OAuth started out as a community effort and then was sort of brought into that group over time, and that's where all the new work is being done, and that's where it lives now.
OpenID Connect is actually part of the OpenID Foundation, which is a completely separate group, although there's a lot of overlap in the people who are members of both. The OpenID Foundation also has a bunch of subgroups working on more specific problems within OpenID, and OpenID Connect is one of the things that the OpenID Foundation publishes. Now, OpenID Connect itself is actually built on top of the OAuth framework, so they're definitely closely related, but technically, they're totally separate organisations.
One of the things that I think is the most surprising or confusing about this is that there are actually very low barriers to entry to participate in the development of these standards. People often have this image of standards organisations as places where people just decide how the internet is going to work and mere mortals can't participate. But it turns out that actually, you can just show up and start talking and get involved. So OAuth, under the IETF, has no formal membership at all. There's no membership fee, there's nothing to sign to join. Pretty much, you just join the mailing list and start showing up at events, and you're part of the group.
The OpenID Foundation is similar. There's no fee or membership required to participate in the working groups. Companies can become members of the OpenID Foundation itself, and there is a fee associated with that, but you don't actually need to pay that fee or be a member in order to participate in the development of the standards. So both of these groups are a lot more accessible than you might think at first, and if you want to participate in the development of these standards, you definitely can.
So let's get into some of the new things that are coming out of these groups. Large parts of OAuth and OpenID Connect are completely stable now, and a lot of your interactions every day on the internet are based on top of these technologies. But there are a handful of new things coming out of these groups to address new use cases that maybe weren't thought of 10 years ago when these specs were started. OAuth itself isn't even just one spec. It's actually a collection of a lot of different specs. Part of the reason for that is because 10 years ago, when OAuth was started, things like mobile phones weren't very popular yet and it wasn't clear that single page apps were going to be the way that a lot of web apps were developed in the future.
There's a whole new kind of device now as well, things like the Apple TV, where there's no browser on the device to interact with. So over the years, these specs have grown and grown, with new RFCs replacing bits of other RFCs and adding features. So there's a lot to it, even just in what's there right now. Two of the most recent RFCs published under the OAuth group are mutual TLS and resource indicators. But before I get to those, I want to back up and talk about the lifecycle of specs within the IETF. So here's a very simplified summary of the spec development lifecycle.
An individual contributor like myself would write up a first draft of something that they think would be useful for a particular working group to work on. So in my case, I wrote up an initial draft of a spec called OAuth for browser-based apps. Then I would submit it to the IETF data tracker with a name like "draft-lastname-subject." Then I'd bring it into the group, discuss it on the mailing list, and get feedback on this initial version. This might go through a couple of iterations of changes as just an individual draft. At this point, the draft is not officially a working group item. It has no official standing within the group whatsoever.
Eventually, the chairs of the working group will make a call to officially adopt this as a working group item. If enough people agree, then that draft is officially adopted and it becomes a working group item. At that point, it's renamed to "draft-workinggroup-subject." Work continues on the draft, usually going through several more iterations, sometimes several dozen more, and eventually, once everybody agrees on the wording, the final document is published as an RFC, and that's where it gets its RFC number.
So I first want to talk about some of the most recently published RFCs in the OAuth working group, and those are mutual TLS and resource indicators. Mutual TLS, RFC 8705, is a form of proof of possession in OAuth terms. This idea of proof of possession has been one of the things that has been really challenging in OAuth for a long time. In OAuth 1, there was this idea that every request with an access token had to also be signed with the client secret of the application. Now, that doesn't actually work that well because in the case of mobile apps, you can't really use a client secret. I have some other videos on the Okta developer YouTube channel that talk about this, if you're interested in this particular problem. But one of the benefits of a proof of possession kind of token is that even if somebody steals the access token, they can't use it, because they don't have the rest of the data needed to make the request. Historically, that would have been a client secret.
So the idea with mutual TLS is that not only does the client validate the certificate of the server, but the server actually goes and associates the certificate of the client with the token it issues. What that means is that when the client is issued an access token, that access token is associated with the client's certificate, and the client has to always present that same certificate when it makes requests with the access token. That protects the access token from being stolen, because even if someone did steal the access token, they wouldn't be able to steal the private key behind the certificate. The private key is never actually sent over the wire; it's only ever used to authenticate the TLS connection. So this is a nice way to avoid the problem of stolen access tokens and make bearer tokens much more secure.
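To make that concrete, here's a minimal sketch of the check a resource server might do, assuming a JWT-formatted access token carrying the confirmation claim that RFC 8705 defines; the function names here are just for illustration.

```python
import base64
import hashlib

def cert_thumbprint(der_cert: bytes) -> str:
    """Base64url-encoded SHA-256 thumbprint of a DER-encoded certificate."""
    digest = hashlib.sha256(der_cert).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def token_bound_to_cert(token_claims: dict, der_cert: bytes) -> bool:
    """Compare the token's RFC 8705 confirmation claim ("cnf" with an
    "x5t#S256" thumbprint) against the certificate the client presented
    on this mutual-TLS connection."""
    expected = token_claims.get("cnf", {}).get("x5t#S256")
    return expected is not None and expected == cert_thumbprint(der_cert)
```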
I should also mention that there are any number of different ways you could do this sort of proof of possession concept. There are a couple of others in the works in the OAuth group right now, which I'll talk about in a little bit. Okay. Moving on to resource indicators. Resource indicators are useful when you have multiple resource servers that aren't necessarily part of the same system. With Google, for example, there's a YouTube API, there's a Google Contacts API, there's a Gmail API. All of those different APIs are resource servers behind the Google brand, but they really don't share much in common other than the fact that they're Google branded. When an application wants to use an access token at the YouTube API, the Google Contacts API has no business doing anything with that token, and it shouldn't ever even see it.
So the idea with resource indicators is that it's a way for the client to tell the authorisation server, "I only need an access token to go and talk to this particular resource server, and I'm never going to bother sending it to any of these other ones." What this does is it lets the authorisation server create the access token in a way that maybe only the one resource server where it will be used can actually decrypt. You can imagine that if you're encrypting data in the access token, you may not want your other resource servers to be able to decrypt that data at all. If you're thinking this sounds a little bit like the OAuth concept of scope, you're right, it is very similar. But OAuth scope is a completely unstructured set of strings, and while you could maybe use scope for this, it's really not what it was meant for and it would be kind of an overloading of scope. So instead, this is a specific mechanism to do exactly this in a consistent way across multiple implementations.
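Here's a rough sketch of what that looks like from the client's side; the endpoints and client ID are hypothetical, and the resource parameter is the one the spec defines.

```python
from urllib.parse import urlencode

# Hypothetical authorisation server and client, just for illustration.
params = {
    "response_type": "code",
    "client_id": "example-client",
    "redirect_uri": "https://app.example.com/callback",
    "scope": "read",
    # Resource indicator: ask for a token that is only good at this one API,
    # so the authorisation server can encrypt or audience-restrict it.
    "resource": "https://photos.example.com/api",
}
authorization_url = "https://auth.example.com/authorize?" + urlencode(params)
```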
Okay. So those were the two most recently finished RFCs. Now, let's talk about some of the in-progress work in OAuth. OAuth 2.0 Security Best Current Practice is a collection of a lot of different attacks, descriptions, and recommendations for how to do OAuth in a secure way. Now, you could argue that this should have all been part of the original OAuth spec itself, and you'd be absolutely correct. The reason it wasn't is that nobody had thought of it yet. This is how these things tend to go: RFCs can't be changed once they're published, so instead, a new document comes along later that replaces, updates, and modifies parts of the original RFCs. So this document is an attempt to harden and change some of the original OAuth spec to make it more secure for clients.
This document is not only a list of a lot of common attacks and descriptions of how to solve them, but it also has some concrete recommendations, which we're going to look at now. One of the first things it says is that every client doing the authorisation code flow in OAuth should also do PKCE. PKCE was a mechanism originally intended to protect mobile apps doing the authorisation code flow. It turns out it actually has a couple of interesting properties that make it useful for every type of client, so really, everybody should just be doing this always. PKCE is a mechanism that protects the authorisation code in the redirect. In the traditional authorisation code flow, the authorisation server issues the authorisation code, sends it in the URL back to the user's browser, the user's browser delivers it back to the application, and the application then exchanges it for an access token.
It's a bit of a round trip there, but the reason for that is so we can actually get the user in front of their computer and authenticate them at the OAuth server. When the OAuth server issues that temporary authorisation code, it's handing it off to the user's browser in what we call the front channel. The problem with that is that the OAuth server can never actually be sure whether it landed back at the application or not. It's kind of like throwing something over a wall and just hoping the person on the other side is going to catch it. You can never really know for sure because you can't see over the wall. So the OAuth server throws the authorisation code over the wall, and then someone brings it back to go get an access token. Now, when the OAuth server goes and issues an access token, it would very much like to know that it's actually the right application that's bringing this code back.
Normally, to solve that, we can use the client secret, because only the real application has the client secret. The problem is that mobile apps and JavaScript apps can't use a client secret, so we don't have any way to protect that part of the flow. PKCE comes in and solves this. The way PKCE works is that the application, when it goes to start a flow, generates a new secret on the fly right then. It then sends a hash of that secret in the first request, and when it exchanges the authorisation code for the access token, it has to prove it knows the secret behind that hash by providing the original secret. That step links up the front channel and the back channel so that the OAuth server knows it's giving the access token to the right client.
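Here's a minimal sketch of the client's half of PKCE using the S256 method: only the hash travels over the front channel, and the original secret only ever travels over the back channel.

```python
import base64
import hashlib
import secrets

def b64url(data: bytes) -> str:
    """Base64url encoding without padding, as PKCE requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

# 1. Generate a fresh secret (the "code verifier") for this one flow.
code_verifier = b64url(secrets.token_bytes(32))

# 2. Send only its hash (the "code challenge") in the authorisation request,
#    along with code_challenge_method=S256.
code_challenge = b64url(hashlib.sha256(code_verifier.encode("ascii")).digest())

# 3. Later, the token request includes the authorisation code plus the
#    original code_verifier; the server re-hashes it and checks it matches
#    the challenge it saw at the start of the flow.
```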
I go into more detail on PKCE in some of my other talks on YouTube; go check out the Okta developer channel and you'll find them there. All right. The next thing that the OAuth Security Best Current Practice says is that the password grant must not be used. Straight up, it's just taken out of the spec. There are a couple of really interesting reasons for this. The password grant was originally added to OAuth as a migration strategy for legacy applications to upgrade to OAuth. You might imagine that before OAuth, applications were storing users' passwords locally and then using them in API requests over HTTP Basic auth. That's obviously terrible, and the password grant was meant to help upgrade to OAuth: the app could take that stored password, use it to get an access token, drop the password, and use the access token from there on out.
In practice, that's not quite what ended up happening. What actually happened was that people saw the password grant in the spec and said, "Oh, that means I can put a password prompt in my application and use that to go get an access token." That wasn't the goal of putting the password grant in the spec to begin with, so now the Security Best Current Practice is taking it out. There are a couple of very concrete problems with applications using the password grant. One, it exposes the user's username and password to the application, which is obviously what we were trying to avoid with third-party applications. So you would never want a third-party app to use the password grant. For first-party applications, there isn't necessarily that trust issue with handing the password to the application. However, even for first-party apps, it still increases the attack surface of your system as a whole. If you have password dialogues in a whole bunch of different places, that gives attackers a whole bunch of different places to try to start siphoning off passwords.
Another problem is that it trains users that it's okay to enter their password in a bunch of different places. It's hard enough to get users to understand best practices for online security at all. But if we're telling them, "Go ahead and enter your password in all these different dialogues," that's not the best story. Really, what we need to be doing is training users that, "This is what the password prompt looks like in this system. Don't enter your password anywhere else, otherwise it might be a phishing attack." By putting password prompts in a bunch of different places, in a bunch of different styles, you are not helping users understand that concept at all.
Another problem with the password grant is that it's difficult, or sometimes impossible, to add multi-factor authentication to it. The way the password grant works is the application just takes the password and exchanges it for an access token. There's nothing in that process where you could add another multi-factor step. You could start extending it with custom code and figure out how to send an SMS to the user, have them type it in, maintain that session, and so on, but you're on your own if you're doing that, and who knows what problems you're going to run into? The other problem is that for the more secure two-factor auth methods like WebAuthn, it's impossible to extend the password grant to include them if your applications run on multiple domains. WebAuthn credentials are tied to a domain in the browser, so different parts of your application will look like different websites to the browser, with different credentials even for the same user.
So instead, if you always do the authorisation code flow and send the user over to the OAuth server, that's where they manage their two-factor auth. It's always tied to the same domain. It's also the only place they'll enter their password, and it's the one place where you can upgrade to new multi-factor options later, only having to change it in one place. That's all just to say that there are so many problems with the password grant. It's just not a good plan going forward, and we really shouldn't be using it anymore. So the OAuth Security Best Current Practice is effectively taking it out of the spec. Another thing the OAuth Security Best Current Practice says is that the implicit flow is no longer useful either, and instead, you should use the authorisation code flow, of course, with PKCE.
The implicit flow has a similar story to the password grant, where it had a purpose originally and now it just doesn't really have a purpose anymore. It was originally part of OAuth because JavaScript apps could not make cross-domain POST requests, so they couldn't do the authorisation code flow. The implicit flow was created as a compromise so that browser-based apps could do OAuth when the spec was published. That was 10 years ago, and things are a lot different now. There are much better APIs in browsers to do things like cross-domain POST requests; that's cross-origin resource sharing. There's also now the session history management API, so browsers can have nice URLs for single page apps without having to use the fragment with the hash sign. All these changes have happened in browsers over the last 10 years, and the implicit flow just doesn't serve a purpose anymore. It also has the problem where the authorisation server issues the access token over the front channel, and then can't actually be sure it was received by the right application.
With the authorisation code flow, we can patch that up using either the client secret or PKCE. But with the implicit flow, the authorisation server generates an access token and just kind of throws it up in the air, and who knows where it went? Hopefully it landed at the application, but the OAuth server has no way to actually confirm that. So because of that pretty glaring problem, and because it doesn't provide any benefits anymore, again, the Security Best Current Practice is saying, "Let's just not use this anymore. Instead, use the authorisation code flow with PKCE." A couple of other things: the Security Best Current Practice now requires that when redirect URIs are checked, exact string matching is used. The OAuth core spec actually allows for things like wildcards in redirect URIs, which opens up a whole bunch of problems. So this is locking that down and saying, "Whatever URL you registered as your redirect URI has to be the one that's used in the request."
Of course, multiple redirect URIs are still allowed. They just all have to be registered and matched with exact string matching. The OAuth 2.0 bearer token spec also allows access tokens to be sent in a query string; that would be in the address bar in a GET request. This has a bunch of problems as well, so the OAuth Security Best Current Practice says that's no longer an acceptable way to send access tokens. Again, that was there because JavaScript apps needed to be able to put the token in a URL to make GET requests, because they couldn't necessarily send headers. But browser APIs have improved a lot, and you actually can include the Authorization header in a request from JavaScript.
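The difference from a client's point of view looks roughly like this; the API URL is hypothetical, and the requests library is just one way to do it in Python.

```python
import requests

access_token = "..."  # obtained earlier via the authorisation code flow

# Preferred: the access token travels in the Authorization header.
resp = requests.get(
    "https://api.example.com/photos",
    headers={"Authorization": f"Bearer {access_token}"},
)

# No longer acceptable: ?access_token=... in the query string, where it
# can leak into server logs, browser history, and Referer headers.
```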
There was also a change to refresh tokens: they now have to be either sender-constrained or one-time use. Sender-constrained just means that the refresh token can't be the only piece of information required to use it. You also need something else, like a client secret or mutual TLS. That way, refresh tokens aren't useful if they're stolen. If that's not an option, such as in a browser-based app, then they need to be one-time use. That allows the system to deactivate refresh tokens if they're used more than once, which might indicate that one of them has been stolen. So that's just a few of the things in the Security Best Current Practice. There's a lot more detail in there, including, as I mentioned, a whole list of different kinds of attacks and how to solve them, so it is worth a read.
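As a rough sketch of the one-time-use idea from the authorisation server's side; the token store here is entirely hypothetical, and the point is the shape of the logic, not the storage details.

```python
def redeem_refresh_token(store, presented_token: str):
    """One-time-use rotation: each refresh token can be redeemed exactly once.

    `store` is a hypothetical persistence layer with lookup/revoke/issue
    operations.
    """
    record = store.lookup(presented_token)
    if record is None:
        raise PermissionError("unknown refresh token")
    if record.already_used:
        # Reuse detected: either the real client or a thief is replaying a
        # stale token, so revoke everything issued under this grant.
        store.revoke_grant(record.grant_id)
        raise PermissionError("refresh token reuse detected; grant revoked")
    store.mark_used(presented_token)
    # Hand back a fresh access token and a fresh, single-use refresh token.
    new_access_token = store.issue_access_token(record.grant_id)
    new_refresh_token = store.issue_refresh_token(record.grant_id)
    return new_access_token, new_refresh_token
```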
Let's now talk about the one I'm writing: OAuth for browser-based apps. This spec is meant to be a complement to OAuth 2.0 for Native Apps, which is an RFC published several years ago. The idea is that browser-based apps, specifically single page apps written entirely in JavaScript, have a couple of considerations unique to them that aren't applicable to native apps or traditional web-server-based apps. So this document is meant to capture best practices for building single page apps using OAuth. One of the things it does is describe a couple of different structural patterns for building these apps, but it also has some concrete recommendations, mostly referencing things in the Security Best Current Practice: always using PKCE, not using the password grant, not using the implicit grant, and again, rotating refresh tokens.
The spec describes a couple of different architectural patterns for deploying single page apps. For example, the application might be served from a static web host, where the JavaScript code itself is the client, accessing both the OAuth server and the resource server. That's very different from the situation where the single page app is backed by a dynamic web server, where the server can be considered a confidential client and is the one managing the OAuth tokens. Both of these specs are still in progress. They are not finalised, and everything you see in them is still up for debate. So if you have any feelings about anything I just talked about, please do join the mailing list and feel free to chime in and share your opinions.
So now, I want to talk about some of the new OAuth extensions that are currently in the works. These are still very much experimental, and they were recent additions to the working group. Some of these new extensions actually came from OpenID Connect. They were developed in some of the working groups within the OpenID Foundation, and now they're being brought back into the OAuth group to be made part of the core set of specs. That's how a lot of this work happens: small groups develop things for a particular purpose, then realise that what they've built is broadly applicable to a wide range of people, and bring it back into the core OAuth group to develop it there. Then more people can build off of that work in the future.
These are also still very much in progress. So again, if you have any thoughts or feelings about these, this is the time to chime in, because the discussion is happening right now. Let's start with the JSON web token profile for access tokens. This spec is based on the fact that many implementations currently use JSON web tokens as access tokens. It's a very common pattern when your OAuth server is a separate component from your APIs and applications. Technically, OAuth access tokens can be any format; they don't even have to be a structured token like a JSON web token. But JSON web tokens give us a nice pattern and good library support for validating them and putting data inside them. So this standard is meant to capture that existing practice across all of the implementations that use JSON web tokens for access tokens, and define a consistent vocabulary, along with best practices and security recommendations, for using them.
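On the resource server side, validation might look roughly like this; this sketch uses the third-party PyJWT library, and the issuer and audience values are whatever your deployment uses.

```python
import jwt  # PyJWT, a third-party library

def validate_access_token(token: str, public_key: str, issuer: str, audience: str) -> dict:
    """Check the signature, issuer, audience, and expiry before trusting
    any of the claims inside the access token."""
    return jwt.decode(
        token,
        public_key,
        algorithms=["RS256"],  # never let the token pick its own algorithm
        issuer=issuer,         # must be your authorisation server
        audience=audience,     # must identify this resource server
    )
```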
Okay. Rich authorisation requests, this is a fun one. The idea here is that the OAuth concept of scope is actually very limiting. Scope works fine when you're talking about broad-ranging types of access: an application wants access to your contacts, it wants access to your photo albums, it wants to be able to upload videos. That's all stuff that can be described with OAuth scope. What can't be described with OAuth scope is something like: this application is requesting to move $10 from this account to that account, or this application is trying to initiate a payment from this person to that person. The reason OAuth scope doesn't work for that is that typically, scopes are pre-registered as fixed strings, and they're also not structured. So this is for the case where the authorisation request needs more structure than the concept of scope can provide.
A rich authorisation request is included along with the authorisation request the app is making. It's a deeply nested JSON structure with a lot of different properties: things like the type of the request, or where the access token will be used. It can carry a dollar amount, a currency, it might even have bank account info in it. All of these things can be added to a rich authorisation request, and the spec is a framework for describing these requests. So this really opens up some pretty cool new things that OAuth can't really do today. But of course, you might notice that you would never want to put something like an account number in an address bar. You probably don't want to put a dollar amount in there either, because that would let either the user or an attacker change that value on its way to the authorisation server.
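As a hypothetical illustration, loosely modelled on the payment examples in the draft, the extra detail rides in an authorization_details structure alongside the usual parameters; every value here is made up.

```python
# Hypothetical payment-initiation request. This structure would be sent
# (JSON-encoded) as the authorization_details parameter of the request.
authorization_details = [
    {
        "type": "payment_initiation",
        "locations": ["https://bank.example.com/payments"],
        "instructedAmount": {"currency": "EUR", "amount": "10.00"},
        "creditorName": "Example Merchant",
        "creditorAccount": {"iban": "DE00000000000000000000"},  # made-up IBAN
    }
]
```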
To solve that part of the problem, now that we want to put more information into the authorisation request, there's a new extension called pushed authorisation requests. Instead of putting all the information about the authorisation request in a URL and getting the user to visit that URL, the application first pushes that data to the OAuth server, gets back a reference to it, and then sends the user to a URL containing just that reference. This reduces the reliance on the front channel, which is the thing that's very hard to protect and where we can never really be sure we aren't being attacked. This moves the authorisation request data to the back channel. It's useful even for traditional OAuth, where you've just got the redirect URI, client ID, things like that, but it's especially useful when combined with rich authorisation requests, which might now include more sensitive information, or information that you really don't want the user to be able to modify.
Any data sent in the front channel runs the risk that the user modifies it, or that an attacker sees it, such as a browser extension running in the browser. So the idea with pushed authorisation requests is that it reduces the reliance on the front channel and moves the entire request to the back channel, making it much more secure and less prone to these attacks. Here's a simplified version of how it works: instead of a GET request with all the data in the address bar, the application first sends a POST request with the same data to the OAuth server, gets back a URL, and then uses that in the request it sends to the user's browser. I'm really excited about this because it cleans things up a lot, and it opens up the possibility for some really cool new stuff in the future.
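In code, the two steps might look roughly like this; the endpoint and client details are hypothetical, and a confidential client would also authenticate itself when pushing the request.

```python
import requests
from urllib.parse import urlencode

# Step 1 (back channel): push the full authorisation request to the server.
resp = requests.post(
    "https://auth.example.com/par",  # hypothetical PAR endpoint
    data={
        "response_type": "code",
        "client_id": "example-client",
        "redirect_uri": "https://app.example.com/callback",
        "scope": "read",
    },
)
request_uri = resp.json()["request_uri"]  # short-lived, opaque reference

# Step 2 (front channel): the URL the user visits carries only the reference.
authorization_url = "https://auth.example.com/authorize?" + urlencode(
    {"client_id": "example-client", "request_uri": request_uri}
)
```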
All right. The next one is JSON web token authorisation requests, also called JAR; you may be noticing a pattern here. This one is the idea that the authorisation request itself could actually be a JSON web token. This provides a way to sign the data that goes in the request, so the authorisation server knows it wasn't tampered with, even if it's sent in the front channel. Of course, this does mean that the user or an attacker could still see the data in the request, but at least they can't tamper with it, because the signature would break. The other thing this does, which is important in certain cases, is that it also acts as authentication, proving that the client did in fact initiate the request.
Right now, anybody can take any public client ID that they can find from any application and initiate an OAuth request on behalf of that client ID, because that part of the request is not authenticated. By using a JSON web token for the authorisation request, only the real client can create and properly sign the JSON web token, so the OAuth server knows it was in fact the right client that initiated the request. Here's how that works: you take all the same properties you would use in a traditional GET authorisation request, put them into a JSON object, sign it, and then use the resulting JSON web token in the authorisation request. That can be either in a GET request, using the traditional front-channel initiation, or you can combine it with pushed authorisation requests and push it up to the OAuth server first.
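A rough sketch of building one of these with a generic JWT library; the signing key, key file path, and client details are all hypothetical.

```python
import jwt  # PyJWT, a third-party library
from urllib.parse import urlencode

# Hypothetical private key the client registered with the authorisation server.
private_key_pem = open("client-signing-key.pem").read()

request_claims = {
    "iss": "example-client",
    "aud": "https://auth.example.com",
    "response_type": "code",
    "client_id": "example-client",
    "redirect_uri": "https://app.example.com/callback",
    "scope": "read",
}
request_jwt = jwt.encode(request_claims, private_key_pem, algorithm="RS256")

# Front-channel version: the whole signed request rides in one parameter.
authorization_url = "https://auth.example.com/authorize?" + urlencode(
    {"client_id": "example-client", "request": request_jwt}
)
```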
So these are all things that are still pretty experimental, although they have broad support within the group, so I fully expect to see them work their way through the process and become RFCs in the near future. That's a quick summary of things happening in the OAuth group. There's a lot more going on as well; I don't have time to get into everything, but these are the ones I'm most familiar with. Now, let's talk about another effort going on in the OAuth group, and that is OAuth 2.1. If you take a look at the current OAuth 2.0 landscape, we've got RFC 6749 and RFC 6750, published in 2012. Those are the core documents that everything is based on. 6749 is the authorisation framework: it describes things like the authorisation code flow, the implicit flow, the password flow, client credentials, how to use refresh tokens, things like that. 6750 describes access tokens in terms of bearer tokens.
The theory was that there might be other types of access tokens used that are not bearer tokens. In practice, everybody has pretty much used bearer tokens, and now we're just trying to patch those with proof of possession concepts. Over the years, it was discovered that mobile applications are, it turns out, a huge use case for OAuth; who would have thought? And they can't use a client secret, so we have to use PKCE instead. So now we've got this extension on the side, its own RFC 7636, which is PKCE, and that describes the PKCE mechanism of generating a new secret for each particular OAuth request, so that the authorisation code can't be used if it's stolen. That solves the authorisation code problem for mobile apps.
On top of that, there's a separate RFC, 8252, which talks about why you need to do PKCE for mobile apps. It also covers some of the things unique to a mobile environment that don't apply to a web-based environment. There's also now the one I mentioned earlier, the browser-based app best current practice, which is not an RFC yet, but again covers the things unique to browser-based environments doing OAuth. The other thing the bearer token RFC describes is how to send access tokens in a request. It actually describes three different ways: in an HTTP header, in a POST body, and in a query string.
Then the Security Best Current Practice came along and updated a whole bunch of this. It takes out the implicit flow. It takes out the password flow. It says you should use PKCE for every type of client, not just mobile clients, and that you can no longer send access tokens in query strings. What's happened over the years is we've published all these different RFCs building on top of each other and taking things away, and if you look at what's left at the end, it's actually a lot less complex than you might think. Really, if you follow the OAuth Security Best Current Practice, you end up with authorisation code plus PKCE as the main redirect-based flow, plus the client credentials flow for when there's no user present, and tokens can be sent either in the POST body or in an HTTP header. OAuth 2.1 is an attempt to consolidate all the specs in OAuth 2.0, adding the best current practices and removing deprecated features.
The idea is to capture the best current practice in OAuth 2.0 and give it a name. Things that OAuth 2.1 is not doing: it is not defining any new behaviour. In fact, it's not even including some of the newer extensions like the device flow, which is pretty widely established and has its own RFC. It's really just trying to be an update to the core RFC. OAuth 2.1 starts with RFC 6749, OAuth 2.0 core. It adds bearer token usage, since everybody uses bearer tokens. It adds PKCE. It adds the native app and browser-based app best current practices. And it adds the Security Best Current Practice, including everything it says, which we covered before: PKCE for everything, no password grant, no implicit flow, exact string matching for redirect URIs, no access tokens in query strings, and refresh tokens must be sender-constrained or one-time use.
You can find out more about OAuth 2.1 at OAuth.net/2.1, and you'll find the link to the current draft there. This is, again, an active item in the working group, so it's very much up for discussion, and I expect to see some changes to the text before it's done. Lastly, I want to talk about some work going on now that's very experimental, and that's the idea of a successor to OAuth 2.0: a potential OAuth 3.0. This work is being done in a completely separate group from the OAuth working group. Of course, there's a lot of overlap in the people involved, just like with OAuth and OpenID Connect, but it is a separate group. Last week at the IETF meeting, this working group had a slot on the agenda. Of course, instead of meeting in person, it was virtual, but that was the official meeting of this new working group.
That meeting was to define the scope of the charter of the group, so this is still very much in progress and happening right now. The idea with OAuth 3.0, although that's not the official name for it right now, is that this is a complete reimagining of OAuth 2.0. If you look at OAuth 2.0 and all the extensions that have been added over time, a lot of the later work, including OpenID Connect, was built on the same foundation, which means it has the same constraints that the original foundation has. You can tell that people are pushing it to its limits and finding where it kind of stops working. The idea with OAuth 3.0 is: if we don't have to be constrained by the original OAuth 2.0 core document, what could we do instead to make things cleaner and more flexible in the future?
Right now, there are two different proposals being talked about. One is called XYZ, and one is called XAuth. They work in different ways, but they share a lot of the same goals. Neither is intended to be backwards compatible with OAuth 2.0 at all. They may reuse some ideas from OAuth and OpenID Connect, but they are not at all wire compatible with them. In fact, they often define some new terms as well. The overall idea is to take all the use cases in OAuth, plus some of the things people are trying to do that don't really map to OAuth, and rethink how that might work in a new framework. One of the goals is also to reduce the reliance on the front channel, which is the thing that's been holding back OAuth and that we've had to work around in very creative ways. So, for example, in XYZ, every request is initiated via the back channel by default. It's not just an extension that some clients might do.
If you're interested in reading more about the XYZ proposal specifically, you can find more information at OAuth.xyz. That's kind of a web-based version of the proposal. There's also a draft written up, but the website is a better way to go explore and see what the ideas are. This work is happening in a group called TXAuth. You can also go to OAuth.net/3 and find links to the new working group there, see the mailing list, and join the discussions. This is a very active topic right now; a lot of people are chiming in with their thoughts and proposals, and everything you see there right now is probably going to change by the time this is published. This is also not a short project. This is going to be a multi-year project.
So don't worry, OAuth 2.0 is not going anywhere. OAuth is definitely here to stay, and we've got some pretty cool features coming in the spec as well. It's not like you're going to have to upgrade to OAuth 3.0 right away. But if you are interested in helping define it, or in participating in the development of OAuth 3.0 or any future work, please do get involved: join the OAuth working group, join the OpenID working groups, get involved and have fun. Thank you very much. I'm Aaron Parecki. You can find me on my website, aaronparecki.com. You can find copies of my book, "OAuth 2.0 Simplified," at OAuth2simplified.com. Thank you so much for watching, and I hope you have a great rest of the day.
In this talk you'll learn about the latest developments with the OAuth and OIDC specs directly from the standards group. The latest additions to the specs enable richer experiences and better security for applications using OAuth.