Responsible AI innovation at Okta
AI is at the peak of the hype cycle. Its explosive growth began in 2022 with the release of ChatGPT. While ChatGPT’s freemium model brought its impressive capabilities to mainstream attention, what drives AI commercialization is the falling barrier to entry.
Large-scale open-source models and readily available APIs are released constantly. Developers can easily adapt and redistribute these models for their own use cases without extensive computational power or training data. Across boardrooms, the question isn’t whether to adopt AI, but how to do so responsibly and swiftly.
Innovation is ingrained in Okta’s DNA. We continuously push boundaries to stay at the forefront of technology advancements to protect our customers and empower them to safely use any technology. Adopting AI into our engineering practice isn't an exception; it's integral to our strategy. So is Okta’s value-based approach to AI.
From experiments to production: Safely increasing the velocity of AI innovation
We built a strong team of data scientists and machine learning (ML) engineers who deliver numerous cutting-edge AI products, including the pioneering Identity Threat Protection. The seed of our innovation was planted a couple of years back when a group of Okta engineers gathered to create what would become the Okta AI/ML Guild.
Our members, drawn from Engineering, Strategy, and Field teams, believe every Oktanaut should stay at least aware of new technologies: inattention to emerging technology can be a fatal shortcoming for innovation.
Since then, we have grown a community for our Oktanauts to discuss a wide range of topics, from a conceptual grasp of AI and AI-enabled technology to experiments, and we weave the learnings into our products and services. We’ve invited leaders in AI innovation from universities, large tech companies, and startups to share their learnings and best engineering practices.
Prioritizing security and compliance for Okta and our customers
As part of Okta’s core value of “Drive What’s Next,” we encourage our engineers to try emerging technologies, and we stay laser-focused on how to use those technologies safely. We’ve developed internal guidelines to help our developers responsibly and securely engage with AI.
Our developers adhere to responsible AI principles regarding privacy, security, responsible innovation, and more general principles and obligations regarding customer data. For example, our hackathons have clear guardrails in place, yet teams can still effectively demonstrate AI’s potential using hypothetical scenarios.
As we engage with GenAI model-as-a-service platforms, we work to ensure appropriate data handling safeguards are in place. Earning and honoring customer trust is our priority in our approach to emerging AI technologies. We practice rigorous risk management, even at the earliest stage of product incubation.
Setting standards for responsible AI development
We have a collective responsibility to be mindful of the potential harms and unintended consequences of new technologies. At Okta, it’s of the utmost importance to build systems that are functional and robust but also safe, secure, and inclusive.
We aim to align with international human rights standards. Okta’s Engineering and Human Rights teams work together to understand and incorporate respect for human rights, such as privacy, freedom from bias, and safety, in collaboration with internal and external human rights experts.
Okta’s Responsible AI principles underscore (i) transparency; (ii) building customer trust through security, privacy, and safety; (iii) accountability; and (iv) innovating responsibly regarding inclusivity, fairness, and ethics. These principles are aligned with Okta’s values: “Love our customers.” “Always secure. Always on.” “Build and own it.” “Drive what’s next.”
“What's next” for Okta AI
Okta AI is a work in progress as we improve our customers’ productivity and security and redefine what it means to be integrated and protected. Launching Identity Threat Protection on the Workforce Identity side and Auth for GenAI on the Customer Identity side are the first steps in helping our customers understand activity in their applications, diagnose problems, and defend against various threat vectors.
With our Telephony Anti-Toll Fraud System, Okta uses a custom ML model to spot threat actors’ attempts, protecting organizations from fraud and excess telephony costs. Attack Protection provides defense against a range of attacks with features like Bot Detection, as well as other security capabilities, including Breached Password Detection and Suspicious IP Throttling.
Log Investigator, an upcoming AI capability powered by natural language processing (NLP), lets Okta admins query Okta’s System Log data in everyday language. This capability provides insights into the historical context of their Identity security, simplifying the detection of unusual or suspicious activity.
Our team is working to expand our knowledge base on AI’s impact on Identity and Access Management. With the upcoming Governance Analyzer with Okta AI, announced at Oktane 2024, customers will be able to leverage Okta’s vast datasets, from device posture to relationship data to past governance decisions, to receive critical insights and recommendations for getting authorization right. We’ll continue sharing our thought leadership and practices, and learning from our peers, through industry groups and events centered on responsible AI practices and innovation. Learn more about Okta’s Responsible AI Principles, Okta AI, and our research on AI at Work.
Have questions about this blog post? Reach out to us at [email protected].
Explore more insightful Engineering Blogs from Okta to expand your knowledge.
Ready to join our passionate team of exceptional engineers? Visit our career page.
Unlock the potential of modern and sophisticated identity management for your organization. Contact Sales for more information.