Learning Tools Interoperability

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license. Give it a read and then ask your questions in the chat. We can research this topic together.

Learning Tools Interoperability (LTI) is a standard developed by 1EdTech, known as the IMS Global Learning Consortium at the time of the standard's creation. It enables seamless integration between learning systems and external systems. In its current version, v1.3, this is done using OAuth2, OpenID Connect, and JSON Web Tokens. For example, a Learning Management System (LMS) may use LTI to host course content and tools provided by external or third-party systems on
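
Since an LTI 1.3 launch is delivered as an OpenID Connect id_token (a signed JSON Web Token), the tool's first step is to verify the token's signature and standard claims. The following is a minimal sketch in Python using the PyJWT library; the issuer, client ID, and JWKS URL are hypothetical placeholders, while the claim URIs follow the 1EdTech LTI 1.3 naming scheme.

    # Minimal sketch: verifying an LTI 1.3 launch id_token (a signed JWT).
    # Assumes the PyJWT library (with its JWKS client support) is installed.
    import jwt
    from jwt import PyJWKClient

    ISSUER = "https://lms.example.edu"              # hypothetical platform (LMS) issuer
    CLIENT_ID = "tool-client-id"                    # hypothetical audience for the tool
    JWKS_URL = "https://lms.example.edu/.well-known/jwks.json"  # hypothetical key set URL

    def verify_lti_launch(id_token: str) -> dict:
        # Fetch the platform public key matching the token header's key ID.
        signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(id_token)
        # Validate the signature, issuer, audience, and expiry in one call.
        claims = jwt.decode(
            id_token,
            signing_key.key,
            algorithms=["RS256"],
            audience=CLIENT_ID,
            issuer=ISSUER,
        )
        # LTI claims are namespaced URIs; a basic launch should carry the
        # resource link message type.
        message_type = claims.get(
            "https://purl.imsglobal.org/spec/lti/claim/message_type")
        if message_type != "LtiResourceLinkRequest":
            raise ValueError(f"unexpected LTI message type: {message_type!r}")
        return claims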

A serial number. Once assigned a number and published, an RFC is never rescinded or modified; if the document requires amendments, the authors publish a revised document. Therefore, some RFCs supersede others; the superseded RFCs are said to be deprecated, obsolete, or obsoleted by the superseding RFC. Together, the serialized RFCs compose a continuous historical record of the evolution of Internet standards and practices. The RFC process

A session fixation security flaw in the 1.0 protocol was announced. It affects the OAuth authorization flow (also known as "3-legged OAuth") in OAuth Core 1.0 Section 6. Version 1.0a of the OAuth Core protocol was issued to address this issue. In January 2013, the Internet Engineering Task Force published a threat model for OAuth 2.0. Among the threats outlined is one called "Open Redirector"; in early 2014,

A web site without requiring a learner to log in separately on the external systems. LTI also shares learner information and the learning context provided by the LMS with the external systems. LTI has been adopted by many large educational content providers, including Pearson and McGraw Hill. Popular learning management systems such as D2L Brightspace, Instructure Canvas, Blackboard, BenchPrep, LAMS, OpenLearning, Sakai, Moodle, Totara, iTeach, EduWave K-12 and Open edX also support LTI.

OAuth2

OAuth (short for open authorization)

A "whole new frontier to sell consulting services and integration solutions". In comparing OAuth 2.0 with OAuth 1.0, Hammer points out that it has become "more complex, less interoperable, less useful, more incomplete, and most importantly, less secure". He explains how architectural changes for 2.0 unbound tokens from clients, removed all signatures and cryptography at a protocol level, and added expiring tokens (because tokens could not be revoked) while complicating

A colleague, employer or friend wanting to share a document on Google Docs. Those who clicked on the link within the email were directed to sign in and allow a potentially malicious third-party program called "Google Apps" to access their "email account, contacts and online documents". Within "approximately one hour", the phishing attack was stopped by Google, who advised those who had given "Google Apps" access to their email to revoke such access and change their passwords. In

A common set of terms such as "MUST" and "NOT RECOMMENDED" (as defined by RFC 2119 and RFC 8174), augmented Backus–Naur form (ABNF) (RFC 5234) as a meta-language, and simple text-based formatting, in order to keep the RFCs consistent and easy to understand. The RFC series contains three sub-series for IETF RFCs: BCP, FYI, and STD. Best Current Practice (BCP) is a sub-series of mandatory IETF RFCs not on standards track. For Your Information (FYI)

A kind of "valet key" that the application can include with its requests to the identity provider, which proves that it has permission from the user to access those APIs. Because the identity provider typically (but not always) authenticates the user as part of the process of granting an OAuth access token, it is tempting to view a successful OAuth access token request as an authentication method itself. However, because OAuth
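
In practice, this "valet key" is usually a bearer access token sent in the HTTP Authorization header of each API call. A minimal sketch in Python with the requests library, assuming a hypothetical protected endpoint and an already-obtained token:

    import requests

    ACCESS_TOKEN = "example-access-token"            # hypothetical token obtained via OAuth
    API_URL = "https://api.example.com/v1/profile"   # hypothetical protected resource

    # The token is presented as a bearer credential (RFC 6750); the resource
    # server treats it as proof of the user's delegated consent, not of identity.
    response = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    print(response.json())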

A new submission which will receive a new serial number. Standards track documents are further divided into Proposed Standard and Internet Standard documents. Only the IETF, represented by the Internet Engineering Steering Group (IESG), can approve standards-track RFCs. If an RFC becomes an Internet Standard (STD), it is assigned an STD number but retains its RFC number. The definitive list of Internet Standards

A secured Google Site could not have been accessed using Google Reader. Instead, three-legged OAuth would have been used to authorize that RSS client to access the feed from the Google Site. OAuth is a service that is complementary to and distinct from OpenID. OAuth is unrelated to OATH, which is a reference architecture for authentication, not a standard for authorization. However, OAuth

A similar fashion; BCP n refers to a certain RFC or set of RFCs, but which RFC or RFCs may change over time). An informational RFC can be nearly anything from April 1 jokes to widely recognized essential RFCs like Domain Name System Structure and Delegation (RFC 1591). Some informational RFCs formed the FYI sub-series. An experimental RFC can be an IETF document or an individual submission to

A variant of this was described under the name "Covert Redirect" by Wang Jing. OAuth 2.0 has been analyzed using formal web protocol analysis. This analysis revealed that in setups with multiple authorization servers, one of which is behaving maliciously, clients can become confused about the authorization server to use and may forward secrets to the malicious authorization server (AS Mix-Up Attack). This prompted

Is a sub-series of informational RFCs promoted by the IETF as specified in RFC 1150 (FYI 1). In 2011, RFC 6360 obsoleted FYI 1 and concluded this sub-series. Standard (STD) used to be the third and highest maturity level of the IETF standards track specified in RFC 2026 (BCP 9). In 2011, RFC 6410 (a new part of BCP 9) reduced the standards track to two maturity levels. There are five streams of RFCs: IETF, IRTF, IAB, independent submission, and Editorial. Only

Is an authorization protocol, rather than an authentication protocol. Using OAuth on its own as an authentication method may be referred to as pseudo-authentication. The following diagrams highlight the differences between using OpenID (specifically designed as an authentication protocol) and OAuth for authorization. The communication flow in both processes is similar: The crucial difference

Is an open standard for access delegation, commonly used as a way for internet users to grant websites or applications access to their information on other websites but without giving them the passwords. This mechanism is used by companies such as Amazon, Google, Meta Platforms, Microsoft, and Twitter to permit users to share information about their accounts with third-party applications or websites. Generally,

Is another of the first four ARPANET nodes and the source of early RFCs. The ARC became the first network information center (InterNIC), which was managed by Elizabeth J. Feinler to distribute the RFCs along with other network information. From 1969 until 1998, Jon Postel served as the RFC editor. On his death in 1998, his obituary was published as RFC 2468. Following

Is directly related to OpenID Connect (OIDC), since OIDC is an authentication layer built on top of OAuth 2.0. OAuth is also unrelated to XACML, which is an authorization policy standard. OAuth can be used in conjunction with XACML, where OAuth is used for ownership consent and access delegation whereas XACML is used to define the authorization policies (e.g., managers can view documents in their region). OAuth

Is documented in RFC 2026 (The Internet Standards Process, Revision 3). The RFC production process differs from the standardization process of formal standards organizations such as the International Organization for Standardization (ISO). Internet technology experts may submit an Internet Draft without support from an external institution. Standards-track RFCs are published with approval from

Is obsoleted by various newer RFCs, but SMTP itself is still "current technology", so it is not in "Historic" status. However, since BGP version 4 has entirely superseded earlier BGP versions, the RFCs describing those earlier versions, such as RFC 1267, have been designated historic. Status unknown is used for some very old RFCs, where it is unclear which status the document would get if it were published today. Some of these RFCs would not be published at all today; an early RFC

Is submitted as plain ASCII text and is published in that form, but may also be available in other formats. For easy access to the metadata of an RFC, including abstract, keywords, author(s), publication date, errata, status, and especially later updates, the RFC Editor site offers a search form with many features. A redirection sets some efficient parameters, for example: rfc:5000. The official International Standard Serial Number (ISSN) of

Is that in the OpenID authentication use case, the response from the identity provider is an assertion of identity; while in the OAuth authorization use case, the identity provider is also an API provider, and the response from the identity provider is an access token that may grant the application ongoing access to some of the identity provider's APIs, on the user's behalf. The access token acts as

Is the Official Internet Protocol Standards. Previously, STD 1 used to maintain a snapshot of the list. When an Internet Standard is updated, its STD number stays the same, now referring to a new RFC or set of RFCs. A given Internet Standard, STD n, may be RFCs x and y at a given time, but later the same standard may be updated to be RFC z instead. For example, in 2007 RFC 3700

The Internet Research Task Force (IRTF), and an independent stream from other outside sources. A new model was proposed in 2008, refined, and published in August 2009, splitting the task into several roles, including the RFC Series Advisory Group (RSAG). The model was updated in 2012. The streams were also refined in December 2009, with standards defined for their style. In January 2010, the RFC Editor function

The IETF creates BCPs and RFCs on the standards track. The IAB publishes informational documents relating to policy or architecture. The IRTF publishes the results of research, either as informational documents or as experiments. Independent submissions are published at the discretion of the Independent Submissions Editor. Non-IETF documents are reviewed by the IESG for conflicts with IETF work. IRTF and independent RFCs generally contain relevant information or experiments for

The IETF, and are usually produced by experts participating in IETF Working Groups, which first publish an Internet Draft. This approach facilitates initial rounds of peer review before documents mature into RFCs. The RFC tradition of pragmatic, experience-driven, after-the-fact standards authorship accomplished by individuals or small working groups can have important advantages over the more formal, committee-driven process typical of ISO and national standards bodies. Most RFCs use

The Internet at large not in conflict with IETF work (compare RFC 4846, RFC 5742 and RFC 5744). The Editorial Stream is used to effect editorial policy changes across the RFC series (see RFC 9280). The official source for RFCs on the World Wide Web is the RFC Datatracker. Almost any published RFC can be retrieved via a URL of the form https://datatracker.ietf.org/doc/html/rfc5000, shown for RFC 5000. Every RFC
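
As a small illustration of the URL pattern just mentioned, the sketch below fetches an RFC from the Datatracker with Python's requests library; RFC 5000 is used simply because it is the example in the text.

    import requests

    # Retrieve the HTML rendering of RFC 5000 via the documented URL form
    # https://datatracker.ietf.org/doc/html/rfcNNNN.
    url = "https://datatracker.ietf.org/doc/html/rfc5000"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    print(response.text[:300])  # show the first few hundred characters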

The OAuth 2.0 project, withdrew from the IETF working group, and removed his name from the specification in July 2012. Hammer cited a conflict between web and enterprise cultures as his reason for leaving, noting that IETF is a community that is "all about enterprise use cases" and "not capable of simple". "What is now offered is a blueprint for an authorization protocol", he noted, "that is the enterprise way", providing

The OAuth protocol provides a way for resource owners to provide a client application with secure delegated access to server resources. It specifies a process for resource owners to authorize third-party access to their server resources without providing credentials. Designed specifically to work with Hypertext Transfer Protocol (HTTP), OAuth essentially allows access tokens to be issued to third-party clients by an authorization server, with
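
For concreteness, the sketch below shows the final step of the widely used authorization code grant: after the resource owner approves the request, the client trades the authorization code delivered to its redirect URI for an access token at the authorization server's token endpoint. All URLs and credentials here are hypothetical placeholders.

    import requests

    TOKEN_URL = "https://auth.example.com/oauth/token"     # hypothetical token endpoint
    CLIENT_ID = "my-client-id"                             # hypothetical client credentials
    CLIENT_SECRET = "my-client-secret"                     # confidential clients only
    REDIRECT_URI = "https://app.example.com/callback"      # hypothetical redirect URI

    def exchange_code_for_token(authorization_code: str) -> dict:
        # The code was handed to the client on its redirect URI once the
        # resource owner consented; it is now exchanged for tokens.
        response = requests.post(
            TOKEN_URL,
            data={
                "grant_type": "authorization_code",
                "code": authorization_code,
                "redirect_uri": REDIRECT_URI,
                "client_id": CLIENT_ID,
                "client_secret": CLIENT_SECRET,
            },
            timeout=10,
        )
        response.raise_for_status()
        # A successful response typically carries access_token, token_type,
        # expires_in and, optionally, a refresh_token.
        return response.json()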

The RFC Editor. A draft is designated experimental if it is unclear whether the proposal will work as intended or whether it will be widely adopted. An experimental RFC may be promoted to standards track if it becomes popular and works well. The Best Current Practice subseries collects administrative documents and other texts which are considered official rules and not only informational, but which do not affect over

The RFC Series Approval Board (RSAB). It also established a new Editorial Stream for the RFC Series and concluded the RSOC. The role of the RSE was changed to the RFC Series Consulting Editor (RSCE). In September 2022, Alexis Rossi was appointed to that position. Requests for Comments were originally produced in non-reflowable text format. In August 2019, the format was changed so that new documents can be viewed optimally in devices with varying display sizes. The RFC Editor assigns each RFC

The RFC series is 2070-1721. Not all RFCs are standards. Each RFC is assigned a designation with regard to status within the Internet standardization process. This status is one of the following: Informational, Experimental, Best Current Practice, Standards Track, or Historic. Once submitted, accepted, and published, an RFC cannot be changed. Errata may be submitted, which are published separately. More significant changes require

The RFC series to the Network Working Group. Rather than being a formal committee, it was a loose association of researchers interested in the ARPANET project. In effect, it included anyone who wanted to join the meetings and discussions about the project. Many of the subsequent RFCs of the 1970s also came from UCLA, because UCLA hosted one of the first Interface Message Processors (IMPs) on ARPANET. The Augmentation Research Center (ARC) at Stanford Research Institute, directed by Douglas Engelbart,

The Twitter and Magnolia APIs to delegate authentication. They concluded that there were no open standards for API access delegation. The OAuth discussion group was created in April 2007 for a small group of implementers to write the draft proposal for an open protocol. DeWitt Clinton from Google learned of the OAuth project and expressed his interest in supporting the effort. In July 2007,

The approval of the resource owner. The third party then uses the access token to access the protected resources hosted by the resource server. OAuth began in November 2006 when Blaine Cook was developing the Twitter OpenID implementation. Meanwhile, Ma.gnolia needed a solution to allow its members with OpenIDs to authorize Dashboard Widgets to access their service. Cook, Chris Messina and Larry Halff from Magnolia met with David Recordon to discuss using OpenID with

The coarse functionality (the scopes) exposed by the target service. As a result, it often makes sense to combine OAuth and XACML, where OAuth provides the delegated access use case and consent management and XACML provides the authorization policies that work on the applications, processes, and data. Lastly, XACML can work transparently across multiple stacks (APIs, web SSO, ESBs, home-grown apps, databases...), whereas OAuth focuses exclusively on HTTP-based apps. Eran Hammer resigned from his role of lead author for

The creation of a new best current practice Internet Draft that sets out to define a new security standard for OAuth 2.0. Assuming a fix against the AS Mix-Up Attack is in place, the security of OAuth 2.0 has been proven under strong attacker models using formal analysis. One implementation of OAuth 2.0 with numerous security flaws has been exposed. In April and May 2017, about one million users of Gmail (less than 0.1% of users as of May 2017) were targeted by an OAuth-based phishing attack, receiving an email purporting to be from

The documents "shape the Internet's inner workings and have played a significant role in its success," but are not widely known outside the community. Outside of the Internet community, other documents also called requests for comments have been published, as in U.S. Federal government work, such as the National Highway Traffic Safety Administration. The inception of the RFC format occurred in 1969 as part of

The draft of OAuth 2.1, the use of the PKCE extension for native apps has been recommended for all kinds of OAuth clients, including web applications and other confidential clients, in order to prevent malicious browser extensions from performing OAuth 2.0 code injection attacks. The OAuth framework specifies several grant types for different use cases. Some of the most common OAuth grant types are the authorization code grant (with the PKCE extension), the client credentials grant, the device code grant, and the refresh token grant. Facebook's Graph API only supports OAuth 2.0. Google supports OAuth 2.0 as
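
The PKCE extension mentioned above binds the authorization code to the client that requested it: the client keeps a random code verifier secret and sends only a derived code challenge with the initial authorization request. A minimal sketch of the S256 derivation in Python, using only the standard library:

    import base64
    import hashlib
    import secrets

    def make_pkce_pair() -> tuple[str, str]:
        # The code_verifier is a high-entropy random string held by the client.
        code_verifier = secrets.token_urlsafe(64)
        # The code_challenge is the base64url-encoded SHA-256 hash of the
        # verifier, with padding stripped (the "S256" method).
        digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
        code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
        return code_verifier, code_challenge

    verifier, challenge = make_pkce_pair()
    # The challenge (with method "S256") accompanies the authorization request;
    # the verifier is revealed only in the later token exchange, so an
    # intercepted or injected authorization code cannot be redeemed on its own.
    print(challenge)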

The expiration of the original ARPANET contract with the U.S. federal government, the Internet Society, acting on behalf of the IETF, contracted with the Networking Division of the University of Southern California (USC) Information Sciences Institute (ISI) to assume the editorship and publishing responsibilities under the direction of the IAB. Sandy Ginoza joined USC/ISI in 1999 to work on RFC editing, and Alice Hagens in 2005. Bob Braden took over

The modern RFCs, many of the early RFCs were actual Requests for Comments and were titled as such to avoid sounding too declarative and to encourage discussion. The RFC leaves questions open and is written in a less formal style. This less formal style is now typical of Internet Draft documents, the precursor step before being approved as an RFC. In December 1969, researchers began distributing new RFCs via

The newly operational ARPANET. RFC 1, titled "Host Software", was written by Steve Crocker of the University of California, Los Angeles (UCLA), and published on April 7, 1969. Although written by Steve Crocker, the RFC had emerged from an early working group discussion between Steve Crocker, Steve Carr, and Jeff Rulifson. In RFC 3, which first defined the RFC series, Crocker started attributing

The principal technical development and standards-setting bodies for the Internet, most prominently the Internet Engineering Task Force (IETF). An RFC is authored by individuals or groups of engineers and computer scientists in the form of a memorandum describing methods, behaviors, research, or innovations applicable to the working of the Internet and Internet-connected systems. It is submitted either for peer review or to convey new concepts, information, or, occasionally, engineering humor. The IETF adopts some of

The processing of authorization. Numerous items were left unspecified or unlimited in the specification because "as has been the nature of this working group, no issue is too small to get stuck on or leave open for each implementation to decide." David Recordon later also removed his name from the specifications for unspecified reasons. Dick Hardt took over the editor role, and the framework

The program were included in the RFC Editor Model (Version 3), as defined in RFC 9280, published in June 2022. Generally, the new model is intended to clarify responsibilities and processes for defining and implementing policies related to the RFC series and the RFC Editor function. Changes in the new model included establishing the position of the RFC Consulting Editor, the RFC Series Working Group (RSWG), and

The proposals published as RFCs as Internet Standards. However, many RFCs are informational or experimental in nature and are not standards. The RFC system was invented by Steve Crocker in 1969 to help record unofficial notes on the development of ARPANET. RFCs have since become official documents of Internet specifications, communications protocols, procedures, and events. According to Crocker,

The recommendation to use source filtering to make DoS attacks more difficult (RFC 2827: "Network Ingress Filtering: Defeating Denial of Service Attacks which employ IP Source Address Spoofing") is BCP 38. A historic RFC is one whose technology is no longer recommended for use, which differs from the "Obsoletes" header in a replacement RFC. For example, RFC 821 (SMTP) itself

The recommended authorization mechanism for all of its APIs. Microsoft also supports OAuth 2.0 for various APIs and its Azure Active Directory service, which is used to secure many Microsoft and third-party APIs. OAuth can be used as an authorizing mechanism to access secured RSS/Atom feeds. Access to RSS/Atom feeds that require authentication has always been an issue. For example, an RSS feed from

The role of RFC project lead, while Joyce K. Reynolds continued to be part of the team until October 13, 2006. In July 2007, streams of RFCs were defined, so that the editing duties could be divided. IETF documents came from IETF working groups or submissions sponsored by an IETF area director from the Internet Engineering Steering Group. The IAB can publish its own documents. A research stream of documents comes from

The seminal ARPANET project. Today, it is the official publication channel for the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), and – to some extent – the global community of computer network researchers in general. The authors of the first RFCs typewrote their work and circulated hard copies among the ARPA researchers. Unlike

The team drafted an initial specification. Eran Hammer joined and coordinated the many OAuth contributions, creating a more formal specification. On 4 December 2007, the OAuth Core 1.0 final draft was released. At the 73rd Internet Engineering Task Force (IETF) meeting in Minneapolis in November 2008, an OAuth BoF was held to discuss bringing the protocol into the IETF for further standardization work. The event

The user, grant Twitter access to my Facebook wall), and identity-centric authorization, XACML takes an attribute-based approach which can consider attributes of the user, the action, the resource, and the context (who, what, where, when, how). With XACML it is possible to define policies such as "managers can view documents in their region". XACML provides more fine-grained access control than OAuth does. OAuth is limited in granularity to
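
XACML policies themselves are written in XML, but the attribute-based idea can be illustrated with a small Python sketch (this is not XACML syntax): the decision considers attributes of the user, the action, the resource, and the context, rather than only a coarse OAuth scope.

    from dataclasses import dataclass
    from datetime import time

    @dataclass
    class AccessRequest:
        role: str          # attribute of the user (who)
        action: str        # what they are trying to do
        doc_region: str    # attribute of the resource (where it belongs)
        user_region: str   # attribute of the user
        now: time          # context (when)

    def is_permitted(req: AccessRequest) -> bool:
        # Illustrative attribute-based rule, in the spirit of
        # "managers can view documents in their region, during office hours".
        return (
            req.role == "manager"
            and req.action == "view"
            and req.doc_region == req.user_region
            and time(9, 0) <= req.now <= time(17, 0)
        )

    # An OAuth scope such as "documents:read" names a coarse capability; it
    # cannot express the region or time-of-day conditions checked above.
    print(is_permitted(AccessRequest("manager", "view", "EMEA", "EMEA", time(10, 30))))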

The wider IETF community. Although built on the OAuth 1.0 deployment experience, OAuth 2.0 is not backwards compatible with OAuth 1.0. OAuth 2.0 was published as RFC 6749 and the Bearer Token Usage as RFC 6750, both standards track Requests for Comments, in October 2012. The OAuth 2.1 Authorization Framework is in draft stage and consolidates the functionality of the RFCs OAuth 2.0, OAuth 2.0 for Native Apps, Proof Key for Code Exchange, OAuth 2.0 for Browser-Based Apps, OAuth 2.0 Security Best Current Practice, and Bearer Token Usage. On 23 April 2009,

The wire data. The border between standards track and BCP is often unclear. If a document only affects the Internet Standards Process, like BCP 9, or IETF administration, it is clearly a BCP. If it only defines rules and regulations for Internet Assigned Numbers Authority (IANA) registries it is less clear; most of these documents are BCPs, but some are on the standards track. The BCP series also covers technical recommendations for how to practice Internet standards; for instance,

Was an Internet Standard—STD 1—and in May 2008 it was replaced with RFC 5000, so RFC 3700 changed to Historic, RFC 5000 became an Internet Standard, and as of May 2008 STD 1 is RFC 5000. As of December 2013, RFC 5000 is replaced by RFC 7100, updating RFC 2026 to no longer use STD 1. (Best Current Practices work in

Was moved to a contractor, Association Management Solutions, with Glenn Kowack serving as interim series editor. In late 2011, Heather Flanagan was hired as the permanent RFC Series Editor (RSE). Also at that time, an RFC Series Oversight Committee (RSOC) was created. In 2020, the IAB convened the RFC Editor Future Development program to discuss potential changes to the RFC Editor model. The results of

Was not designed with this use case in mind, making this assumption can lead to major security flaws. XACML is a policy-based, attribute-based access control authorization framework. It provides an access-control architecture, a policy language, and a request/response scheme. XACML and OAuth can be combined to deliver a more comprehensive approach to authorization. OAuth does not provide a policy language with which to define access control policies; XACML can be used for its policy language. Where OAuth focuses on delegated access (I,

Was often just that: a simple Request for Comments, not intended to specify a protocol, administrative procedure, or anything else for which the RFC series is used today. The general rule is that original authors (or their employers, if their employment conditions so stipulate) retain copyright unless they make an explicit transfer of their rights. An independent body, the IETF Trust, holds

Was published in October 2012. David Harris, author of the email client Pegasus Mail, has criticised OAuth 2.0 as "an absolute dog's breakfast", requiring developers to write custom modules specific to each service (Gmail, Microsoft Mail services, etc.), and to register specifically with them.

Request for Comments

A Request for Comments (RFC) is a publication in a series from

Was well attended and there was wide support for formally chartering an OAuth working group within the IETF. The OAuth 1.0 protocol was published as RFC 5849, an informational Request for Comments, in April 2010. Since 31 August 2010, all third-party Twitter applications have been required to use OAuth. The OAuth 2.0 framework was published considering additional use cases and extensibility requirements gathered from
