Real Time Streaming Protocol

The Real Time Streaming Protocol (RTSP) is a network control protocol designed for use in entertainment and communications systems to control streaming media servers. The protocol is used for establishing and controlling media sessions between end points. Clients of media servers issue VCR-style commands, such as play and pause, to facilitate real-time control of playback of media files from the server. The transmission of the streaming data itself is not a task of RTSP: most RTSP servers use the Real-time Transport Protocol (RTP) in conjunction with the Real-time Control Protocol (RTCP) for media stream delivery, though some vendors implement proprietary transport protocols. The RTSP server software from RealNetworks, for example, also used RealNetworks' proprietary Real Data Transport (RDT).

RTSP was developed by RealNetworks, Netscape[1] and Columbia University, with the first draft submitted to the IETF in 1996.[2] It was standardized by the Multiparty Multimedia Session Control Working Group (MMUSIC WG) of the Internet Engineering Task Force (IETF) and published as RFC 2326 in 1998.[3] RTSP 2.0 is under development as a replacement for RTSP 1.0; it is based on RTSP 1.0 but is not backwards compatible other than in the basic version negotiation mechanism.[4] RTSP used together with RTP and RTCP allows for the implementation of rate adaptation.

Protocol directives

While similar in some ways to HTTP, RTSP defines control sequences useful in controlling multimedia playback. While HTTP is stateless, RTSP is stateful; an identifier is used when needed to track concurrent sessions. Like HTTP, RTSP uses TCP to maintain an end-to-end connection and, while most RTSP control messages are sent by the client to the server, some commands travel in the other direction (i.e., from server to client). The basic RTSP requests are presented below. Some typical HTTP requests, such as the OPTIONS request, are also available. The default transport-layer port number is 554.

OPTIONS

An OPTIONS request returns the request types the server will accept.

    C->S: OPTIONS rtsp://example.com/media.mp4 RTSP/1.0
          CSeq: 1
          Require: implicit-play
          Proxy-Require: gzipped-messages

    S->C: RTSP/1.0 200 OK
          CSeq: 1
          Public: DESCRIBE, SETUP, TEARDOWN, PLAY, PAUSE
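This request/response pattern can be exercised with a few lines of socket code. The following is a minimal sketch rather than a complete client: it reuses the hypothetical server from the examples above, sends a single OPTIONS request on the default port 554, and prints whatever the server sends back.

    import socket

    SERVER = "example.com"   # hypothetical host from the examples above
    PORT = 554               # default RTSP transport-layer port

    request = (
        f"OPTIONS rtsp://{SERVER}/media.mp4 RTSP/1.0\r\n"
        "CSeq: 1\r\n"
        "\r\n"
    )

    with socket.create_connection((SERVER, PORT), timeout=5) as sock:
        sock.sendall(request.encode("ascii"))
        # A single recv() suffices for a short header-only reply (sketch;
        # a real client would read until the blank line ending the headers).
        reply = sock.recv(4096).decode("ascii", errors="replace")

    # A well-behaved server answers "RTSP/1.0 200 OK" with a Public: header
    # listing the methods it supports.
    print(reply)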
DESCRIBE

A DESCRIBE request includes an RTSP URL (rtsp://...) and the type of reply data that can be handled. The default port for RTSP is 554 for both UDP (used in lower-latency applications where compromised rendering quality may be acceptable) and TCP transports. The reply includes the presentation description, typically in Session Description Protocol (SDP) format. Among other things, the presentation description lists the media streams controlled with the aggregate URL. In the typical case, there is one media stream each for audio and video.

    C->S: DESCRIBE rtsp://example.com/media.mp4 RTSP/1.0
          CSeq: 2

    S->C: RTSP/1.0 200 OK
          CSeq: 2
          Content-Base: rtsp://example.com/media.mp4
          Content-Type: application/sdp
          Content-Length: 460

          m=video 0 RTP/AVP 96
          a=control:streamid=0
          a=range:npt=0-7.741000
          a=length:npt=7.741000
          a=rtpmap:96 MP4V-ES/5544
          a=mimetype:string;"video/MP4V-ES"
          a=AvgBitRate:integer;304018
          a=StreamName:string;"hinted video track"
          m=audio 0 RTP/AVP 97
          a=control:streamid=1
          a=range:npt=0-7.712000
          a=length:npt=7.712000
          a=rtpmap:97 mpeg4-generic/32000/2
          a=mimetype:string;"audio/mpeg4-generic"
          a=AvgBitRate:integer;65790
          a=StreamName:string;"hinted audio track"

SETUP

A SETUP request specifies how a single media stream must be transported. This must be done before a PLAY request is sent. The request contains the media stream URL and a transport specifier. This specifier typically includes a local port for receiving RTP data (audio or video) and another for RTCP data (meta information). The server reply usually confirms the chosen parameters and fills in the missing parts, such as the server's chosen ports. Each media stream must be configured using SETUP before an aggregate play request may be sent.

    C->S: SETUP rtsp://example.com/media.mp4/streamid=0 RTSP/1.0
          CSeq: 3
          Transport: RTP/AVP;unicast;client_port=8000-8001

    S->C: RTSP/1.0 200 OK
          CSeq: 3
          Transport: RTP/AVP;unicast;client_port=8000-8001;server_port=9000-9001
          Session: 12345678

PLAY

A PLAY request causes one or all media streams to be played. PLAY requests can be stacked by sending multiple PLAY requests. The URL may be the aggregate URL (to play all media streams) or a single media stream URL (to play only that stream). A range can be specified. If no range is specified, the stream is played from the beginning to the end or, if the stream is paused, it is resumed at the point it was paused.

    C->S: PLAY rtsp://example.com/media.mp4 RTSP/1.0
          CSeq: 4
          Range: npt=5-20
          Session: 12345678

    S->C: RTSP/1.0 200 OK
          CSeq: 4
          Session: 12345678
          RTP-Info: url=rtsp://example.com/media.mp4/streamid=0;seq=9810092;rtptime=3450012

PAUSE

A PAUSE request temporarily halts one or all media streams, so that they can later be resumed with a PLAY request. The request contains an aggregate or media stream URL. A Range parameter on a PAUSE request specifies when to pause; when the parameter is omitted, the pause occurs immediately and indefinitely.

    C->S: PAUSE rtsp://example.com/media.mp4 RTSP/1.0
          CSeq: 5
          Session: 12345678

    S->C: RTSP/1.0 200 OK
          CSeq: 5
          Session: 12345678

RECORD

The RECORD method initiates recording a range of media data according to the presentation description. The timestamp reflects the start and end time (UTC). If no time range is given, the start or end time provided in the presentation description is used. If the session has already started, recording commences immediately. The server decides whether to store the recorded data under the request URI or another URI. If the server does not use the request URI, the response should be 201 and contain an entity which describes the status of the request and refers to the new resource, as well as a Location header.

    C->S: RECORD rtsp://example.com/media.mp4 RTSP/1.0
          CSeq: 6
          Session: 12345678

    S->C: RTSP/1.0 200 OK
          CSeq: 6
          Session: 12345678
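All of the requests above share the same envelope: a method and URL on the request line, a monotonically increasing CSeq, and (once a session is established) the Session header. A small request builder captures the pattern; this is a sketch under those assumptions, and the function name is illustrative.

    def rtsp_request(method: str, url: str, cseq: int,
                     session: str = None, **headers) -> str:
        """Format one RTSP/1.0 request: request line, CSeq, optional Session,
        then any extra headers (underscores in keyword names become hyphens)."""
        lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
        if session:
            lines.append(f"Session: {session}")
        lines += [f"{k.replace('_', '-')}: {v}" for k, v in headers.items()]
        return "\r\n".join(lines) + "\r\n\r\n"

    # Rebuilds the PLAY request shown earlier (header order is not significant):
    print(rtsp_request("PLAY", "rtsp://example.com/media.mp4", 4,
                       session="12345678", Range="npt=5-20"))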
ANNOUNCE

The ANNOUNCE method serves two purposes: when sent from client to server, it posts the description of a presentation or media object identified by the request URL to the server; when sent from server to client, it updates the session description in real time. If a new media stream is added to a presentation (e.g., during a live presentation), the whole presentation description should be sent again, rather than just the additional components, so that components can be deleted.

    C->S: ANNOUNCE rtsp://example.com/media.mp4 RTSP/1.0
          CSeq: 7
          Date: 23 Jan 1997 15:35:06 GMT
          Session: 12345678
          Content-Type: application/sdp
          Content-Length: 332

          v=0
          o=mhandley 2890844526 2890845468 IN IP4 126.16.64.4
          s=SDP Seminar
          i=A Seminar on the session description protocol
          u=http://www.cs.ucl.ac.uk/staff/M.Handley/sdp.03.ps
          e=mjh@isi.edu (Mark Handley)
          c=IN IP4 224.2.17.12/127
          t=2873397496 2873404696
          a=recvonly
          m=audio 3456 RTP/AVP 0
          m=video 2232 RTP/AVP 31

    S->C: RTSP/1.0 200 OK
          CSeq: 7

TEARDOWN

A TEARDOWN request is used to terminate the session. It stops all media streams and frees all session-related data on the server.

    C->S: TEARDOWN rtsp://example.com/media.mp4 RTSP/1.0
          CSeq: 8
          Session: 12345678

    S->C: RTSP/1.0 200 OK
          CSeq: 8

GET_PARAMETER

The GET_PARAMETER request retrieves the value of a parameter of a presentation or stream specified in the URI. The content of the reply and response is left to the implementation. GET_PARAMETER with no entity body may be used to test client or server liveness ("ping").

    S->C: GET_PARAMETER rtsp://example.com/media.mp4 RTSP/1.0
          CSeq: 9
          Content-Type: text/parameters
          Session: 12345678
          Content-Length: 15

          packets_received
          jitter

    C->S: RTSP/1.0 200 OK
          CSeq: 9
          Content-Length: 46
          Content-Type: text/parameters

          packets_received: 10
          jitter: 0.3838

SET_PARAMETER

The SET_PARAMETER method requests setting the value of a parameter for a presentation or stream specified by the URI.

    C->S: SET_PARAMETER rtsp://example.com/media.mp4 RTSP/1.0
          CSeq: 10
          Content-length: 20
          Content-type: text/parameters

          barparam: barstuff

    S->C: RTSP/1.0 451 Invalid Parameter
          CSeq: 10
          Content-length: 10
          Content-type: text/parameters

          barparam

REDIRECT

A REDIRECT request informs the client that it must connect to another server location. It contains the mandatory header Location, which indicates the URL the client should issue requests for. It may contain the parameter Range, which indicates when the redirection takes effect. If the client wants to continue to send or receive media for this URI, the client MUST issue a TEARDOWN request for the current session and a SETUP for the new session at the designated host.

    S->C: REDIRECT rtsp://example.com/media.mp4 RTSP/1.0
          CSeq: 11
          Location: rtsp://bigserver.com:8001
          Range: clock=19960213T143205Z-

Embedded (interleaved) binary data

Certain firewall designs and other circumstances may force a server to interleave RTSP methods and stream data. This interleaving should generally be avoided unless necessary, since it complicates client and server operation and imposes additional overhead. Interleaved binary data SHOULD only be used if RTSP is carried over TCP. Stream data such as RTP packets is encapsulated by an ASCII dollar sign (hexadecimal 24), followed by a one-byte channel identifier, followed by the length of the encapsulated binary data as a binary two-byte integer in network byte order. The stream data follows immediately afterwards, without a CRLF, but including the upper-layer protocol headers. Each $ block contains exactly one upper-layer protocol data unit, e.g., one RTP packet.
    C->S: SETUP rtsp://example.com/media.mp4 RTSP/1.0
          CSeq: 3
          Transport: RTP/AVP/TCP;interleaved=0-1

    S->C: RTSP/1.0 200 OK
          CSeq: 3
          Date: 05 Jun 1997 18:57:18 GMT
          Transport: RTP/AVP/TCP;interleaved=0-1
          Session: 12345678

    C->S: PLAY rtsp://example.com/media.mp4 RTSP/1.0
          CSeq: 4
          Session: 12345678

    S->C: RTSP/1.0 200 OK
          CSeq: 4
          Session: 12345678
          Date: 05 Jun 1997 18:59:15 GMT
          RTP-Info: url=rtsp://example.com/media.mp4;seq=232433;rtptime=972948234

    S->C: $\000{2 byte length}{"length" bytes data, w/RTP header}
    S->C: $\000{2 byte length}{"length" bytes data, w/RTP header}
    S->C: $\001{2 byte length}{"length" bytes RTCP packet}
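The $-framing is simple to undo on the receiving side. The sketch below reads one interleaved block from any binary file-like object (for example, the result of socket.makefile("rb") on the RTSP connection); the function name is illustrative.

    import struct

    def read_interleaved(stream):
        """Read one '$'-framed block as described above.
        Returns (channel, payload), or None at end of stream."""
        magic = stream.read(1)
        if not magic:
            return None
        if magic != b"$":
            raise ValueError("not an interleaved data block")
        header = stream.read(3)                         # 1-byte channel + 2-byte length
        channel, length = struct.unpack("!BH", header)  # network byte order
        payload = stream.read(length)                   # one RTP or RTCP packet
        return channel, payload

In the exchange above, channel 0 carries RTP and channel 1 carries RTCP, matching the interleaved=0-1 range negotiated in the SETUP request.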

Real Time Messaging Protocol

Real Time Messaging Protocol (RTMP) was initially a proprietary protocol developed by Macromedia for streaming audio, video and data over the Internet between a Flash player and a server. Macromedia is now owned by Adobe, which has released an incomplete version of the specification of the protocol for public use.

The RTMP protocol has multiple variations:

The "plain" protocol, which works on top of TCP and uses port number 1935 by default.
RTMPS, which is RTMP over a TLS/SSL connection.
RTMPE, which is RTMP encrypted using Adobe's own security mechanism. While the details of the implementation are proprietary, the mechanism uses industry-standard cryptographic primitives.[1]
RTMPT, which is encapsulated within HTTP requests to traverse firewalls. RTMPT is frequently found utilizing cleartext requests on TCP ports 80 and 443 to bypass most corporate traffic filtering. The encapsulated session may carry plain RTMP, RTMPS, or RTMPE packets within.

While the primary motivation for RTMP was to be a protocol for playing Flash video, it is also used in some other applications, such as Adobe LiveCycle Data Services ES.

Basic operation

RTMP is a TCP-based protocol which maintains persistent connections and allows low-latency communication. To deliver streams smoothly while transmitting as much information as possible, it splits streams into fragments, whose size is negotiated dynamically between the client and server (and sometimes kept unchanged): the default fragment sizes are 64 bytes for audio data and 128 bytes for video data and most other data types. Fragments from different streams may then be interleaved and multiplexed over a single connection. With longer data chunks the protocol carries only a one-byte header per fragment, incurring very little overhead. In practice, however, individual fragments are not typically interleaved. Instead, the interleaving and multiplexing is done at the packet level, with RTMP packets across several different active channels being interleaved in such a way as to ensure that each channel meets its bandwidth, latency, and other quality-of-service requirements. Packets interleaved in this fashion are treated as indivisible and are not interleaved at the fragment level.

RTMP defines several virtual channels on which packets may be sent and received, and which operate independently of each other. For example, there is a channel for handling RPC requests and responses, a channel for video stream data, a channel for audio stream data, a channel for out-of-band control messages (fragment size negotiation, etc.), and so on. During a typical RTMP session, several channels may be active simultaneously at any given time.
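The fragmentation rule can be shown in a few lines. This is a sketch under stated assumptions: it uses the default 128-byte size quoted above and ignores headers entirely (real implementations prepend a full header to the first fragment and a compact continuation header to each subsequent one).

    DEFAULT_CHUNK = 128   # default fragment size for video and most data (64 for audio)

    def fragment(payload: bytes, chunk_size: int = DEFAULT_CHUNK):
        """Split one message payload into fragments of at most chunk_size bytes."""
        return [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]

    print([len(f) for f in fragment(b"\x00" * 300)])   # [128, 128, 44]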
When RTMP data is encoded, a packet header is generated. The packet header specifies, amongst other matters, the ID of the channel on which it is to be sent, a timestamp of when it was generated (if necessary), and the size of the packet's payload. The header is then followed by the actual payload content of the packet, which is fragmented according to the currently agreed-upon fragment size before it is sent over the connection. The packet header itself is never fragmented, and its size does not count towards the data in the packet's first fragment. In other words, only the actual packet payload (the media data) is subject to fragmentation.

At a higher level, RTMP encapsulates MP3 or AAC audio and FLV1 video multimedia streams, and can make remote procedure calls (RPCs) using the Action Message Format. Any RPC services required are made asynchronously, using a single client/server request/response model, so that real-time communication is not required.[2]

Encryption

RTMP sessions may be encrypted using either of two methods:

Using industry-standard TLS/SSL mechanisms. The underlying RTMP session is simply wrapped inside a normal TLS/SSL session.
Using RTMPE, which wraps the RTMP session in a lighter-weight encryption layer.

The TLS/SSL handshake at the beginning of a session is generally understood to be computationally expensive, so Adobe developed RTMPE as a lighter-weight alternative[3] to make it more practical for high-traffic sites to serve encrypted content. Adobe advertises RTMPE as a method for secure content delivery, protecting against client impersonation,[4] but this claim is false: RTMPE only uses[1] anonymous Diffie-Hellman, which provides no verification of either party's identity and is therefore vulnerable to trivial man-in-the-middle attacks at session initialization.

HTTP tunneling

In RTMP Tunneled (RTMPT), RTMP data is encapsulated and exchanged via HTTP, and messages from the client (the media player, in this case) are addressed to port 80 (the default for HTTP) on the server. While the messages in RTMPT are larger than the equivalent non-tunneled RTMP messages because of the HTTP headers, RTMPT may facilitate the use of RTMP in scenarios where non-tunneled RTMP would otherwise not be possible, such as when the client is behind a firewall that blocks non-HTTP and non-HTTPS outbound traffic. The protocol works by sending commands through the POST URL and AMF messages through the POST body. An example is POST /open/1 HTTP/1.1 for a connection to be opened.
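As an illustration, the sketch below issues that initial open command over plain HTTP. Only the POST /open/1 path comes from the text above; the host, the application/x-fcs content type, and the idea that the response body carries a session identifier for later tunnel requests are assumptions drawn from common RTMPT descriptions, not from Adobe's specification.

    import http.client

    # Open an RTMPT tunnel session (sketch; host and content type are assumptions).
    conn = http.client.HTTPConnection("media.example.com", 80, timeout=5)
    conn.request("POST", "/open/1", body=b"",
                 headers={"Content-Type": "application/x-fcs"})
    resp = conn.getresponse()

    # Assumption: the response body carries a session identifier that subsequent
    # tunnel requests embed in their URLs.
    session_id = resp.read().strip()
    print(resp.status, session_id)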
Specification document

Adobe released what it claimed was the RTMP specification on 15 June 2009. That specification, however, omits crucial details of the protocol's implementation. It would be impossible to write a program incorporating RTMP based on the released specification alone; many essential details are omitted, and only limited additional facts can be determined by studying other implementations that use the protocol (such as librtmp) and by carrying out test TCP/IP packet captures. The Adobe license to use this protocol requires that implementations of RTMP servers meet the specification. Details missing from Adobe's published specification include:

The real RTMP handshake is not described. If done incorrectly, a server implementation is unable to deliver H.264/AAC content; the Flash player silently fails on H.264 content if the handshake is wrong. Client implementations, however, will generally still work, because RTMP servers (including FMS) are usually more permissive in this regard.
The fact that chunks are sent up to a maximum chunk size only; where a chunk exceeds that size it is still sent, with a header giving the total chunk size, but after the maximum chunk size has been exceeded, a type 4 chunk header is sent, starting the next part of the fragmented chunk.
Explanations of some control messages for streams (31 and 32), which FMS sends from time to time.

Packet structure

[Figure: RTMP packet diagram, showing the Basic Header detached from the Chunk Message Header]

Packets are sent over a TCP connection which is established first between client and server. They contain a header and a body which, in the case of connection and control commands, is encoded using the Action Message Format (AMF). The header is split into the Basic Header (shown detached from the rest in the diagram) and the Chunk Message Header. The Basic Header is the only constant part of the packet and is usually composed of a single composite byte, in which the two most significant bits are the Chunk Type (fmt in the specification) and the remaining bits form the Stream ID. Depending on the value of the former, some fields of the Message Header can be omitted and their values derived from previous packets, while depending on the value of the latter, the Basic Header can be extended with one or two extra bytes (as in the diagram, which shows a three-byte Basic Header). If the value of the remaining six (least significant) bits of the Basic Header is 0, the Basic Header is two bytes long and represents Stream IDs 64 to 319 (64 + 255); if the value is 1, the Basic Header is three bytes long (with the last two bytes encoded as a 16-bit little-endian integer) and represents Stream IDs 64 to 65599 (64 + 65535); if the value is 2, the Basic Header is a single byte and is reserved for low-level protocol control messages and commands. The Chunk Message Header contains metadata such as the message size (measured in bytes), the Timestamp Delta and the Message Type. The last of these is a single byte and defines whether the packet is an audio, video, command or "low-level" RTMP packet such as an RTMP Ping.
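The Basic Header rules just described decode mechanically. The following sketch follows the 0/1/2 discriminator values and the little-endian rule from the text; the function name is illustrative.

    def parse_basic_header(data: bytes):
        """Decode an RTMP Basic Header; returns (fmt, stream_id, size_in_bytes)."""
        fmt = data[0] >> 6       # two most significant bits: chunk type
        low6 = data[0] & 0x3F    # remaining six bits
        if low6 == 0:            # two-byte form: Stream IDs 64..319
            return fmt, 64 + data[1], 2
        if low6 == 1:            # three-byte form: 64..65599, 16-bit little-endian
            return fmt, 64 + data[1] + (data[2] << 8), 3
        return fmt, low6, 1      # one-byte form (the value 2 itself is reserved
                                 # for low-level control messages)

    print(parse_basic_header(bytes([0x03])))              # (0, 3, 1)
    print(parse_basic_header(bytes([0x00, 0x05])))        # (0, 69, 2)
    print(parse_basic_header(bytes([0x41, 0x05, 0x01])))  # (1, 325, 3)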

Real Time Transport Protocol

The Real-time Transport Protocol (RTP) defines a standardized packet format for delivering audio and video over IP networks. RTP is used extensively in communication and entertainment systems that involve streaming media, such as telephony, video teleconference applications, television services and web-based push-to-talk features.

RTP is used in conjunction with the RTP Control Protocol (RTCP). While RTP carries the media streams (e.g., audio and video), RTCP is used to monitor transmission statistics and quality of service (QoS) and aids synchronization of multiple streams. RTP is one of the technical foundations of Voice over IP and in this context is often used in conjunction with a signaling protocol which assists in setting up connections across the network. RTP is originated and received on even port numbers, and the associated RTCP communication uses the next higher odd port number. RTP was developed by the Audio-Video Transport Working Group of the Internet Engineering Task Force (IETF) and first published in 1996 as RFC 1889, which was superseded by RFC 3550 in 2003.

Overview

RTP is designed for end-to-end, real-time transfer of stream data. The protocol provides facilities for jitter compensation and detection of out-of-sequence arrival of data, both of which are common during transmission on an IP network. RTP allows data transfer to multiple destinations through IP multicast.[1] RTP is regarded as the primary standard for audio/video transport in IP networks and is used with an associated profile and payload format.[2]

Real-time multimedia streaming applications require timely delivery of information and can tolerate some packet loss to achieve this goal. For example, loss of a packet in an audio application may result in loss of a fraction of a second of audio data, which can be made unnoticeable with suitable error-concealment algorithms.[3] The Transmission Control Protocol (TCP), although standardized for RTP use,[4] is not normally used in RTP applications because TCP favors reliability over timeliness. Instead, the majority of RTP implementations are built on the User Datagram Protocol (UDP).[3] Other transport protocols specifically designed for multimedia sessions are SCTP[5] and DCCP, although, as of 2010, they were not in widespread use.[6]

RTP was developed by the Audio/Video Transport working group of the IETF standards organization. RTP is used in conjunction with other protocols such as H.323 and RTSP.[2] The RTP standard defines a pair of protocols, RTP and RTCP. RTP is used for the transfer of multimedia data, and RTCP is used to periodically send control information and QoS parameters.[7]

Protocol components

The RTP specification describes two sub-protocols:
The data transfer protocol, RTP, which deals with the transfer of real-time data. The information provided by this protocol includes timestamps (for synchronization), sequence numbers (for packet loss and reordering detection) and the payload format, which indicates the encoded format of the data.[8]
The control protocol, RTCP, which is used to specify quality-of-service (QoS) feedback and synchronization between the media streams. The bandwidth of RTCP traffic compared to RTP is small, typically around 5%.[8][9]

In addition, RTP may be used together with:

An optional signaling protocol such as H.323, the Session Initiation Protocol (SIP), or Jingle (XMPP)
An optional media description protocol such as the Session Description Protocol

Sessions

An RTP session is established for each multimedia stream. A session consists of an IP address with a pair of ports for RTP and RTCP. For example, audio and video streams will have separate RTP sessions, enabling a receiver to deselect a particular stream.[10] The ports which form a session are negotiated using other protocols such as RTSP (using SDP in the setup method)[11] and SIP. According to the specification, an RTP port should be even, and the RTCP port is the next higher odd port number. RTP and RTCP typically use unprivileged UDP ports (1024 to 65535),[12] but may use other transport protocols (most notably SCTP and DCCP) as well, as the protocol design is transport independent.

Profiles and payload formats

See also: RTP audio video profile

One of the design considerations of RTP was to carry a range of multimedia formats (such as H.264, MPEG-4, MJPEG, MPEG, etc.) and allow new formats to be added without revising the RTP standard. The design of RTP is based on the architectural principle known as application-level framing (ALF). The information required by a specific application's needs is not included in the generic RTP header, but is instead provided through RTP profiles and payload formats.[7] For each class of application (e.g., audio, video), RTP defines a profile and one or more associated payload formats.[7] A complete specification of RTP for a particular application usage requires a profile and payload format specification(s).[13]:71

The profile defines the codecs used to encode the payload data and their mapping to payload format codes in the Payload Type (PT) field of the RTP header (see below). Each profile is accompanied by several payload format specifications, each of which describes the transport of particular encoded data.[2] The audio payload formats include G.711, G.723, G.726, G.729, GSM, QCELP, MP3, and DTMF, and the video payload formats include H.261, H.263,[14] H.264, and MPEG-4.[14][15]

Examples of RTP profiles include:

The RTP profile for audio and video conferences with minimal control (RFC 3551), which defines a set of static payload type assignments and a mechanism for mapping between a payload format and a payload type identifier (in the header) using the Session Description Protocol (SDP).
The Secure Real-time Transport Protocol (SRTP) (RFC 3711), which defines a profile of RTP that provides cryptographic services for the transfer of payload data.[16]
The experimental Control Data Profile for RTP (RTP/CDP)[17] for machine-to-machine communications.

Packet header

RTP packet header:

    bit offset  0-1      2  3  4-7  8  9-15  16-31
    0           Version  P  X  CC   M  PT    Sequence number
    32          Timestamp
    64          SSRC identifier
    96          CSRC identifiers ...
    96+32×CC    Profile-specific extension header ID | Extension header length
    128+32×CC   Extension header ...
The RTP header has a minimum size of 12 bytes. After the header, optional header extensions may be present, followed by the RTP payload, the format of which is determined by the particular class of application.[18] The fields in the header are as follows:

Version: (2 bits) Indicates the version of the protocol. The current version is 2.[19]
P (Padding): (1 bit) Indicates whether there are extra padding bytes at the end of the RTP packet. Padding might be used to fill up a block of a certain size, for example as required by an encryption algorithm. The last byte of the padding contains the number of padding bytes that were added (including itself).[13]:12[19]
X (Extension): (1 bit) Indicates the presence of an extension header between the standard header and payload data. Its use is application or profile specific.[19]
CC (CSRC count): (4 bits) Contains the number of CSRC identifiers (defined below) that follow the fixed header.[13]:12
M (Marker): (1 bit) Used at the application level and defined by a profile. If it is set, the current data has some special relevance for the application.[13]:13
PT (Payload type): (7 bits) Indicates the format of the payload and determines its interpretation by the application. Values are specified by an RTP profile; see, for example, the RTP profile for audio and video conferences with minimal control (RFC 3551).[20]
Sequence number: (16 bits) Incremented by one for each RTP data packet sent; used by the receiver to detect packet loss and to restore packet sequence. RTP does not specify any action on packet loss; it is left to the application to take appropriate action. For example, video applications may play the last known frame in place of a missing frame.[21] According to RFC 3550, the initial value of the sequence number should be random to make known-plaintext attacks on encryption more difficult.[13]:13 RTP provides no guarantee of delivery, but the presence of sequence numbers makes it possible to detect missing packets.[1]
Timestamp: (32 bits) Used by the receiver to play back the received samples at appropriate intervals. When several media streams are present, the timestamps are independent in each stream and may not be relied upon for media synchronization. The granularity of the timing is application specific. For example, an audio application that samples data once every 125 µs (8 kHz, a common sample rate in digital telephony) could use that value as its clock resolution. The clock granularity is one of the details specified in the RTP profile for an application.[21]
SSRC: (32 bits) Synchronization source identifier; uniquely identifies the source of a stream. The synchronization sources within the same RTP session will be unique.[13]:15
CSRC: (32 bits each) Contributing source identifiers enumerate the contributing sources to a stream that has been generated from multiple sources.[13]:15
Extension header: (optional) The first 32-bit word contains a profile-specific identifier (16 bits) and a length specifier (16 bits) that indicates the length of the extension (EHL = extension header length) in 32-bit units, excluding the 32 bits of the extension header itself.[13]:17
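The fixed 12-byte header lends itself to direct bit-field parsing. The following is a minimal sketch of the layout above using only the standard library; extension and padding handling are omitted.

    import struct

    def parse_rtp_header(packet: bytes) -> dict:
        """Parse the 12-byte fixed RTP header into its fields."""
        if len(packet) < 12:
            raise ValueError("RTP header is at least 12 bytes")
        b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
        return {
            "version": b0 >> 6,            # should be 2
            "padding": bool(b0 & 0x20),    # P bit
            "extension": bool(b0 & 0x10),  # X bit
            "cc": b0 & 0x0F,               # CSRC count
            "marker": bool(b1 & 0x80),     # M bit
            "payload_type": b1 & 0x7F,     # PT, 7 bits
            "sequence": seq,
            "timestamp": ts,
            "ssrc": ssrc,
        }

    # Example packet: version 2, dynamic payload type 96, sequence 1.
    header = struct.pack("!BBHII", 0x80, 96, 1, 3000, 0x1234)
    print(parse_rtp_header(header))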
RTP-based systems

A complete network-based system includes other protocols and standards in conjunction with RTP. Protocols such as SIP, Jingle, RTSP, H.225 and H.245 are used for session initiation, control and termination. Other standards, such as H.264, MPEG and H.263, are used to encode the payload data as specified via the RTP profile.[22]

An RTP sender captures the multimedia data, then encodes, frames and transmits it as RTP packets with appropriate timestamps and increasing sequence numbers. Depending on the RTP profile in use, the sender may set the Payload Type field. The RTP receiver captures the RTP packets, detects missing packets, and may reorder packets. It decodes the frames according to the payload format and presents the stream to its user.[22]

HTTP Live Streaming

HTTP Live Streaming (also known as HLS) is an HTTP-based media streaming communications protocol implemented by Apple Inc. as part of its QuickTime and iOS software. It works by breaking the overall stream into a sequence of small HTTP-based file downloads, each download loading one short chunk of an overall, potentially unbounded transport stream. As a result, a client media player (such as iTVmediaPlayer or VLC media player) can begin playing the data (such as a movie) before the entire file has been transmitted. As the stream is played, the client may select from a number of different alternate streams containing the same material encoded at a variety of data rates, allowing the streaming session to adapt to the available data rate. At the start of the streaming session, the client downloads an extended M3U (m3u8) playlist containing the metadata for the various sub-streams which are available.[1]

Since its requests use only standard HTTP transactions, HTTP Live Streaming is capable of traversing any firewall or proxy server that lets through standard HTTP traffic, unlike UDP-based protocols such as RTP. This also allows content to be delivered over widely available CDNs.

HLS also specifies a standard encryption mechanism[2] using AES and a method of secure key distribution using HTTPS with either a device-specific realm login or an HTTP cookie, which together provide a simple DRM system. Later versions of the protocol also provide for trick-mode fast-forward and rewind and for the integration of subtitles. upLynk has added AES scrambling and base-64 encoding of the DRM content key with a 128-bit device-specific key for registered commercial devices, together with a sequential initialization vector for each chunk, to its implementation of the standard.[3]

Apple has documented HTTP Live Streaming as an Internet Draft (Individual Submission), the first stage in the process of submitting it to the IETF as an Informational Request for Comments. However, while Apple has submitted occasional minor updates to the draft, no additional steps appear to have been taken towards IETF standardization.[4]
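The playlist format is plain text and easy to inspect. The following sketch extracts segment durations and URIs from a simple media playlist; the playlist text is a hypothetical example of the m3u8 format described above, and only the basic #EXTINF/URI pairing is assumed.

    PLAYLIST = """#EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:10
    #EXTINF:9.009,
    segment0.ts
    #EXTINF:9.009,
    segment1.ts
    #EXT-X-ENDLIST
    """

    def parse_media_playlist(text: str):
        """Collect (duration_seconds, URI) pairs from a media playlist (sketch)."""
        segments, duration = [], None
        for line in text.splitlines():
            line = line.strip()
            if line.startswith("#EXTINF:"):
                duration = float(line[len("#EXTINF:"):].split(",")[0])
            elif line and not line.startswith("#"):
                segments.append((duration, line))
        return segments

    print(parse_media_playlist(PLAYLIST))
    # [(9.009, 'segment0.ts'), (9.009, 'segment1.ts')]

A real client would fetch such a playlist over HTTP, download each URI in order and, for a live stream, periodically re-fetch the playlist to discover new segments.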
Server implementations

Adobe Media Server supports HLS for iOS devices (HLS) and Protected HTTP Live Streaming (PHLS).
Akamai supports HLS for live and on-demand streams.
Cisco Systems supports full end-to-end delivery for live/TSTV/VOD and cloud DVR services.
EdgeCast Networks supports cross-device streaming using HLS.
Helix Universal Server from RealNetworks supports iPhone OS 3.0 and later for live and on-demand HTTP Live Streaming of H.264 and AAC content to iPhone, iPad and iPod.
IIS Media Services from Microsoft supports live and on-demand Smooth Streaming and HTTP Live Streaming.
InstaTV Server supports HTTP Live Streaming of ATSC/ClearQAM HDTV from a Windows PC with any TV tuner card to iPhone, iPod, and iPad.
Level 3 supports HLS live and on-demand streams.
Limelight Networks supports HLS for some accounts.[5]
MistServer supports HLS in live, on-demand and live replay modes.
Nginx with the RTMP module supports HLS in live mode.
TVersity supports HLS in conjunction with on-the-fly transcoding for playback of any video content on iOS devices.
Unreal Media Server supports low-latency HLS as of version 9.5.
VBrick Distributed Media Engine supports serving live and on-demand HLS.
VLC media player supports serving live and on-demand HLS streams as of version 2.0.[6]
VODOBOX Live Server supports HLS for iPhone, iPad, iPod, Google Android devices (Honeycomb 3.0 and above) and Adobe Flash Player with an HLS plugin.
Wowza Media Server from Wowza Media Systems supports HLS for live and on-demand streaming.

Usage

Adobe Systems demonstrated an update to its Adobe Flash Media Server product supporting HTTP Live Streaming at the NAB Show in April 2011.
Apple Inc. used HLS on September 1, 2010 to stream its iPod keynote event live over the Internet, and on October 20, 2010 to stream its 'Back to the Mac' keynote event live over the Internet.
Google added HTTP Live Streaming support in Android Honeycomb and later.[7]
Helix Universal Server from RealNetworks supports iPhone OS 3.0 and later for live and on-demand HTTP Live Streaming of H.264 and AAC content to iPhone, iPad and iPod (initial release April 2010, latest release November 2012).
HLSProvider has provided HTTP Live Streaming support for the Chromeless Flash Player, JW Player, and OSMF 2.0 since May 2013.[8]
HP added HTTP Live Streaming support in webOS 3.0.5.[9]
Livestation streams numerous TV channels, such as France 24, RT, and Al Jazeera English.[1]
Microsoft added support for HTTP Live Streaming in IIS Media Services 4.0.[10]
Flussonic added HTTP Live Streaming and video-on-demand support on January 21, 2009.
Onlinelib added HTTP Live Streaming support in its HLS Player and SDK for Flash version 2.0, with plugins for JW Player, OSMF 2.0 and Adobe Strobe Media Playback.[11]
Wowza Media Systems released Wowza Media Server 2.0 with full support for HTTP Live Streaming on December 9, 2009.[12]
Yospace added HTTP Live Streaming support in the Yospace HLS Player and SDK for Flash version 1.0.[13]