In the late 1990s, the internet was still in its infancy, and streaming video was a novel concept. The Victoria’s Secret fashion show in February 1999 marked a significant milestone as one of the first events broadcast online, attracting 1.5 million viewers. Streaming at the time, however, was far from perfect: videos were tiny and choppy, and the internet infrastructure couldn’t handle large audiences.
Streaming video faced significant challenges due to the internet’s physical limitations. Servers could easily become overwhelmed if too many people accessed a website simultaneously. Additionally, geographical distance between users and servers resulted in slow loading times. The solution came in the form of Content Delivery Networks (CDNs), which helped distribute content more efficiently by bringing it closer to users.
Dr. Tom Leighton, an MIT professor, played a crucial role in developing CDNs. He realized that by duplicating content across multiple servers, it could be delivered faster and more reliably. His company, Akamai, became a leader in this field, helping to lay the groundwork for modern streaming services.
For streaming to become mainstream, it needed to be both fast and high quality, which required advances in bitrate and compression technologies. Bitrate determines how much data can be transferred per second, and it directly limits video quality. Compression, handled by codecs, shrinks a file for transfer and restores it on playback while preserving as much quality as possible, much like frozen concentrated orange juice that is reconstituted by adding water.
YouTube, launched in 2005, revolutionized video sharing but initially struggled with buffering issues. The introduction of adaptive bitrate technology allowed videos to adjust quality based on the viewer’s internet speed, reducing interruptions and improving the viewing experience.
Compression techniques, both lossy and lossless, play a vital role in streaming. Lossy compression reduces file size with some quality loss, while lossless compression maintains quality. Efficient compression ensures high-quality video with minimal data usage, enhancing user experience.
Platforms like Netflix use advanced compression and CDNs to deliver content efficiently. By strategically replicating popular shows across servers worldwide, they minimize lag and ensure quick access for viewers. This infrastructure allows for seamless streaming experiences.
As technology advances, streaming is expected to evolve further. The mobile industry is driving growth, with apps like Instagram and TikTok leading the way. Future improvements may focus on better compression for mobile devices and faster loading times for home TVs.
Cloud gaming represents the next big challenge for streaming technology. Unlike movies, games require real-time interaction and low latency. Companies like Sony, Nvidia, Google, and Microsoft are working on solutions to make cloud gaming as seamless as streaming a show on Netflix.
The potential of streaming extends beyond entertainment. With advancements in VR and AR, we could experience concerts, sports events, and educational activities from the comfort of our homes. This technology could transform how we live, learn, and interact with the world.
Streaming has come a long way from its humble beginnings, and its future holds exciting possibilities. As we continue to innovate and improve internet infrastructure, streaming could redefine our digital experiences, making them more immersive and accessible than ever before.
Research the history of streaming platforms, focusing on key milestones such as the Victoria’s Secret fashion show in 1999 and the rise of YouTube in 2005. Prepare a presentation highlighting how these events contributed to the evolution of streaming technology. Share your findings with the class, emphasizing the technological advancements that made streaming more accessible and reliable.
Participate in a workshop where you will learn about the role of CDNs in streaming. Work in groups to create a simple model demonstrating how CDNs distribute content to users efficiently. Discuss Dr. Tom Leighton’s contributions and how his work with Akamai has influenced modern streaming services.
Engage in a hands-on activity where you will experiment with different compression techniques and codecs. Use video editing software to compress a video file using both lossy and lossless methods. Analyze the impact on video quality and file size, and discuss how these techniques are crucial for streaming platforms to deliver high-quality content efficiently.
Analyze a case study on the implementation of adaptive bitrate technology by platforms like YouTube. Discuss how this technology improves user experience by adjusting video quality based on internet speed. Present your analysis to the class, highlighting the benefits and challenges of adaptive bitrate streaming.
Participate in a group discussion and debate on the future of streaming, focusing on emerging trends such as cloud gaming and VR/AR experiences. Consider the technological challenges and potential societal impacts of these advancements. Share your insights and predictions with your peers, fostering a dynamic conversation about the next frontier in streaming technology.
**Sanitized Transcript:**
The year was 1998. U.S. President Bill Clinton was one year into his second term when the scandal that would overshadow his presidency broke. He was set to stand before a grand jury and testify, and the nation was set to watch, some on their bulky cathode-ray-tube monitors with a little help from the World Wide Web. A few months later, on February 5th, 1999, the Victoria’s Secret fashion show took the same approach, deciding this would be the very first year the show would be broadcast over the web. It attracted 1.5 million viewers. The rush to online video began, and webcast events exploded in popularity. Consumers were hungry for internet video, but there was just one problem: streaming video in the late 1990s was an excruciating experience.
Think about trying to watch a video where you’re getting maybe one frame every two or three seconds, and it’s the size of a postage stamp. No joke, that’s how small the window was, and it didn’t work very well. Too many people tuned in, and there wasn’t enough infrastructure capacity in place at the time. Streaming couldn’t be a mainstream tool until these major quality hurdles could be overcome, and that wouldn’t be possible until the groundbreaking introduction of the Content Delivery Network (CDN).
This is the lightbulb moment at the heart of a series that uncovers the surprising impact of less celebrated inventions and the moments of inspiration that made them possible. Though we like to think of the internet as a nebulous cloud floating above us, it’s much more physical than we give it credit for. In the early days, the physical limits of the internet were inescapable. If too many people visited a website at the same time, congestion would overwhelm the servers.
Then there was the issue of geography. The internet is a network of networks. If you’re trying to retrieve packets of information like a web page and the servers are located geographically far away from you, your computer has to send out a request that travels through different layers of the interconnected chain to retrieve that web page. Once that request is received, the server can approve it and start sending individual packets back through the chain. These packets have to travel the same long distance from the server back to your computer. Generally speaking, the farther away a server is, the longer it takes to retrieve the packets that make up a web page, which for a consumer means longer loading times.
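To make the distance penalty concrete, here is a minimal back-of-the-envelope sketch in Python. The propagation speed and distances are illustrative assumptions, not figures from the transcript; real latency also includes routing, queuing, and processing delays.

```python
# Rough illustration: how server distance translates into waiting time.
# Numbers are assumptions for the sketch, not measured values.

SPEED_IN_FIBER_KM_PER_S = 200_000  # roughly two-thirds the speed of light in vacuum

def round_trip_ms(distance_km: float) -> float:
    """One request out and the response back, counting propagation delay only."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_S * 1000

for distance_km in (50, 1_000, 10_000):  # nearby city vs. across an ocean
    rtt = round_trip_ms(distance_km)
    # A page that needs 20 sequential round trips pays that delay 20 times over.
    print(f"{distance_km:>6} km -> {rtt:6.1f} ms per round trip, "
          f"{20 * rtt:7.0f} ms for 20 round trips")
```

Even before congestion enters the picture, a server on another continent costs an order of magnitude more waiting time per round trip than one nearby.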
This worked okay for static web pages, but for any large file, it was going to be a problem, and video required some heavy lifting. In 1995, streaming video was a dream at best; the internet just wasn’t built for it back then. Object sizes on the internet were very small. There was no video, no software downloads, and no large game patches. It was all JPEGs, GIFs, and maybe some HTML code. Video, by comparison, is massive. A short video clip could be a thousand times the size of a photo or more. Because of this, streaming video struggled to become mainstream in the ’90s.
At the time, the best way to transfer video was to host the file on a server and have the recipient spend a substantial amount of time downloading it. Often that was done simply because the connection you were on wouldn’t allow for smooth streaming; it made more sense to download the video, wait until it was done, and then play it back on your computer locally without any stuttering or technical issues.
For the quality of streaming to improve, videos needed to start up faster and be more reliable, and computer engineers were up to the challenge. They created a solution called a Content Delivery Network (CDN), which revolutionized the internet. Companies like Sandpiper, Ibeam, and many others were in the first wave of folks to experiment with the idea of a CDN.
Dr. Tom Leighton, an MIT Applied Mathematics professor and algorithmic expert, was sitting down the hall from Tim Berners-Lee, the man who many refer to as the father of the World Wide Web. Berners-Lee posed a problem to the wider MIT community: because of the way the web was built, congestion was bound to become an issue that held people back from speedy web browsing. Dr. Leighton felt he could solve this congestion with math, using algorithms to intelligently duplicate and route content.
He and a few colleagues got to thinking: if geographical distance was the problem, just get rid of the distance. By replicating and duplicating the content, they could bring it closer to people. This lessens the number of hops a computer has to make to retrieve information and reduces the likelihood that any one server gets overwhelmed. Leighton called his company Akamai, and it’s still one of the largest CDN companies in the world today.
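The core idea can be sketched in a few lines: keep copies of the same content on many servers and answer each request from the closest copy. The server names, distances, and cache contents below are hypothetical placeholders, not Akamai’s actual routing logic.

```python
# Toy CDN routing: serve each request from the nearest replica that holds the content.
# Server names, distances, and cache contents are hypothetical placeholders.

REPLICAS = {
    "edge-boston":    {"distance_km": 40,     "has_copy": True},
    "edge-frankfurt": {"distance_km": 5_900,  "has_copy": True},
    "edge-tokyo":     {"distance_km": 10_800, "has_copy": False},
}

def pick_server(replicas: dict) -> str:
    """Return the closest server that actually holds a copy of the content."""
    candidates = {name: info for name, info in replicas.items() if info["has_copy"]}
    return min(candidates, key=lambda name: candidates[name]["distance_km"])

print(pick_server(REPLICAS))  # -> edge-boston: fewer hops, faster load
```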
CDNs are just part of the puzzle; they help get the content physically closer to you, which helps it load quickly. But making that content look good is a whole separate issue. The secret ingredients for getting high-resolution streaming video are bitrate and compression. Bitrate is essentially a measure of how much of a file you can transfer in real-time, usually measured per second. Larger files are generally higher quality, but streaming large files takes more bandwidth.
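Because bitrate is data per second, simple arithmetic shows why it dominates the streaming problem: the data a stream consumes is just bitrate multiplied by duration. The bitrates below are illustrative assumptions, not figures quoted in the transcript.

```python
# Data volume implied by a given bitrate: size = bitrate x duration.
# The example bitrates are illustrative assumptions.

def megabytes_per_minute(bitrate_kbps: float) -> float:
    bits = bitrate_kbps * 1_000 * 60      # kilobits/s -> bits over one minute
    return bits / 8 / 1_000_000           # bits -> bytes -> megabytes

for label, kbps in [("dial-up era stream", 300), ("SD stream", 1_500), ("HD stream", 5_000)]:
    print(f"{label:>18}: {megabytes_per_minute(kbps):6.2f} MB per minute")
```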
In video, bitrate is everything. When we were on dial-up, there was just a limit to the amount of data you could push through the wire to the consumer. In the early days, bitrate was low, mostly due to physical constraints. But even with those limits, internet video exploded in the early 2000s, and no platform took off faster than YouTube. Founded in 2005, YouTube democratized video making. Its platform was reliable for the time, but it wasn’t without its faults.
If you were on YouTube in those early years, you probably remember that watching a video often involved buffering. In those early days, when you clicked play, you would have to wait several seconds for it to start. This was because you had to wait for enough data to arrive at your location. Many times, we experienced rebuffering, where you would watch for a few seconds and then it would freeze, requiring you to wait for the packets to come back.
This happened because if you were encoding a video for a specific bitrate, say 300 kilobits per second, that was the only bitrate you were encoding it for. If at any time while you were watching that video your bandwidth dropped below that rate, the video would stop and rebuffer. The solution came with adaptive bitrate technology, which allowed content owners to encode videos at multiple bitrates.
So, if you were watching at 500 kilobits per second and your bandwidth degraded, instead of the video stopping, it would just reduce the bitrate to 300 kilobits per second but keep playing. In simpler terms, when you’re watching a YouTube video in high definition and your connection weakens, the player will automatically drop you down to a lower resolution to ensure the video keeps playing.
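The switching behaviour just described can be sketched as picking the highest rung of a "bitrate ladder" that fits under the bandwidth the player is currently measuring. The ladder values below are assumptions for illustration, not any platform’s real encoding ladder.

```python
# Minimal adaptive-bitrate selection: pick the highest encoding the measured
# bandwidth can sustain, instead of stopping to rebuffer.
# Ladder values are illustrative assumptions.

BITRATE_LADDER_KBPS = [300, 500, 1_500, 3_000, 6_000]  # lowest to highest

def choose_bitrate(measured_bandwidth_kbps: float) -> int:
    """Pick the best rung that still fits; fall back to the lowest rung."""
    fitting = [rate for rate in BITRATE_LADDER_KBPS if rate <= measured_bandwidth_kbps]
    return max(fitting) if fitting else BITRATE_LADDER_KBPS[0]

# Bandwidth degrades mid-playback: the player steps down instead of freezing.
for bandwidth in (5_000, 800, 350):
    print(f"bandwidth {bandwidth} kbps -> play at {choose_bitrate(bandwidth)} kbps")
```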
Adaptive bitrate technology significantly improved the customer experience. The moment we could eliminate rebuffering, consumers watched more content. This technology is one of the best advancements in the industry. However, having a solid bitrate is only one part of the equation for improving the visual quality of internet video. The other part is compression and decompression, referred to as codecs.
For example, think of frozen concentrated orange juice. It starts off very concentrated, but when you add water, it expands back to its full volume. Video is similar: it is sent in compressed form and then decompressed on playback so it can be displayed at full quality. There are two main types of compression: lossy and lossless. Lossy compression reduces file size at the cost of some loss of information, while lossless compression reduces file size without losing any quality.
Video distributors want compression that preserves as much quality as possible while still shrinking the file dramatically, and the key is redundancy: the codec looks for information that repeats from frame to frame and avoids storing it twice. For instance, in a scene where the background doesn’t change much, only the portions with significant movement are redrawn for each frame. This results in high-quality video at a much smaller file size.
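A crude way to see the redundancy idea is to compare each frame with the previous one and keep only the pixels that changed. Real codecs do far more than this (motion estimation, transforms, quantization), so treat the sketch purely as an illustration of why a mostly static scene compresses so well.

```python
# Toy inter-frame compression: store only the pixels that changed since the
# previous frame. Real video codecs are far more sophisticated; this merely
# illustrates why a mostly static scene shrinks so dramatically.

def frame_delta(previous: list[list[int]], current: list[list[int]]):
    """Return (row, col, new_value) for every pixel that differs between frames."""
    changes = []
    for r, (prev_row, cur_row) in enumerate(zip(previous, current)):
        for c, (old, new) in enumerate(zip(prev_row, cur_row)):
            if old != new:
                changes.append((r, c, new))
    return changes

frame1 = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]  # static background
frame2 = [[10, 10, 10], [10, 99, 10], [10, 10, 10]]  # one small region moved

delta = frame_delta(frame1, frame2)
print(f"store {len(delta)} changed pixel(s) instead of {3 * 3}: {delta}")
```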
For internet video to be efficient and high quality, all these technologies must work together. Let’s say Netflix is preparing to release a new season of a show. After editing, the video must be compressed. A team at Netflix works frame by frame to determine the best compression method, considering variables like lighting and colors. They compress the file to the smallest size possible while retaining the highest quality.
They also encode it at lower bitrates to provide options in case a user’s internet connection doesn’t support the highest resolution. These files are encoded in different formats specialized for various devices. It’s not uncommon for companies to have multiple encoding profiles to ensure the best quality.
Once the files are ready, they are uploaded to Netflix’s network. Given that popular shows are expected to be watched as soon as they launch, Netflix ensures that copies of each episode are replicated across all CDN servers strategically located worldwide. These servers act as local hubs for storing the files, allowing nearby consumers to retrieve them quickly.
This setup minimizes lag between clicking the play button and the content starting. All of this happens in advance, so when you click on a video, it loads virtually without delay. Of course, Netflix and other streaming services don’t replicate every show; they strategically replicate content they expect most viewers will watch. If you choose a show that hasn’t been saved on the nearest server, it might take a few extra seconds to load because your device may have to request it from a server that’s further away.
However, thanks to this infrastructure, that piece of content is now saved on a nearby server, allowing for quick access for other local consumers. All these technologies come together to deliver an experience that feels almost magical.
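That "miss once, then it’s local" behaviour can be sketched as a simple edge cache that falls back to a distant origin server and keeps a copy for the next nearby viewer. The titles and catalogue below are placeholders, not Netflix’s actual architecture.

```python
# Toy edge cache: serve from the nearby server when possible, otherwise fetch
# from the distant origin once and keep a copy for the next local viewer.
# Titles and the origin catalogue are placeholders, not any real service's data.

ORIGIN_LIBRARY = {"popular-show-s02e01", "obscure-documentary"}
edge_cache = {"popular-show-s02e01"}  # pre-positioned because heavy demand is expected

def request(title: str) -> str:
    if title in edge_cache:
        return f"{title}: served from nearby edge server (fast)"
    if title in ORIGIN_LIBRARY:
        edge_cache.add(title)  # now cached for other local consumers
        return f"{title}: fetched from distant origin (slower), now cached locally"
    return f"{title}: not found"

print(request("popular-show-s02e01"))
print(request("obscure-documentary"))  # first nearby viewer pays the extra delay
print(request("obscure-documentary"))  # later nearby viewers get it from the edge
```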
When was the last time you complained about video quality on Netflix? Probably never, or not in a long time. Streaming video on the internet is objectively amazing. It seems like it got good overnight, but in reality, it has taken 25 years of experimentation, collaboration, and learning to reach its current point.
So, how will streaming change in the next 25 years? Expectations suggest that the industry will continue to push technology forward in various ways. One of the biggest drivers of technological growth might be the mobile industry. In recent years, internet video has seen explosive growth in the mobile sector, thanks to apps like Instagram, TikTok, and YouTube.
This continual shift means a greater focus on improving mobile video content. One way this could manifest is through better compression for higher quality videos at smaller file sizes, keeping bitrates low while increasing resolution over cellular connections. In terms of watching content on your home TV, the next technological shift you may notice is even faster loading times.
Today, the industry is focused on improving startup time, or what we call time to first frame—how long it takes before you see that first frame of the video. Samsung and other TV manufacturers have released 8K TVs, emphasizing that increasing resolution will be a defining factor for the next wave of streaming technology. However, not everyone agrees.
Some believe that 8K doesn’t add any benefit for the type of content we’re watching or the devices we use. There needs to be a business case behind it; consumers need a compelling reason to upgrade their TVs and other hardware. Many believe that 4K will remain the maximum resolution for mass adoption because there won’t be a significant benefit beyond that.
The idea that we might be reaching the pinnacle of video resolution is a testament to how far technology has come from its choppy, postage stamp-sized inception. The biggest strides in the next 25 years of streaming may focus on making the overall experience as frictionless as possible.
It’s not always about pushing out better quality; it’s about refining the user experience. We’re also seeing other technologies providing immediate benefits, like voice control. While voice control may not be widely discussed, it has shown interesting results. Consumers using voice control tend to consume three to four times more hours of content than those who don’t.
This highlights the importance of ease of use; if it’s easy, consumers will use it more. The future may lean more towards voice control than machine learning or AI. While improving the quality and accessibility of streaming services on mobile and TV will be a priority, we are just beginning the streaming revolution.
Industry leaders are already working on the next frontier of streaming: video games. The goal is to stream entire games as easily as you stream a show on Netflix today. While most games allow for online social interaction, each requires players to download large files like maps, characters, and textures, which are stored locally.
Cloud gaming would free players from these large downloads, but it relies on similar underlying technologies. There are substantial hurdles to overcome; unlike movies, video games generally provide each player with a different perspective, and many games allow players to make decisions that influence visuals in unpredictable ways.
The technical complexity needed to stream a video game, as opposed to a movie, is exponentially greater. Additionally, gaming requires a super-fast connection to minimize any perceptible delay between a player’s action and the corresponding response on screen. Companies like Sony and Nvidia have been working on cloud gaming for years, while Google and Microsoft have recently entered the arena with their own services.
With advancements in internet infrastructure, there is renewed hope for a revolution in this industry. However, this is just the beginning. If we can build a robust cloud gaming infrastructure, it could pave the way for VR and AR technologies to reach levels we’ve only seen in science fiction. Imagine attending a concert from home or getting a front-row seat to the Super Bowl without leaving your couch.
This could revolutionize how we consume sports and entertainment. Beyond that, it could change how we live and learn. Instead of discussing volcanic activity in class, you could put on a headset and explore a volcano in VR. Instead of going to school or work, we might just wear a VR headset and connect to our digital environment.
At some point, we could find ourselves in a world reminiscent of “Ready Player One,” where a virtual oasis awaits us with just a tap of a VR headset. While all of this is a long way off, the technology we’ve built and refined to stream our favorite shows could one day serve as the backbone for a much greater system of immersive experiences.
As we move towards a future where video, video games, and potentially VR dominate the internet landscape, it’s clear that the next frontier for streaming will require new infrastructure and ingenuity. Ensuring access to reliable broadband internet will be paramount, and making these services affordable will be another challenge.
But once we overcome these obstacles, streaming’s full potential could be realized—a technology that will define our future.
Streaming – The continuous transmission of audio or video files from a server to a client, allowing playback to begin while the rest of the data is still being received. – Example sentence: “With the rise of streaming services, students can now watch lectures online without needing to download large video files.”
Technology – The application of scientific knowledge for practical purposes, especially in industry, including the development and use of devices, machines, and techniques. – Example sentence: “Advancements in technology have significantly enhanced the capabilities of modern computers and smartphones.”
Compression – The process of reducing the size of a data file or stream to save storage space or transmission time. – Example sentence: “File compression is essential for efficiently storing large datasets in cloud storage solutions.”
Codecs – Software or hardware that encodes or decodes a digital data stream or signal, often used in video and audio streaming. – Example sentence: “Choosing the right codecs can greatly improve the quality and performance of video streaming applications.”
Content – Information or experiences that are directed towards an end-user or audience, often delivered via digital media. – Example sentence: “Creating engaging digital content is crucial for attracting and retaining an online audience.”
Delivery – The method or process of distributing digital content to end-users, often involving networks and servers. – Example sentence: “Efficient content delivery networks ensure that users experience minimal buffering when streaming videos.”
Networks – Interconnected systems that allow computers and other devices to communicate and share resources. – Example sentence: “University campuses often have robust networks to support the high demand for internet access among students and faculty.”
Bitrate – The number of bits that are conveyed or processed per unit of time, often used to measure the quality of audio or video streams. – Example sentence: “Higher bitrate settings can improve video quality but may require more bandwidth for streaming.”
Mobile – Relating to portable computing devices such as smartphones and tablets, which allow users to access information and services on the go. – Example sentence: “Mobile applications have become an integral part of how students manage their academic and social lives.”
Gaming – The activity of playing electronic games, often involving interaction with a user interface to generate visual feedback on a device. – Example sentence: “The gaming industry has seen a surge in popularity, with many university students participating in online multiplayer games.”