Feb 27, 2007

Decentralized Content Distribution Network

What would happen if a single web server were suddenly inundated with an abnormally high volume of traffic, such that the cost of serving the content far outweighed the cost per user the publisher had planned for? This is known as the "Slashdot effect," and it applies to a number of scenarios across the digital publishing realm.

What inevitably happens is that a small publisher creates a piece of media (an article, graphic, executable application, etc.) that somehow finds massive appeal among millions of people around the world. This opens a virtual floodgate of people trying to access the content online, and the servers (which the publisher is more than likely leasing) are collectively bombarded with content requests.

For a well-provisioned server this is usually not a problem, since bandwidth is allocated accordingly. But for low-bandwidth servers (or publishers who cannot afford large amounts of bandwidth), either the website in question is obliterated under the weight of the traffic, or the host charges massive overages that the publisher often cannot afford.

Decentralized content distribution can easily alleviate this: each node uses very little of its own network resources, yet collectively the content is served as though it came from a high-speed server. 10 KB/sec from a large number of servers adds up, just as pennies add up in the grand scheme of things.

So say your server alone were asked to serve 10 KB/sec. Against your monthly allowance this amounts to very little and is inconsequential. Now multiply that by 200 servers, and the content can be served at around 2,000 KB/sec or higher (depending on how much bandwidth each server actually contributes). Like a Kevlar vest, when the bullet (the millions of requests) hits the main server, the decentralized network intercepts it and spreads the impact across a large area in small doses.
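
To put rough numbers on that, here is the same back-of-the-envelope arithmetic in a few lines of JavaScript. The 10 KB/sec and 200 servers are the same hypothetical figures as above, not measurements from any real network:

```javascript
// Back-of-the-envelope arithmetic for the aggregation argument above.
var perServerKBps = 10;    // each participating server donates ~10 KB/sec (assumed)
var serverCount   = 200;   // number of servers carrying the content (assumed)

var aggregateKBps = perServerKBps * serverCount;   // 2000 KB/sec of total capacity
var originShare   = perServerKBps / aggregateKBps; // the main server carries 1/200th of the load

// aggregateKBps -> 2000, originShare -> 0.005
```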

For websites this seems to work amazingly well; in fact, some websites currently handle their traffic this way (serving 25 million users while the main server stays low on bandwidth usage). This got me thinking about how the same approach could be used to solve an age-old problem concerning online virtual reality.

In one of the earliest written accounts of an online virtual community (Habitat), Morningstar and Farmer pointed out that for a wide-scale virtual environment to keep growing, the serving of its content could not be centralized. Decentralization was key to the continued advancement and growth of virtual environments. I would like to believe this pertains to the "environment server" as well as the content server (the two are often separate), though I am currently focusing on a solution for the content end (with the environment server being a consideration well into the future).

When I muse on this, what I am thinking is that while current systems can easily handle tens of thousands of simultaneous users, the cost per user begins to climb sharply as the number of users grows. In essence, a centralized system sees its costs rise far faster than its user count.

Taking this into account, even the best-planned and best-programmed system will eventually hit a limit on how many users can coexist per centralized server before the costs become prohibitive to its expansion. And herein lies the point of decentralized content distribution networks.

While I am unfamiliar with the detailed mechanics of systems such as Second Life or There, I do understand that they take a fundamentally similar approach to the content served in their virtual environments. While Second Life effectively "streams" content using a derivative of the RealMedia system (Philip Rosedale was one of the creators of that format), I am wont to believe that the content still comes from a centralized system of servers. Of this much I am fairly certain, though it is quite possible that the Second Life system also groups users by node in order to offload the distribution of the content. I could be very wrong on that last point, but the behavior of the system seems to warrant such node grouping.

In recent months I have read articles describing how a self-replicating object in the Second Life environment managed to disable their servers, so in light of this I am leaning further toward the idea that they are running the Second Life environment on a centralized system.

This in itself is fine if a company in this field wishes to cap its user base at around 100,000, but past that threshold the cost per user will begin to skyrocket out of control as the numbers keep increasing.

In the end, since I am interested in the theoretical applications of virtual environments and their advancement, I have found the Active Worlds environment to be the most receptive to this line of testing, and I have continued trying to find solutions to the problems outlined in "Lessons Learned From Lucasfilm's Habitat" by Morningstar & Farmer, the centralization problem being the one I am attempting to address here.

Over the past few months I have found a very plausible candidate for solving this decentralization problem within the Active Worlds environment: routing the Object Path through a server-side P2P network that replicates and organizes the data from a main server, as a specific type of hash file, across roughly 260 high-bandwidth servers around the world. The more bandwidth the Object Path requires, the more efficient the CDN becomes, although in my early use of it I have noticed variable speeds (I have not yet given it enough testing or time to gauge its overall ability).
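
I won't spell out the actual setup (more on that below), but as a purely hypothetical sketch of the general idea: hash each filename so that the same file always resolves to the same mirror. The hostnames, the hash function and the example filename here are all invented for illustration and are not what the network in question actually uses:

```javascript
// Hypothetical sketch only; not the actual mechanism described above.
// The idea: hash an Object Path filename to deterministically pick a mirror,
// so every client asks the same mirror for the same file.
var MIRRORS = [
  "http://mirror1.example.com/objects",
  "http://mirror2.example.com/objects",
  "http://mirror3.example.com/objects"
];

// Simple string hash (djb2 variant); any stable hash would do.
function hashName(name) {
  var h = 5381;
  for (var i = 0; i < name.length; i++) {
    h = (h * 33 + name.charCodeAt(i)) % 4294967296;
  }
  return h;
}

// Rewrite a file request against the mirror chosen by its hash.
function mirrorUrlFor(fileName) {
  var index = hashName(fileName) % MIRRORS.length;
  return MIRRORS[index] + "/" + fileName;
}

// mirrorUrlFor("tree1.rwx") always returns the same mirror for that file.
```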

One thing I immediately noticed is that it actually works in the Active Worlds environment. Entering the new address in the World Properties dialog and hitting Apply will crash your browser, but after you restart it you realize that the change of address did take effect: the Object Path is being redirected through the CDN, which kicks off the hash replication and self-organization across the whole network (server side, not user side).

After some testing in the virtual world, I have seen roughly a 99% success rate when downloading files from the distributed network (99% in the sense that maybe 5 or 6 files will randomly time out while being located and thus fail to load properly, though simply exiting and re-entering the world seems to correct this automatically).

While I will not disclose exactly how this is accomplished, I will say that for the most part it works just fine. I would imagine it would perform much better under far higher stress than a few users entering at random, so this method would be best used for virtual environments where you expect a much heavier load on the Object Path (hundreds, thousands or more users). In theory, it should be possible to minimize the overall bandwidth consumption of an entire universe server's set of Object Paths even if it were filled to the brim with users (possibly serving every Object Path in a universe from a single cable modem and a home computer with low latency).
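
To give a feel for that last claim, here is a toy estimate. The per-user rate and the hit ratio are guesses for illustration, not measurements:

```javascript
// Toy estimate: how much traffic still reaches the origin Object Path
// once the distribution network absorbs most requests. All numbers are guesses.
function originLoadKBps(totalDemandKBps, networkHitRatio) {
  // Whatever the mirrors do not serve falls back to the origin server.
  return totalDemandKBps * (1 - networkHitRatio);
}

// 500 users each pulling ~20 KB/sec of objects = 10,000 KB/sec of demand.
// If the network satisfies 99% of that, the origin serves only ~100 KB/sec,
// which is in the ballpark of what a decent home cable connection can push upstream.
var origin = originLoadKBps(500 * 20, 0.99); // -> 100
```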

So, from theoretical application to proof of concept: at least the serving of the content can be decentralized, as Morningstar and Farmer suggested.

Feb 26, 2007

Update of Doom

So, among the things I was up to today, updating the website was one of them. I had previously thrown a ton of AJAX and JavaScript into the site but never actually put it to use, instead simply laying the foundations for a later date. While I am not completely finished with this AJAX thing, I did manage to accomplish quite a bit this round. And then something entirely odd happened...

Firefox decided it no longer wished to display PNG files. Actually, I should rephrase that - Firefox refused to display one particular PNG file I was trying to use for the new design. The image in question happened to be the new header image with the AW Gate, which I originally saved as banner1.png on the server. No big deal, I figured, since I had already used the PNG format for most of the site.

So I go to load it up in Firefox to check what it looks like and it simply doesn't show up.

Weird... so I crack open Internet Explorer 7 and it shows up fine. Stranger still. Then I ask a friend to check out the page using Opera (you know, the code nazi) and it shows up in Opera. Now I'm totally baffled...

Once again I open Firefox and still it doesn't show up. At this point I start tearing apart the HTML by hand to see if there is a missing div tag or something that IE7 would overlook... no dice. For all intents and purposes, the code was fine and it should have worked.

So after about an hour of tearing out my hair and using words reserved for less of a family occasion, I finally just open Photoshop thinking "This is a stupid reason for Firefox not to see this... it can't possibly be the reason..."

So I open the graphic and simply rename it, upload it to the server and bingo... it works in Firefox.

What the heck just happened here? The world may never know... In any event, along with using the reflect tag for some images on the site (I had previously set up the framework for this but never used it), I also redid the Updates section to match the theme of the website. Again, I used a framework I had set up but previously didn't use (the AJAX RSS feeder) to pull the RSS for the section you are reading into the custom layout of the website.
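
The feeder itself is nothing exotic; it boils down to roughly this pattern. This is simplified, the feed path and element ID are placeholders rather than the real names, and a feed on another domain would need a small server-side proxy in between:

```javascript
// Simplified sketch of the AJAX RSS feeder pattern with jQuery.
// "updates.xml" and "#updates" are placeholder names for this example.
$(document).ready(function() {
  $.get("updates.xml", function(xml) {
    var html = "";
    $(xml).find("item").each(function() {
      var item = $(this);
      html += "<h3>" + item.find("title").text() + "</h3>";
      html += "<p>" + item.find("description").text() + "</p>";
    });
    $("#updates").html(html);
  });
});
```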

While I was at it, I decided to make a few more graphics for the website (a logo for the Metaverse EX listing on the products page, and another for AW2EX) and throw the reflect on them. So all in all, things look good. There are still some things I need to work out later (like the header for the templates section), and I am thinking about changing the Updates link to read News instead - apparently the current wording may be confusing for some readers.

I also decided (out of sheer boredom) to place a Digg link on the first page as a private joke, to try and get a story about VR5 Online buried. Seriously, I wanted people to bury the story just out of boredom... lol. Either way I didn't care; I just wanted to see what would happen.

Also changed was the wording of the About Us section, to reflect our laid-back attitude (or more importantly, mine) and to announce that VR5 Online apparently qualifies as a Web 3.0 company. Seriously, I have no idea what that means. I was trying to validate some AJAX on the site to make sure it registered, and the site doing the validation said we do not qualify as a Web 2.0 company, to which I raised an eyebrow.

Further down the list it was overjoyed to inform me that we were a Web 3.0 company instead. Now I'm just outright confused, but my train of thought says that 3.0 is better than 2.0 any day. So if Web 2.0 is the next big thing, then Web 3.0 must be like a religious experience to geeks or something...

Anyway, that's my news for today... other than that, I'm still messing around with the AJAX stuff. Oh wait.. one more thing...

I noticed recently that there seem to be two prominent AJAX frameworks in use. In one corner you have Prototype, and in the other jQuery. Now, I know people from either camp could go on for years about the benefits of each, but here's the deal - I use jQuery simply because it more or less makes my life that much easier when coding. I haven't had a chance to mess with Prototype in detail yet, but it seems a hell of a lot harder to work with than jQuery.
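
For what it's worth, here is the kind of difference I mean: the same task (grab a page fragment and drop it into an element) in both libraries. The URL and element names are made up for the example, and you would obviously only load one of the two libraries at a time:

```javascript
// Fetch "latest.html" and put its contents into an element, in each library.

// jQuery:
$.get("latest.html", function(data) {
  $("#news").html(data);
});

// Prototype:
new Ajax.Request("latest.html", {
  method: "get",
  onSuccess: function(transport) {
    $("news").innerHTML = transport.responseText;
  }
});
```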

So what is the concern? Well, the Web 2.0 validator looks specifically for Prototype when checking for Web 2.0, not for jQuery (or both). So many of the things that would normally qualify as Web 2.0 and AJAX are simply a no-show to this validator, because there is no instance of Prototype being used.

Just something that came to mind on the spur of the moment. Why does Web 2.0 have to be ridiculously hard to deal with in order to garner any credit? I mean, if somebody creates an easy-to-use framework, why try to discredit it?

I don't think I'll ever understand how this works... and trust me, I have over ten years in sociology, so you would think I would have a bit of insight. After all these years, the only thing I know for sure is that people in general are lunatics.

And I mean that in the nicest possible manner, of course...

Heavily Medicated and Enjoying the Padded Room - Darian Knight


Feb 14, 2007

Random Insanity... and AJAX

Ok, so I am currently working on a new site - HaruOMFG (www.leviviridae.com) - which is now a web comic. The idea here was to make a website that is simple, artistic and, above all else, easy for the client to update.

In comes AJAX to save the day. Now don't get me wrong, I still hate AJAX with a passion, but in this case it made life somewhat easier. Instead of a ton of pages to write by hand, I simply made one page and set up a double asynchronous feed from two outside sources - namely her blog RSS and another blog made just for her comic.

And so I went to work adding the RSS feeder PHP and JavaScript files to the server (rewriting a large chunk of both to suit this jury-rigged approach to publishing) and voila! Now all the client needs to do is write in her blog (which she does frequently anyway) and post her full-page comics to the other blog, and the content will synchronize on her website automatically.
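
The client-side half of that setup looks roughly like this. The feed paths, element IDs and the name of the PHP proxy script are placeholders; the real files differ:

```javascript
// Rough sketch of the "double asynchronous feed": each feed loads on its own
// and fills its own section of the single page. "feed-proxy.php" stands in
// for the server-side script that actually fetches the remote RSS.
function loadFeed(feedUrl, target) {
  $.get(feedUrl, function(xml) {
    var html = "";
    $(xml).find("item").each(function() {
      html += "<div class='entry'>";
      html += "<h3>" + $(this).find("title").text() + "</h3>";
      html += "<p>" + $(this).find("description").text() + "</p>";
      html += "</div>";
    });
    $(target).html(html);
  });
}

$(document).ready(function() {
  loadFeed("feed-proxy.php?src=blog", "#blog");    // her regular blog
  loadFeed("feed-proxy.php?src=comic", "#comics"); // the comic-only blog
});
```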

Brilliant!

So, while I hate AJAX with a passion, in this case I found it incredibly useful. It also helped that the client is an artist, so I wasn't forced to make graphics for her site elements like I normally do. Instead I asked her to hand draw the parts I needed (giving her the sizes for each) and then scan them in to send over to me. I of course would clean up the scans and so forth before using them.

Total time for completion? About 6 hours start to finish. Go ahead and tell me that isn't fast...

I know of clients who are paying other companies to do a website for them; they are paying nearly $9,000 for the project and still waiting an entire week to see results (and the technology used to create the websites is pathetic - BackOffice and HomeSite). So yeah, 6 hours start to finish is a record for me, personally.

God bless Dreamweaver. And not even the newest one either... I am actually happy using Dreamweaver MX 2004 and Paint Shop Pro 7. I have Photoshop and the latest Flash 8 (or whatever), but I still find after all these years that much of this stuff can be done just as well with old-school approaches.

So HaruOMFG is considered 99% finished. It's online, the whole thing works, and the rest is just details. Which is cool, because the actual deadline for this was March 2nd, 2007. As you can tell, I am way ahead of schedule on this one... lol.

In other News

Joe Longcor, our newest addition to VR5, has been working on the revamped Edge Radio website using Flash and a handful of other nifty techniques. He's been messing around with Flash for some time, but this would be his first full website design from scratch.

So far, so good :) Much better than what the rival company Onalaskaweb.com has to offer. While I give Onalaska Web an A for effort, their designs aren't really up to par in today's world, nor are the prices they expect to charge for them.

More News Still...

I would like to apologize to the visitors to our world for the apparent neglect over the past few months. For the time being, VR5 Online has halted in-world design while we attend to more important, high-profile projects. I wish I could say more about it, but I think the NDA disqualifies me from doing so... I can say, though, that what we are working on is awesome :)

Ok.. enough for today.. time to get back to work again.

Super Secret Secret Squirrel - Darian Knight

Feb 7, 2007

The Tin Age of Gaming

Last month, Jamie identified 5 Reasons Why PC Gaming Is Broken. Jamie's article caught the eye of William Burns from vr5-online.com, who responds with his own survey of, and concerns about, the entire video game industry, past to present.
