::fibreculture:: At least 56K?
Adrian Miles
adrian.miles at bowerbird.rmit.edu.au
Thu Mar 1 13:13:06 EST 2001
At 12:50 PM +1100 26/2/2001, melinda rackham wrote:
>does this also apply to getting data out.. i was appalled first
>time i tried to look at my work in europe and it took forever to
>download on a really fat university connection..(um forgive lack of
>techiness)
This is partly anecdotal. I have stuff on servers in the US that has loaded blindingly fast at Melbourne University - considerably faster than to my desktop from the server sitting next to my desktop on the RMIT LAN. I have looked at my content from my RMIT server in Europe and America; performance varies, but I have not noticed any great difference between my content served from Australia and content served in the country I'm in.
In terms of your question, yes. There are lots of factors here, particularly if you're delivering things like video or audio. Basically my server has 10Mbit ethernet onto a switched 10/100Mbit network (though the link from my building to the main RMIT network is only 10Mbit), and then whatever RMIT has up to its gateway at Melbourne Uni (that is big). This lets the server deliver a lot of content, but that delivery is easily choked, as the following example shows:
I have a 5MB quicktime clip, for instance. Client A requests it; client A is at RMIT with 10/100Mbit into their machine. My server sends client A all the data their connection can handle (HTTP servers are pretty dumb this way). This means, hypothetically, client A gets pretty much all my bandwidth for as long as the download takes. My server will accept other connections, but they're going to get really crappy bandwidth. So my server, which could hypothetically handle 100 simultaneous requests, is slowed to a crawl because it has one client with a ton of bandwidth (the same applies to someone with a cable modem).
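To put rough numbers on that, here's a back-of-the-envelope sketch in Python - the figures are illustrative, not measured from my server:

# Illustrative arithmetic only: one fast client vs. everyone else.
SERVER_UPLINK_MBIT = 10          # the server's 10Mbit ethernet
CLIP_MBIT = 5 * 8                # a 5MB clip is 40 megabits

# Unthrottled, a 10/100 LAN client can take close to the whole uplink,
# so each greedy download monopolises the link for roughly:
print(f"~{CLIP_MBIT / SERVER_UPLINK_MBIT:.0f}s of the uplink per download")

# If the fat-pipe client grabs, say, 90% of the uplink, everyone else
# (modem users included) is left to share the scraps:
scraps_kbit = SERVER_UPLINK_MBIT * 0.10 * 1000
for others in (5, 20, 50):
    print(f"{others} other clients share ~{scraps_kbit / others:.0f} kbit/s each")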
The solution, in my example, is to use the qtchokespeed tag in my quicktime movie, where I can tell quicktime to deliver, say, only 30Kbit/s of bandwidth for that movie, so no matter how big your pipe you won't 'steal' all my bandwidth. (One of the many reasons I tend to be known as a quicktime advocate.)
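For what it's worth, the same idea can be done by hand on the server side. Here's a minimal sketch of a paced send loop in Python - this is not QuickTime's actual mechanism, just a generic rate cap to make the idea concrete; the file name and socket in the usage note are placeholders:

import time

CHUNK = 4096  # bytes per write

def send_throttled(read_chunk, send, cap_bytes_per_sec):
    """Pace a transfer so no single client exceeds cap_bytes_per_sec."""
    start = time.monotonic()
    sent = 0
    while True:
        data = read_chunk(CHUNK)
        if not data:
            break
        send(data)
        sent += len(data)
        # if we're ahead of the permitted rate, sleep back onto pace
        ahead = sent / cap_bytes_per_sec - (time.monotonic() - start)
        if ahead > 0:
            time.sleep(ahead)

# a 30Kbit/s cap is 3750 bytes/s; at that rate a 10Mbit uplink carries
# roughly 300 simultaneous streams instead of one greedy download.
# usage: send_throttled(open("clip.mov", "rb").read, sock.sendall, 3750)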
You'd be surprised how many people have quicktime or AVI files on their servers and don't realise that broadband access (on a LAN or cable) can actually really slow down their server's performance for other clients.
But to the general question. There are numerous things that affect access speed - the use of proxy servers, the speed of servers, bandwidth out, etc. - as others have commented on. But in general it is probably useful to realise that bandwidth in Australia is not a Telstra monopoly, so what we're facing is general policy issues rather than the specific commercial practices of a single company (though there are issues given Telstra's role as common carrier).
Also, I'd caution against generalisations about how bad Australian bandwidth is. We need to discriminate who we're discussing, where they are, and where their bandwidth comes from (who provides it). Bandwidth is no longer a single entity controlled by Telstra. There are ISPs that have their own pipes to the US, for instance; there's Optus of course, the universities, and private corporations may have their own bandwidth. Here upstream bandwidth is crucial: it's no good having a modem pool of 100 if your ISP only has a small connection upstream.
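The modem-pool arithmetic is worth spelling out - again a Python back-of-the-envelope, and the upstream figure is invented for illustration:

# Illustrative only: a well-stocked modem pool behind a thin upstream link.
MODEMS = 100
MODEM_KBIT = 56                  # each dial-up line
UPSTREAM_MBIT = 2                # a hypothetical small upstream connection

demand_mbit = MODEMS * MODEM_KBIT / 1000      # 5.6 Mbit/s if every line is busy
share_kbit = UPSTREAM_MBIT * 1000 / MODEMS    # what each caller actually gets

print(f"peak demand {demand_mbit:.1f} Mbit/s vs {UPSTREAM_MBIT} Mbit/s upstream")
print(f"so each '56K' caller effectively gets ~{share_kbit:.0f} kbit/s")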
Other anecdotal observations about bandwidth: in Norway I'm told people tend to download from Australian or US servers as it is faster than from British ones (I've never tested it and think it's fanciful, but it's the perception). Having a fast server with lots of network access makes a major difference to the speed of content delivery - sounds obvious, but it's surprising the difference it makes to clients. Also, the faster your computer, the faster the network runs in terms of downloads. I assume it's something to do with the system architecture decoding packets, but every time I've upgraded my hardware at my university there has been an immediate improvement in network performance from my point of view, even where I'm still using 10Mbit ethernet and there has been no change in network infrastructure.
The same applies to browsers and their rendering engines, and the same applies to FTP clients (I've had techs using FTP client x with lousy results; suggest they use FTP client y and their 'bandwidth' improves dramatically - it's the crappy client they're using, not the bandwidth, that has changed). And if you're not very 'techie' but are relying on your web browser to do your FTP work, well, that's like digging your garden with your kitchen fork. There are better tools that do better jobs, and the blame lies with the tools.
Also, as others have commented, network speed is subject to numerous variables. What impresses me in the US, when you use a cable modem on the eastern or western seaboard, is simply how much infrastructure there is behind that modem. My content loads damn fast there, and that means *big* pipes all the way, not just to the home. Much like those enormous US freeways, that backbone is as important for broadband access/delivery as the cable to your home. There's not much point in all of us having cable if the ISP's connection to the world is only 2Mbit. And remember there is *a lot* of the US where you cannot get cable access.
Finally, on the costs of broadband access: Mark Armstrong presented a paper, I think last year (Julian Thomas at Swinburne told me about it), which demonstrated that while the balance of US-Australia data traffic is in our favour, the payments flow in favour of the US. This, apparently, happens for most countries in relation to the US, so we are, in effect, subsidising US internet infrastructure costs - which is one of the reasons access is cheaper there.
So, from a policy point of view (got there in the end), this is very important and requires the renegotiation of various international agreements to more accurately reflect the genuine cost and direction of traffic. That sort of renegotiation might then help make costs more transparent here and reduce the cost of access. From a policy point of view I also think it is imperative that users have a decent-sized backchannel. If a cable modem can let my computer become a small server in its own right, with my content publicly available, then that sort of distributed content publication and delivery returns this technology to its users, which is the point.
cheers
adrian miles
--
lecturer in cinema studies and new media rmit university.
lecturer in new media university of bergen.
hypertext theory engine http://bowerbird.rmit.edu.au:8080/
video blog: vog http://hypertext.rmit.edu.au/vog/