I said in my opener for this blog that it was going to be about random things, not just work. Today's random thought is about my "accent". I haven't been living in Canada that long (a little over 3 months), but I'm already pronouncing words differently. Sure, some of that is because every single person around me says them that way, and I'm just accommodating so I don't sound funny. Like "pro-cess" instead of "praw-cess". That's on purpose.
But the strangest thing is that I've noticed I'm starting to pronounce "out" differently too. Not "oo-t", like Americans are fond of incorrectly imitating. Canadians don't say that. They say "ou-t", as if they're pronouncing both the O and the U. Americans say "ow-t". My change is subtle, but I noticed it because I wasn't trying to modify that word in any way. It's halfway between the way I used to say it and the way Canadians say it. Canadians would still identify me as an American. But Americans would think I'm saying it a little funny.
Strange what language does, eh?
Tuesday, October 27, 2009
Friday, October 23, 2009
QNX and Cisco's CRS-1: delivering the world's content at 2600 DVDs per second
Last week, Cisco did us the favor of presenting to QNX some of the things being done with their CRS-1 (Carrier Routing System), which happens to be powered by QNX Neutrino. That presentation brought out the geek in me.
It was pretty amazing to learn that IOS XR, Cisco's ultra-reliable OS, which lets them push 92 terabits per second across hundreds of parallel processors, is based on a straight, unmodified QNX kernel and QNX Transparent Distributed Processing. Now a terabit is a lot, and 92 Tbps is hard for a mere mortal like me to grasp, so I converted it into some more meaningful measurements. That's equivalent to squirting out roughly 2,600 DVDs per second, or 18,000 CDs per second. Wow.
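If you want to check my back-of-envelope math, here's a minimal sketch of the conversion. The disc sizes are my own assumptions (4.7 GB for a single-layer DVD, 650 MB for a CD), so the exact numbers wobble a bit depending on what you pick, but they land in the same ballpark.

```python
# Back-of-envelope check on the "2,600 DVDs per second" figure.
# Assumed disc sizes (mine, not Cisco's): 4.7 GB single-layer DVD, 650 MB CD.
bits_per_second = 92e12            # 92 Tbps aggregate throughput
bytes_per_second = bits_per_second / 8

dvd_bytes = 4.7e9                  # single-layer DVD
cd_bytes = 650e6                   # standard CD

print(f"{bytes_per_second / dvd_bytes:,.0f} DVDs per second")   # ~2,400
print(f"{bytes_per_second / cd_bytes:,.0f} CDs per second")     # ~17,700
```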
CRS-1 and QNX were responsible for delivering North American coverage of the Beijing 2008 Olympic Games. The high-definition video feeds were all pumped over the Pacific; real-time HD video on this scale was a first. The video went through CRS-1 equipment highly compressed, had local advertising inserted for each market, was re-integrated with the real-time feed, and was shipped back out to the individual markets. With a high of 107 million people in North America alone watching the games, that's a lot of pressure for fail-proof, glitch-free, real-time delivery.
Dozens of worldwide mega-corporation telecommunications carriers rely on the CRS-1 for POTS, mobile phone, video, cable, and Internet feeds. There's a whole whack of countries that could run their entire country on just one CRS-1. Cisco's CRS-1 telecom customers aren't public, but the majority of them are household names. There's a very good chance that your voice or video touches QNX every day.
People like my Mom don't care--she can barely operate a cell phone, let alone wonder about all the miles of cable, crates of equipment, and crowded server farms that make the modern world work. But for even a part-time geek like me, it's pretty cool to find out what makes things tick. Especially when it's ticking to the beat of your company's software.
Wednesday, October 7, 2009
QNX CAR wins Adobe MAX: Dollars, Developers, and Drivers.
News flash--QNX CAR won the Adobe MAX award in the mobile category! Of course, this is wonderful news for everyone at QNX. But what does it really mean?
You can interpret this in several ways:
- QNX is creating technology for the car that competes with the best offerings for mobile phones.
- The automobile is extending a person's ubiquitous connection to the cloud, just like smartphones have.
- Unlike GenIVI, QNX CAR is actually delivering on the promise of a standardized automotive platform by bringing in applications from many disparate companies.
- Adobe Flash has become a legitimate force within the automotive electronics environment.
- QNX is brushing off the stodgy image of the automotive market as slow-moving and non-innovative in electronics.
- The QNX CAR-based automotive infotainment system is becoming a new avenue to attract the best in application developer talent.
Monday, October 5, 2009
The NVidia GPU Technology Conference, or Graphics Guys Know How to Have Fun
If you walked into the lobby of the Fairmont San Jose hotel this past week, you'd be confronted with this machine and a video of it in use. It was created by Adam and Jamie of Mythbusters fame for Nvidia, and it illustrates parallelism quite dramatically. It's a mega paintball cannon that paints the Mona Lisa in one single shot. Think ink-jet printer on a big scale. As far as geek stuff goes, this is pretty darn cool.
That's probably a good word for the whole conference. As far as tech conferences go, the Nvidia conference was pretty darn cool. The keynote speaker on the last day was the CTO of Industrial Light and Magic, with a follow-on detailed talk by one of their key geeks. ILM are the guys who do the CG work on practically every film released these days. They gave a big presentation, including lots of full-screen video, showing exactly how they use parallel GPUs to save thousands of hours on their rendering farms. They demoed pouring absinthe and worked that into the talk in a non-awkward way (I was impressed). They showed us parts of movies that hadn't been released yet. Those parts will appear blacked out in the YouTube video, and the exclusivity was flattering. Finally, they couldn't resist doing some audience participation--making everyone in the audience act out a cellular automaton to demonstrate wave and point sources and their parallelism. Talk about taking Life to the next level. Using humans to simulate a cellular automaton is something that even the most stuffy people around me seemed to find amusing. If you ever get the chance to attend an ILM-presented keynote, my advice is--don't you dare miss it.
All in all, a very interesting and fun conference to go to if you ever get the chance. My presentation had far fewer explosions than your average GPU session, since I was talking about automotive graphics. Avoiding explosions is usually pretty fundamental in automotive. Instead, I explained the QNX composition manager: how it uses the OpenKODE API to merge all kinds of graphics output into one seamless display, and how we work with Nvidia to enable those types of applications. Yeah, no explosions. Given that the sessions around me were showing how to create particle simulations of flames or volumetric simulations of fluids, I certainly felt a little jealous of their cool presentations. I'd like to think that the people who attended my session learned about the cutting edge of in-vehicle graphics, even if it wasn't on fire. Or being bombarded by paintballs.
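For readers who've never met a composition manager, here's a toy sketch of the underlying idea: several independently rendered surfaces get alpha-blended, back to front, into one frame for the display. The layer names, sizes, and colours are invented for illustration; this is not the QNX or OpenKODE API, just the concept behind it.

```python
import numpy as np

# Toy illustration of composition: blend independent surfaces into one frame.
W, H = 320, 240

def surface(rgb, alpha):
    """An RGBA surface filled with one colour (stand-in for a real app's output)."""
    s = np.zeros((H, W, 4), dtype=np.float32)
    s[..., :3] = rgb
    s[..., 3] = alpha
    return s

layers = [
    surface((0.1, 0.3, 0.1), 1.0),   # hypothetical navigation map (opaque background)
    surface((0.2, 0.2, 0.8), 0.6),   # hypothetical media-player widget, semi-transparent
    surface((0.9, 0.1, 0.1), 0.3),   # hypothetical warning overlay
]

# Alpha-blend back to front into a single RGB frame.
frame = np.zeros((H, W, 3), dtype=np.float32)
for layer in layers:
    a = layer[..., 3:4]
    frame = layer[..., :3] * a + frame * (1.0 - a)

print(frame.shape)  # one composed (H, W, 3) frame, ready for the display
```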
Now that I know the competition, I'll make sure to up my game if I talk again next year.