Thursday, September 26, 2013
Ever discovered that a word you’ve been hearing forever turns out to mean something different from what you thought?
‘Tosspot’, I’d always assumed, was just an alternative for ‘tosser’. It actually means ‘drunkard’.
‘Enormity’ doesn’t simply mean ‘enormousness’. It can, but only in the sense of atrociousness, immoderation or extreme wickedness.
‘Dastardly’ can mean underhanded, wicked or cruel, but only secondarily. Its primary meaning is ‘cowardly’.
More fool me.
Thursday, September 19, 2013
The Gravy Train and the Great Onion-off
Grossly simplified, here’s the process for cooking a British
Indian Restaurant curry dish. Having invested heavily in pre-prep, you can
follow the five steps below to serve pretty much any dish within ten minutes of it
being ordered:
- Warm some seasoned oil in a wok
- Add whichever spices are unique to the dish
- Add pre-cooked meat and/or vegetables
- Add gravy
- Reduce
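(Since I spend my days in code these days, here’s that production line as a toy Python sketch. The 300 mL of gravy per serving comes from my own rough figures further down; the reduction time is just my guess, and obviously nothing here is a real restaurant’s recipe.)

```python
# Toy model of the five-step BIR production line above.
# 300 mL of gravy per serving is my own rough figure; the reduction time is a guess.
def cook_bir_dish(dish_spices, precooked_items, gravy_ml=300, reduce_minutes=5):
    """Describe, step by step, how a single curry gets assembled to order."""
    return [
        "warm seasoned oil in a wok",
        f"add the dish-specific spices: {', '.join(dish_spices)}",
        f"add pre-cooked items: {', '.join(precooked_items)}",
        f"add {gravy_ml} mL of base gravy",
        f"reduce for roughly {reduce_minutes} minutes and serve",
    ]

if __name__ == "__main__":
    for step in cook_bir_dish(["chilli powder", "paprika"], ["chicken"]):
        print("-", step)
```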
The spices unique to any particular curry are relatively
few; it’s the gravy that carries the body of the flavor, and it’s common to
every dish. Madras, korma, Bombay aloo… you name it; apart from the rice, onion
bhajis and naan bread, that gravy makes up the bulk of pretty much anything
you’ll eat. Because it takes hours to make, and gets used up at around 300mL
for each and every curry serving, it pays to make it in bulk. My last batch made
nearly ten liters, sufficient for 26 curries, and lasted me two months.
But therein lies the problem. I’m still experimenting with
how to cook each of the curry dishes, and if I’m tweaking the unique spices
that make, say, a madras a madras, but doing nothing to experiment with the
gravy, then the tests I’m running could be fairly inconsequential. I need to
optimize the gravy, and that means cooking it in smaller batches. And with
nearly 20 ingredients used in its preparation (onions, carrots, red and green
peppers, cilantro (coriander), tomatoes, garlic, ginger, vegetable oil, water,
salt, coriander (powder), cumin, asafetida, curry powder, fenugreek, turmeric,
sugar, and sometimes more), there’s a crazy number of variables to experiment with.
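To put a rough number on ‘crazy’, here’s a back-of-the-envelope sketch in Python. The assumption that each ingredient gets tried at just two quantities is mine, purely for illustration:

```python
# Back-of-the-envelope: the size of the gravy test space if every ingredient
# were varied at once. Assuming (my assumption) just two candidate quantities
# per ingredient, e.g. "usual amount" vs "double it".
ingredients = [
    "onions", "carrots", "red peppers", "green peppers", "cilantro",
    "tomatoes", "garlic", "ginger", "vegetable oil", "water", "salt",
    "coriander powder", "cumin", "asafetida", "curry powder", "fenugreek",
    "turmeric", "sugar",
]
levels_per_ingredient = 2
combinations = levels_per_ingredient ** len(ingredients)
print(f"{len(ingredients)} ingredients at {levels_per_ingredient} levels each: "
      f"{combinations:,} possible gravies")  # 262,144
```

Over a quarter of a million gravies under even that absurdly coarse assumption, which is why varying one thing at a time is the only sane way forward.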
Fortunately, there’s an obvious starting point. As Voigt
describes it:
“95% of the base gravy is onions. If you’ve ever wondered what gives BIR curry that slightly sweet gravy – it’s onions. If you’ve ever wondered what gives the curry sauce its thickness – it’s onions. If you’ve ever wondered what gave BIR curry that taste – it’s onions.”
So, today’s first step: what kind of onions should I be
using? I’m told that the choice of UK restaurants is Dutch onions, but
since they don’t seem to be available here in SF, today I’m just toying with
what’s available, splitting the usual 10L batch four ways and pitting white, yellow, sweet and red onions against each other.
Five hours later, and I’ve got four saucepans of brown
gravy. The white onion gravy perhaps has a hint of French onion soup to it; the
red onion gravy has a hint of… how would you describe red onions? The yellow
onion and sweet onion gravies are pretty similar, the former being
(to my surprise) a slightly stronger flavor than the latter.
Bottom line: they all taste like gravy. I’ll have to do
a split test when I make my next curries out of them, and perhaps next time I’ll
experiment with onion sizes and cooking temperatures/times. But today I don’t feel like I’ve had any great revelations.
And my apartment really smells of
onions.
Friday, August 23, 2013
Ballmer’s retirement a Pyrrhic victory for Microsoft investors
Microsoft investors were jubilant today as the stock rallied
(7.5% at time of writing) on the news that Steve Ballmer will be retiring
within a year. I think they’re getting a little ahead of themselves: while
Ballmer’s failings have been well chronicled, I think the bigger picture’s
getting overlooked.
The small picture is that the man presided over a litany of
strategic disasters like Windows Vista, 8, RT and Phone; the Surface surplus;
the doomed Zune and Kin; and the monetary black hole that is their Online
Services Division. Throw out the guy ultimately responsible, the sentiment
presumably goes, and look forward to a return to the good old days.
But the bigger picture is that Steve inherited a company
that literally set the agenda for the whole tech industry, and now bequeaths
one that struggles to stay relevant. The forces that kept Gates’ company strong
through the PC age – being able to destroy competition through existing market
dominance – are the same forces that are now keeping Ballmer’s company weak in
this new post-PC age, and it would be fanciful to think that a change at the
helm has much hope of reversing that.
Gates’ Microsoft was a company that achieved monopoly status
over a modern necessity, and wielded its power ruthlessly (even illegally) to
crush competition and expand into new market segments. The MO that got them
where they were went something like this: find a third party’s software product
that looked to be gaining success (Lotus 123 or WordPerfect, let’s say), and throw
money at a competing product (Excel or Word) until the incumbent could no
longer compete. The odds were always stacked in Microsoft’s favor since the
Windows tax enabled them to sustain loss-making products longer than their
competitors could survive price competition. Meanwhile the Windows tax itself
was protected via threats of punitive license pricing to OEMs who dared to sell
non-Windows PCs.
The strategy worked enormously well for the company. While the crushing of Netscape Navigator by IE landed Microsoft in legal hot water, the eventual settlement, negotiated down from a breakup of the company to a legal slap on the wrist, emboldened the new CEO, Ballmer, to continue on the well-trodden path.
Over a decade of me-too-ism followed. Google’s success begat
Bing, the PlayStation begat the Xbox, the iPod begat Zune, and so on it goes.
To Ballmer’s surprise, no doubt, leveraging the Windows monopoly wasn’t enough
to make these products a success. Bing never got close to Google’s market
share; the Xbox, though now stable, suffered year after year of ten-digit
losses; while the Zune was finally thrown away after the whole portable music
player segment entered terminal decline.
Moreover, when the company’s culture is to embrace, extend
and extinguish their competitors’ offerings, it’s probably inevitable that
vision would be the one trait to go missing.
In a classic case of the innovator’s
dilemma, Ballmer chronically neglected to innovate in segments that could
disrupt Windows’ monopoly (notably, cloud computing, smartphones and tablets).
By the time the success of Apple and Google woke the company up, it was already too late: throwing
your weight around in the style of Microsoft-of-old just isn’t going to work
when you’re playing catch-up.
And let’s face it, having seen how Microsoft behaves once
they build a dominant platform, there’s not going to be a whole lot of goodwill
towards the erstwhile tyrant of the tech industry in their struggle to conquer
the mobile segment. Wounded despots don’t tend to get a whole lot of love from
their subjects. They get pitchforked.
For sure, Microsoft’s best days are behind them. Steve’s squandering
of their chances in mobile has denied them any position from which to ever again
wield the kind of power they had when he took over. And for that I think we all
owe the man our deep appreciation.
As for today’s share price spike, look at it this way:
through its activity as a patent troll, Microsoft’s future revenues seem closer
tied to the success of Android than to Windows Phone. If you’re looking for a
recent precedent, may I suggest SCO as a case study?
Thursday, August 8, 2013
Breaking the habit of a lifetime
Of everything I miss about England (friends and family
excepted), good Indian food has long been at the top of the list. Back in
London, it was a habit of mine to order takeout pretty much every Friday night,
and save enough leftovers for ‘breakfast of the gods’ the next morning. But
here in the Bay Area, the state of Indian cuisine is pretty depressing. My
favorite meat dish, the chicken dhansak, is virtually impossible to come by;
the onion bhaji (by far the most popular appetizer in any British Indian restaurant)
I have literally never seen on a menu here; and even of the dishes that are
available, that magical curry house taste just isn’t there.
It took a few years for me to act, but at the end of last
year I could take it no more.
I finally kicked the habit of a lifetime – that of never,
ever, ever cooking for myself – and set out on a mission to recreate the
flavors I missed so badly.
First, let’s clarify that mission. What I’m trying to
recreate is a style of cooking for which the usual label ‘Indian’ isn’t really
suitable. The more accurate term would be ‘British Indian Restaurant’ cuisine,
or ‘BIR’, a label that seems to have stuck among its fans. Not only is it
clearly different from the traditional family cooking of Indian culture, it’s
also increasingly not actually Indian, its recipes being shaped in large part
by Pakistani and Bangladeshi influences.
In a nutshell, since the 1950s, Britain has done to curries what
America did to pizza. We took a style of cuisine we liked, adapted it over
decades to suit local tastes, and innovated around the theme to produce an
authentic new style of cooking that stands on its own and has become the new national cuisine.
As a Chicago deep-dish pizza is to ‘Italian’, so is a
chicken tikka masala to ‘Indian’. The many people who tell me they’d expect the
Bay Area’s curries to be pretty good on account of the migration of tech
workers from India are kinda missing the point. There are sizeable Japanese
populations in parts of South America, but I doubt you’d find many entertaining
knife-juggling teppanyaki chefs there.
(These are terrible examples by the way. I’m really not a
huge fan of Japanese-cooking-as-theater, nor, ironically, of the chicken tikka
masala, but you get the point).
To my mind, once a style of cuisine has reached a critical mass of local
acceptance, there’s a baseline of quality that takes root in the public consciousness (and also hegemony of spelling, though BIR isn’t quite there). A
poor quality curry house in Britain simply wouldn’t last long: locals are too
familiar with the baseline to stomach anything that falls below it.
But here in the Bay Area – and dare I say the rest of America? – Indian food
remains niche enough that a ‘common knowledge’ of baseline quality just isn’t there. And hence my absurdly ambitious goal:
I aim to cook mediocre
British Indian restaurant food
…because, and please forgive my arrogance, if I can reach
the level of ‘mediocre’ BIR cuisine, I’ll have exceeded the level of any of the
Indian restaurants I’d be dependent on locally.
(And, once that’s achieved, I then aim to cook some truly
great British Indian restaurant food. Gotta have a stretch goal).
9 months in…
Trying to solve the riddle of BIR cuisine seemed tough at
first. Anyone who’s ever tried following recipes from an Indian cookery book,
or bought ready-made Indian meals or sauces from supermarkets, knows that they
taste absolutely nothing like restaurant food. Unlocking the secret to those
magical tastes meant finding instructions from people who’ve actually worked at
restaurants.
One of my earliest references was Dave Loyden’s excellent Undercover Curry, a write-up of the
lessons learned by a curry fan so dedicated that he ‘quit his day job and
invested three years working undercover in UK curryhouses’, as the back cover
tells it. Dave’s book was an excellent starting point, but as a total newcomer
to cooking I needed a lot more reference material.
I discovered the work of Julian Voigt, a man who used
Dave’s book as a starting point (if I understood correctly), trained himself as
a curry chef and opened his own takeaway. His ebook, The
Secret to that Takeaway Curry Taste (new edition due soon), is a great
complement to Undercover Curry, and best of all, his YouTube videos are a great
learning resource: text recipes just can’t compare to the expressive power of
video demonstrations.
(A shout out also to Mick Crawford’s ‘British Indian
Restaurant Style Cooking’, which had some good nuggets of useful
information that the others were missing. And there are one or two other good
references too; if you’re curious, get in touch).
So, nine months into this little endeavor, here’s where I
stand:
- Pilau rice – generally good
- Chicken madras – usually bad and wildly unpredictable
- Chicken dhansak – reliably good
- Chicken korma – reliably great (I’ve really nailed this one!)
- Chicken tikka masala – bad (I think Dave Loyden’s recipe was a cruel joke. ‘Half a can of fruit cocktail’ is plausible at least, but ‘a third of a pan of ghee’ is not!). Need to try someone else’s recipe, but I was never a CTM eater in the first place, so I have little reference.
- Chicken dupiaza – once great (best dish I ever made), but I was never able to reproduce that greatness. Usually pretty good though.
- Sag aloo – reliably decent but never great
- Bombay aloo – reliably good
- Brinjal bhaji – usually good but somewhat unpredictable
- Mushroom bhaji – always at least decent
- Onion bhaji – sometimes great, sometimes poor. Usually good, but I make too many that just crumble apart, and I’m still learning how to fry them just right.
One thing I really need to work on is my pre-cooked chicken.
One of the things BIR restaurants really get right is the
marinade (while restaurants here in the Bay Area seem to not bother at all) so
I’ve been paying particular attention, but I just can’t get the flavor right.
My texture’s a little off too: aiming for juicy and tender, but too often
achieving rubbery and powdery.
[Photos: Bombay aloo, Brinjal bhaji, Chicken dhansak]
Tuesday, June 18, 2013
Just don't take stuff away from people
Alas, I never got to experience the taste of New Coke back in the eighties. And with just 12 signatures on an online petition to bring it back, it's likely I never will. As famous as the outcry against it was, the stories I read tell me that it wasn't actually an unpleasant drink: the bad taste in people's mouths had more to do with being deprived of the drink they'd loved for decades than with anything else. To quote an excellent Radiolab episode, 'loss hurts twice as much as gain feels good'. New Coke would have had to be pretty damn stunning to stand a chance.
Part of the opportunity of creating new brands, I've always thought (a new brand being, say, Tab, as opposed to a variation on an existing brand, like New Coke), is that it pretty much gives you license to put whatever random restrictions you like on a product or service, excused by 'this is a new thing. It works differently from other things, but that gives it some great benefits'.
Consider, as an example, the shopping model for furniture stores since time immemorial:
- Customers visit the showroom, choose the items they like.
- Delivery people show up at their homes, and, with full white-gloves treatment, install the items wherever they belong.
And compare that with the IKEA shopping model:
- Find a friend with a big heart, strong shoulders, and nothing to do on a weekend. Ideally he owns a huge car or small truck.
- (Stop by a U-Haul if he doesn't)
- Drive him to IKEA. Collectively pull heavy boxes from a shelf in a giant warehouse and cart them over to the checkout.
- Lug the boxes into the vehicle, drive home, and try to assemble the new furniture, from pieces, using your own tools and a cartoon that dare not use a single word of English to clarify the instructions.
- Buy him dinner.
Now consider this: if you were a furniture retailer, already established by the time IKEA comes along, there is no way in hell that you can tell your customers that they're going to have to pick, carry and assemble their furniture themselves from now on. IKEA got away with it, and became phenomenally successful, because they were able to communicate to customers that theirs was a radically different shopping model. One that had clear drawbacks, but also great advantages. The only way you could compete with them would be to create a new brand, with the same up-front message on how the shopping model works. To do otherwise would be to lose the customers you already had. Why? Because they already associate your brand with convenience and white-gloves treatment, and you know that:
Customers don't forgive you when you take stuff away from them.
Which brings us to the Xbox One PR clusterfail. Apologists for the company will tell you that Microsoft are simply ahead of the curve on the DRM issue, and innovating their way to a solution for two very real, and very serious problems: one, the way that developers, publishers and console manufacturers have been cut out of the profits made when games are re-sold at retail, and, two, the inherent inability to prevent piracy of games where there isn't a central system verifying entitlements.
As with anything else, when there's an abundance, no-one rocks the boat: it's when resources start to get scarce that fights break out. In the gaming world, resources have been getting squeezed for years now. I could write pages on why, but I'll summarize instead:
- Gaming has hit the mass market already. There isn't the opportunity for the kinds of exponential growth that drove profits up until around the PS2 days.
- Gaming hardware gets a free ride on Moore's Law. Software isn't so lucky. Customers demand games that take full advantage of what the hardware can offer, which means that, just to keep up, some poor art team has to model the difference between an enemy getting shot in the stomach vs getting shot in the kneecaps (yeah, I don't like it either).
- That art team is already the outsourced replacement for what used to be in-house development. The costs don't have much room to shrink.
- Smartphone gaming has stolen attention from console gaming, and driven the perceived value of games way down.
- The mid-sized developers have been dropping like flies throughout the 2000s. This gave the larger players less competition and hence room to expand in spite of the squeeze, but the corpses of former competitors are not a renewable resource.
So, kudos to Microsoft for firing the first shots in the war to reclaim profits? Well, put it this way: do you remember Qwikster? It's an eerily similar tale: Netflix could see which way the future was going (all content will be streamed, discs to become obsolete in a decade or so), and tried to get ahead of the game by going all-in with streaming, isolating the disc-based side of their business into a separate product (presumably so they could offload it to some poor sap before its money well dries up. Wouldn't you have loved to have been a fly on the wall for those negotiations?). But in blazing a trail to the future, they'd ignored the fact that the new bifurcated service would make things so much worse for their users: customers who used streaming and DVD-by-mail would now have two separate logins, on two different websites (how do you spell Qwikster again?), each with its own billing configuration, and neither (presumably) sharing your viewing tastes with the other to make recommendations.
Fortunately, Netflix was able to pull the plug on the venture before rolling it out. The outcry was huge. And the one big message:
Customers don't forgive you when you take stuff away from them.
The real folly for Microsoft is that they knew as well as everyone else that disc-based gaming is going to be gone in a few years anyway, and had they waited just one generation for that to happen, they could have avoided alienating their entire customer base and handing Sony an easy victory.
Furthermore, they really could have had their cake and eaten it too. Suppose their E3 presser had gone like this instead: announce two products, an 'Xbox One' which competes head-on with the PS4 and maintains the status quo on DRM issues, and an 'Xbox On' which is the same hardware, minus optical drive. The Xbox On would be sold for less (and can be, since there's no optical drive in the bill of materials), and its games could be cheaper too (also without harming profits, since less value is lost through resale and piracy).
Customers get a clear choice. They can maintain the status quo if that's what they prefer (or, like many, have no choice), or they can move to the new system willingly (and, indeed, are incentivized to do so). Microsoft loses nothing. Everybody wins.
(Of course, I'm considering only the DRM issue here. Other factors, such as the live-TV-centered design and higher price point are separate mistakes).
But, it is not to be. The opening salvos in the next-gen console war have been fired, and Microsoft's shots were aimed, surprisingly, at their own feet. But maybe not so surprising: there's a clear pattern to their recent MO: force all their customers onto the same products; products which lack the variation that customers need to fit their circumstances, and for which the design priorities are aligned entirely with Microsoft's strategic interests, as opposed to the needs of their customers.
Kinda like with Windows 8. You're a desktop user? Don't want to use an interface designed for touch on your 27" display? Well too bad, we're trying to condition you to feel more familiar with our new smartphone OS.
That blinkered vision -- that customers would actually want to buy the smartphone variant of the OS whose new interface had been sapping their productivity ever since it was forced onto their desktops -- is the same kind of naïve wishful thinking as the belief that they can take away the freedoms that gamers have long enjoyed with discs. The outcry was entirely predictable; what's harder to guess is whether or not they'll capitulate.
Tuesday, June 11, 2013
That diem wasn't going to carpe itself
I quit my job.
Actually, that's old news: I left at the end of last year; I just hadn't been sharing many details about what's been happening with me since. Facebook's not the best forum for rambling essays, and now that I have something more suitable, I feel I owe an explanation.
I should say for the record that SCE has been a great group of companies to work for, and there are good reasons why I stayed as long as I did. I have unbounded respect and admiration for some of the people who work there but there were a couple of big factors that made me feel it was time to move on. If you'll indulge me...
The industry isn't what it was
What made me want to enter the games industry in the first place was gaming as it was when I was growing up. They say it's constraints that make creative works interesting, and for gaming of the 1980s and 1990s, I heartily agree. Commanding a small, blocky sprite around a spartan, primary-colored world required a pretty healthy imagination, something that as a young boy I was more than happy to supply.
[Images: what I saw back in the eighties vs. what I imagined]
It was an exciting time to be a gamer; an age when hardware's modest increments in complexity opened the doors to great leaps in software's capacity for expressivity. As each new generation of gaming platform arrived, game mechanics that would previously have been impossible to implement were bringing us ever more compelling and immersive experiences. As the years rolled by, games companies inched closer and closer to delivering on-screen what previously existed only in gamers' imaginations.
But be careful what you wish for.
With the likes of, say, Commando and Out Run back in the eighties, the lack of realism provided more than enough of a gulf to separate in any gamer's mind the abstract fun of a simple game from any concerns of the horrors of war or the tragedies caused by reckless driving. But in the years since, we've done a pretty damned good job of building a bridge over that gulf, and while others presumably can defend modern games as merely enhancements of the simple concepts developed decades ago, it's an argument I can no longer swallow. Commando was a shoot-em-up. Call of Duty is a murder simulator.
[Images: what I'm seeing today vs. what I'm imagining]
And it's not just shooting games that lose their innocence as realism is cranked up. How advanced would you like your racing game to be? Gran Turismo took a lot of flak for not allowing its vehicles to show damage, while otherwise emphasizing how real its driving simulation was. But I don't think there's an alternative here. Modeling a realistic portrayal of a driver's grisly death in a spinning fireball of twisted metal doesn't sound like appealing entertainment (the first-person shooter crowd may disagree); yet seeing a car emerge from a head-on collision at 100mph with only superficial scratches is going to seem jarring given how faithful the game's visuals are to real life otherwise. You can't win.
I know I can't take the argument all the way and claim that all genres are ruined by increasing technical sophistication. Sports titles at least don't get grisly, they just get dull (their ancestors relied on high-speed cartoon physics to make them fun, a trick that 8-bit graphics handily disguised but that just looks dissonant with photorealism), while abstract titles like Tetris are fun whatever the implementation.
But, see, Tetris has been done already, and while I'm certain that there are great gameplay concepts still undiscovered, the industry (who can blame it?) is competing to sate the appetite of its customers. And for some time now, the demand has been for violence, the more graphic the better.
At heart, my interests lie in creativity and technology. What excited me about the games industry originally was the opportunity to play a part in shaping new technologies to bring about groundbreaking, exciting new games. But as time progressed, it seems increasingly clear to me that technical advances post the millennium have been more of a curse to gaming entertainment than a blessing, and that the kinds of games the market's now hungry for just aren't the kind that I'd be excited to make.
(But a big tip of the hat to Portal 2, one of the few games in recent years to be fun because of technical excellence, not in spite of it, and for using a first-person shooter engine in a way that didn't get third-persons violently shot).
I needed to vent my creativity
Working as a project manager for five years was a great job and left me with many treasured moments to look back on. I had hoped when I took the position that I'd have enough spare time to continue my enjoyment of coding as a hobby, but sadly it didn't work out that way.

As time went on, I missed getting my fingers dirty. I'd had a few ideas for personal projects that I wanted to work on, and the more that I missed engineering, the more it seemed like the right thing to do to take a career break and see if those projects had potential.
It's been about half a year now, and though it's taken me a little while to retrain in modern tools and languages (my previous professional experience requiring low-level C and hand-rolled assembler) and in spite of a couple of false starts, there's one concept I've been developing that's been getting a few people excited. It's early days yet, but I look forward to revealing more soon.
Monday, June 10, 2013
You know, for kids!
Microsoft supporters looking for a ray of sunshine amid the storm clouds of Windows 8’s user antipathy and Windows Phone’s trifling market share often cite the Xbox as the one example of how the company has found mass-market success in a new consumer segment. So at a time when Windows itself is under unprecedented threat, and the future of computing looks to be leaving the company behind, today’s Xbox One news from E3 is making me wonder why the company isn’t trying harder not to extinguish one of the few bright spots its future may have held.
Would it be absurd to suggest that there are forces within
Microsoft who have something to gain from bringing the whole company down? Yes.
But if there’s an alternative narrative that explains why Microsoft is repeating
many of the mistakes that Sony made with its PS3 launch, please enlighten me. So
far, the similarities are uncanny. The playbook reads something like this:
1. Lose sight of your
target audience
Sony’s lofty ambition was to launch a set-top box that would
be not just your games console, but a gateway to your whole connected life. In
2005, Ken Kutaragi announced to attendees of the company’s E3 press conference
that the PS3 would feature two HDMI ports, three Ethernet sockets, and memory
card readers for SD, Memory Stick and CompactFlash. Fortunately, much of that
waste was scaled back by the time the machine launched, but the focus on
non-game uses surely cost the company engineering resources that would have
been better spent developing compelling games and gaming technology. More so
than photo viewers, video editors and the like, I would argue.
Microsoft’s Xbox One reveal last month gained notoriety for its
emphasis on being a new TV device rather than a new gaming platform. At a time
when the nature of TV is fundamentally changing, it’s stupefying that the
company is embracing the past of television rather than its future: nothing
says anachronistic like bundling an IR blaster with a next-gen console. And
nothing says that you’re ignoring your target audience like devoting the big
reveal to a completely different market.
2. Get the launch
price very, very wrong
Spectators rightly balked at Kutaragi-san’s suggestion that
fans would want the PS3 so badly that they’d work a second job in order to
afford its launch price. As a reminder: $499 for the basic model and $599 for a
premium SKU.
That was in 2006, nearly two years before the first iPhone
games hit the marketplace. In the years that passed, the established console gaming
companies stood by and watched the mobile upstarts eat their lunch, leaving
them just scraps of market share, and product prices eroded to the extent that trying
to charge even a few dollars for a game in the highly competitive mobile space
is a kiss of death. Freemium as in beermium is the new normal for mobile. Can
consoles fight the trend?
The tea leaves aren’t looking good. Attendees of Nintendo’s
GDC keynote in 2011 will remember Satoru Iwata’s pleading cry to developers to resist
the currents of price erosion and to double down on trying to make high-value
games a compelling proposition. It was wishful thinking. Shortly after the
keynote, the company watched sales of its new 3DS console fall off a cliff, and
was forced to reduce its price from $249 to $169, an unprecedented six months
after launch.
Their next console launch, the Wii-U, went arguably worse.
At the time of writing, the machine’s been available for seven months, and
Nintendo are still demanding the original $299 price point. But with weekly
sales charts showing the Wii-U being outsold by the original Wii, now six years
old, it’s looking like an increasingly untenable position.
The good news for Nintendo is that, at least for the 3DS,
sales are looking much healthier since the price drop. So, at a time when the
marketplace has never been so price-sensitive, MS are asking $499 for the One? Good
luck with that.
3. Get Phil Harrison
to tell customers it’s all for the best
If sabotaging one of the few divisions of Microsoft with a
bright future was the aim, stealing the PS3’s launch playbook is the right way
to go about it. And it’s an amusing coincidence that Phil Harrison, Sony’s
apologist for the PS3 at the time of launch, is now reprising that role at
Microsoft.
It’s a fanciful thought that there are forces within that
company who stand to gain from bringing it all down, Hudsucker Proxy-style. But
while it’s a stretch to believe that Microsoft is full of super-smart people
conspiring to fail, it’s also hard to believe that it’s manned by people who
are trying to succeed but are dumb enough to be responsible for their recent
litany of strategic disasters, from alienating Windows users with an unfamiliar
and deeply flawed Windows 8 UX, to taking advantage of gamers with the Xbox One
price, focus and draconian DRM.
If The Hudsucker Proxy is a fitting analog for what’s happening
behind the scenes, Phil’s an unfortunate choice of patsy. To his credit, he’s not
as dumb a person as Norville Barnes was. But to Microsoft’s detriment, he’s
also not as smart.