I’ve always been a bit pessimistic about the viability of buying a PS5 for the average gamer. Considering its $499 price tag for the standard, non-digital version, coupled with $70 price tags on AAA titles, the expense always seemed to hang just out of ‘justifiable’ range. Compared to a gaming PC, the price might seem quite low, but once you factor in the cost of controllers and the fact that you have to pay for the right to play online, the cost gap between the two platforms quickly closes, and PC gaming, with its constant supply of discounts and free online access, actually becomes cheaper within a year or two.
Aside from the cost to the user, the number of games actually worth playing on the PS5 also struck me as a little thin to justify the purchase. If I were to pick one up, I’d grab Demon’s Souls, The Last of Us Part II, and something related to Spider-Man, then most likely never touch the console again. In short, these titles offer me something exclusive to the PS5 that my PC can’t compete with by virtue of not having access to the games (and that’s not even true anymore in the case of Spider-Man: Miles Morales). The overall trend of PS5 purchases seems to support my doubts about the console’s longevity.
Data provided by SportsLens shows rather clearly that the initial fervor surrounding the PS5’s release is cooling quickly. In fiscal year 2020, Sony saw 338.9 million game title sales. Those numbers are split between physical copies and digital downloads, the latter of which made up roughly 70% of all sales, to no one’s surprise. By FY 2022, however, the total had dropped to 264.2 million sales, a consistent downtrend in interest toward the console and its titles.
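For a sense of scale, a quick back-of-the-envelope calculation on the SportsLens figures cited above (the only numbers used are the ones quoted in this paragraph):

```python
# Back-of-the-envelope decline in Sony game sales from the figures above.
fy2020_sales = 338.9  # million units sold, FY 2020
fy2022_sales = 264.2  # million units sold, FY 2022

drop = fy2020_sales - fy2022_sales
drop_pct = drop / fy2020_sales * 100
print(f"Drop: {drop:.1f}M units ({drop_pct:.1f}%)")  # Drop: 74.7M units (22.0%)
```

That’s more than a fifth of the FY 2020 volume gone in two fiscal years.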
I couldn’t prove to you, definitively, that the core issue behind these figures lies in the longevity of the console’s games, but consider the Nintendo Switch for a moment: a console that’s been supported for over six years, has access to a long line of exclusives such as Breath of the Wild, Smash Bros. Ultimate, and the newly released Tears of the Kingdom (just to name a few heavy hitters), can be bought as a handheld for only $200, and functions on the go instead of being strictly immobile. These are attributes that the PS5, in the most generous of terms, can’t readily compete with.
For gamers who have been paying even half attention, the aforementioned Tears of the Kingdom alone would be reason enough to go for a Switch over a PS5. The console is cheaper, and while the game is stuck at the new normal of $70, the purchase would serve as a gateway into the world of Nintendo games, which are pretty much all worth your time by any reasonable standard of game review. They are, by and large, games made to be played offline with your friends and family, which means you aren’t heavily tempted to buy into the online subscription model at a whopping 20 bucks per year (vs. PlayStation’s $60 annually).
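To put rough numbers on that price gap, here’s a minimal cost-of-ownership sketch using only the figures quoted in this piece ($499 PS5 disc edition + $60/yr PS Plus vs. $200 Switch + $20/yr Nintendo Switch Online); game purchases are assumed equal on both sides and excluded:

```python
# Illustrative console cost-of-ownership comparison using the prices
# quoted above. Games are assumed to cost the same on both platforms
# and are left out of the math.
def total_cost(console_price: int, annual_sub: int, years: int) -> int:
    """Console price plus the online subscription over `years` years."""
    return console_price + annual_sub * years

for years in (1, 3, 5):
    ps5 = total_cost(499, 60, years)     # PS5 disc edition + PS Plus
    switch = total_cost(200, 20, years)  # Switch + Nintendo Switch Online
    print(f"{years} yr: PS5 ${ps5} vs Switch ${switch} (gap: ${ps5 - switch})")
```

The gap starts at $339 in year one and only widens with every year of subscription fees, which is the Switch argument in a nutshell.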
Looking at the difference in sales between exclusive titles and third-party creations, one can see a pattern of decision-making like the one I outline above: people buy the PS5 because it holds access to certain exclusives, then ditch it when offered the choice to play other titles on different consoles or PC. Between FY 2021 and FY 2022, sales of Sony exclusives dropped only 2%, while third-party titles dropped a whopping 15%, suggesting that most people are only purchasing the console for games that can’t be played elsewhere.
It’s also inconvenient to note that, despite revamping their subscription model, Sony lost 600,000 PS Plus subscribers, which suggests that players aren’t just holding off on buying games; they’re also holding off on using the console at all, or at the very least aren’t impressed with the online selection offered on the PS5.
As much as I love the PlayStation legacy and its ability to craft one-of-a-kind experiences, exemplified by Sony’s longstanding relationship with Naughty Dog, I’m a bit embarrassed to admit that their delivery on the PS5 experience has seemed a tad underwhelming, especially with the likes of Nintendo breathing down their necks. Maybe the second half of this year can change that?
I think a lot of what drives video game companies to re-release old versions of their games gets chalked up to nostalgia, when the more obvious and abundant force is the power of a second draft. RuneScape Classic was ahead of its time as a MUD displayed visually, and what followed as RuneScape 2 was a rather harsh step away from its predecessor. Items created en masse via duplication bugs entered the game economy, stats were transferred over from older accounts, and what was left wasn’t really a fresh start in a new world, but a continued journey that inherited many of the flaws of the previous one by virtue of not starting players off with fresh accounts. The same is true of the rather slow and painful push into RuneScape 3, which today has so many flaws mixed in with the good that the game deserves an article all to itself just on the tedious nature of trying to define it.
Classic WoW took the approach of “sharding,” a method of layering players on the same server into a sort of sub-server so that high-traffic areas early in the game’s lifespan weren’t overrun with questers all trying to do the same XP grab. It didn’t work perfectly, but it did help players complete their early game in a timely manner, and it serves as an example of developers reworking a system retroactively to create a more enjoyable experience.
The Old School RuneScape team (OSRS being a re-released version of RuneScape 2) started every account fresh, took a democratic approach to game updates, and outright locked content (such as the Construction skill) until they could ensure it was ready for player use without incident (not naming names, but I’m sure nobody wanted to be the victim of another Fally Massacre). Despite being riddled with bots, the early game experience was truly something to behold. Players mining essence for gold, players spamming colored and animated trade messages in the Varrock bank, people struggling to find half-decent gear once they hit 40 Defence, the whole kazoo. It was awesome.
And like all awesome things, it came to an end. Today, OSRS spends its time pushing the envelope for what players can expect as an experience all the way to max stats. New quests, new game modes, new mini-games, new everything. In addition, the OSRS dev team spends a great deal of its energy theorizing ways to create a healthy sense of longevity in the economy. Item sinks, gold sinks, and that sort of thing. In short, the game is doing well, even though it has its fair share of problems, and even though it’s far different from how it was upon release.
And that leads me back to RuneScape 3. Where the OSRS dev team had the foresight to prioritize longevity in their game design, the RuneScape 3 team absolutely did not. RS3’s devs created the Invention skill, which consumed old, mostly useless items in large quantities as a training method. This brought the prices of those items up in very successful fashion, and that, ladies and gents, is where the compliments from me stop.
RS3 suffers from “content bloat”: a sickness that appears when developers fail to think ahead about how space is used in their game and, after enough time, end up with a world more akin to a carnival with rides packed together than to a natural, seamless space that might exist in reality (or, at least, in the game’s reality).
Take a walk through RS3 for five minutes and you’ll find all kinds of distractions at odds with one another. Lore-building NPCs smashed right next to a bank that’s awkwardly placed on the side of a mountain, which plays host to a teleport pad placed not ten seconds away from another teleport pad, which sits next to yet another bank hosting a dock to ye-olde Dungeoneering training. And that’s just one part of the map. In other parts you’ll find old training methods that are now useless but take up a good 15% of an area’s mass, next to three other large eyesores that no one uses. This is the result of content bloat: the RS3 devs added one of these things at some point by itself, and that was fine. Then they added another thing, and another, and at some point they realized they had added so many things that the map was no longer a natural-looking town or swamp, but just a collection of awkward, unattractive, game-like buffooneries.
And this leads me to my desire: a RuneScape 4. Clean up the world, update the training methods for a sense of consistent viability, make old combat equipment useful for specific content, make top-tier combat equipment worse at specific content, reset the economy, reset the stats, let ironmen dungeoneer together, release group ironman mode, and change the name of the game from RuneScape 4 to Gielinor. Add more item sinks and gold sinks, lower the rate at which resources enter the game, and make it so that non-ironman players have real reasons to train every skill beyond simple quest requirements.
That’s maybe 3-4 years of work, shouldn’t be too hard.
A part of me feels as though incredible, sweeping changes like those Riot is proposing for Patch 13.10 are indicative of a business mindset that emphasizes novelty over consistency. This stands in great contrast to Valve’s approach to balancing and developing Dota 2, which favors an extremely long-form basis of balance and sees heroes and their respective items altered very, very slowly, with big changes coming once every few years at most.
What does all that mean, and what the hell am I talking about?
League of Legends goes through patches just like any other game. These changes consist of character alterations, item changes, and content introduced or removed. And because it’s the biggest esport in the world, League’s patch notes are seen as far more than just a method of game balance; they’re treated as things that can, and will, upset or help a game with nearly two billion dollars of annual revenue. Developers, artists, professionals, and casual players alike all rely on the balance team to do their job safely so as not to flip the table on something they all depend on as a job or a hobby.
13.10 is the kind of patch that I like: New items introduced, old items brought back, existing items reworked, champion changes, map changes, the whole 9 yards. That said, it upsets me quite a bit that these changes weren’t introduced during pre-season, when most changes to the game are supposed to take place. To be completely fair to Riot, I don’t think the changes being introduced could have been completely foreseen as helpful or needed during pre-season simply because the problems they fix didn’t exist until pre-season had already passed. That said, it does feel like Riot is holding out on big changes for the sake of creating a manufactured sense of novelty instead of a natural one. That is to say that Riot holds out on sweeping changes for longer periods of time than they need to so that the player base sees any changes at all as the cleanest breath of fresh air they’ve ever had the pleasure of sucking up.
The truth is, I believe Riot would actually stand to gain from being far, far more aggressive early in the year and during preseason than they currently are. I mean, why couldn’t Statikk Shiv exist right from the get-go this season, huh? To quote Douglas Robb: “…the reason is you.”
You, the league player, are seen as a product that ripens with a measured amount of exposure to your addiction. Too big of a hit too quickly, and you overdose, leaving behind the dealer with what’s tantamount to pocket change. Think of the effect URF has on casuals and you’ll know exactly what I’m talking about. Too small of a hit, and you find a new dealer outright because you need a better fix. And Riot’s methodology of patch changes follows the guidelines of making sure their player-base gets just enough of a kick to stick around while also being just mildly upset with XYZ aspects of the game, which Riot themselves created, intentionally or otherwise, just so that they can save the day by making the necessary alterations after enough teasing and bam, another successful deal (and you’ll be back for more.)
I’m not anti-business, and I understand money trumps all in a business as big as Riot Games, but as stated, I believe they have more to gain by approaching their patch notes with aggression and fervor moving forward, as opposed to the half-measures they’ve been exhibiting this past year. And if that sounds too harsh, remember to ask yourself this question: “Why couldn’t Statikk Shiv have been here the whole time?” There’s no reason it couldn’t have. Riot could have nerfed it if it was a problem, but chose to remove a fun item instead. They made that choice, just like they did with Ohmwrecker and Banner of Command.
Riot Games, I beg of you: take the leash off and let League have a wealth of niche, strange items for players to further define their playstyles with. And for fuck’s sake, don’t remove Statikk Shiv ever again or I’m changing dealers.
Dark Souls III offers something that its kin can’t seem to match up to on a consistent basis, and that something is a level of refinement that can only come around once in a decade (or two). Almost to a fault, Dark Souls III creates a hyper efficient pattern of gameplay loops that follows the structure of: big area for exploration, small enemies, boss blocking next area, repeat.
It’s a fun loop to participate in, but anyone who seeks a more natural approach to game-world design will find the theme-park level of tour guiding a letdown and, in some cases, an annoyance. Instead of being offered a world to explore and, as in the previous two Dark Souls titles, slowly figure out how to traverse most efficiently on a new account, the player is strung along a very, very predictable path forward. This can be said, somewhat, of Dark Souls II, but Dark Souls I does a brilliant job of layering the entirety of its world above, below, and beside itself. And when it couldn’t do that efficiently, it delayed giving the player the tools needed to warp around the map until absolutely necessary for seamless gameplay.
Dark Souls III has no compunction about tossing the player the tools for teleporting around the map immediately, and it tries its best to turn that design into a positive instead of working against it. For players who prefer getting straight into the action and delving into the combat, this is an absolute plus. But for the aforementioned players who care more about the world itself, or at least as much as the combat, having a world designed to be teleported around rather than walked, fought, and “short-cutted” through will quickly tire the mind and bore the senses.
The one true exception to this rule is the fight with Dancer of the Boreal Valley, which, despite being an end game boss fight, is actually available right at the start of the game if the player chooses to commit #ViolenceAgainstOldWomen™. And I, like many cultured men before and after me, will always opt to commit such acts when offered the chance by a dev fellow (or otherwise).
The Dancer showcases the refinement of Dark Souls III on a technical, micro level where the world displays it on the macro. With the Dancer, every single move is not only readable on the first attempt, but can always be manipulated by positioning relative to the Dancer’s angle against the player. With a mix of physical, dark, magic, and fire damage, there isn’t really any one way to shore up your defenses against the Dancer aside from “gitting gud.” Additionally, the boss’s moveset is downright satisfying to admire, with each move representing a free-flowing dance (go figure) amidst a duel, as opposed to raw brutality. The hitboxes, the move manipulation, the required patience to slowly wear the Dancer down, and the visuals are all top tier here, and because of the refined combat, it’s all tied together into one of, if not THE, best fights in the Dark Souls trilogy.
And, to go back to my original point, beating the Dancer early allows the player to access a late-game area early, which is the only real time in the game where the player has the option of taking the unbeaten path at all, unlike Dark Souls I & II, which allow the player to go every which way immediately, if they have the skill.
The combat in Dark Souls III is also the trilogy’s best. The movement, both when using the lock-on mechanic and when free moving, is completely untethered from an integer system and provides a full 360-degree range of motion for the player, which is sad to call an actual improvement over the former two games, but it absolutely is. The weapon movesets range from basic to outright gorgeous and satisfying to play with, and the range of weapon viability, although obviously tilted in favor of speed, is still varied enough to make multiple playthroughs for different builds worthwhile. The PvP is about as tight as FromSoft can reasonably be expected to get at this point, and it uses the previously mentioned attributes to rein itself in as an overall strong experience, with a community still battling it out to this day.
Not to sound like a broken record, but the hitboxes in Dark Souls III are also a breath of fresh air after playing around with DS2 for even a few hours. Enemies will swing their weapons just inches away from the player mid-roll, and not once in my playtime have I been tagged by an attack that I felt should have missed me altogether. Basically, the hitboxes slap (or don’t slap, depending on your perspective / skill. They’re good.)
The vast majority of the game was built from the ground up with parrying in mind, which creates an interesting dynamic between the bosses and the player where each fight can be absolutely demolished with nothing but a shield and a dagger. In my opinion, parrying fights are extraordinarily interesting in that they feel like duels between the player and a master-at-arms, as opposed to a Monster Hunter-like “dodge n’ roll” hack-fest.
This point is personal, but the more parrying fights in the DS games, the more I seem to like them. And DS3 has some extremely satisfying parryable bosses.
This point has to do with world building, which we’ve already covered, but it’s important enough to mention on its own: Firelink Shrine. Each Dark Souls game has its own world hub, a “center” for the player to return to and find new NPCs, spells, weapons, companions, foes, etc., etc. Dark Souls I built the entire world around its center, so whenever it was worth going back to Firelink Shrine, the player was likely to pass through it anyway as they traveled up, down, and across the map.
Dark Souls II and III allowed the player to teleport straight to the hub for the sake of convenience, and the world building suffered for it. The world-building strengths of each game can be summarized by looking at the nature of its Firelink Shrine or “center”. Dark Souls I’s hub is not only of paramount importance to the player for interacting with NPCs and gaining functional lore hints, but also connects to shortcuts running through the rest of the world.
Dark Souls II left most of that behind, but did manage to incorporate a drop from the center into a pivotal part of the game and kept the NPCs around.
Dark Souls III kept the NPCs around but did nothing to make Firelink Shrine important to the world at all, except for the fact that you travel to the final boss from it. That sounds important, but in the grand scheme of Dark Souls games it feels more like a smart use of space on the developers’ part than an effort to make Firelink Shrine a well-incorporated asset of the game, as opposed to a necessary world hub cobbled together by virtue of tradition.
This is hardly noticeable, and that’s because all that’s noticeable in Dark Souls III is the refinement. Refined movement, refined combat, refined hitboxes, refined boss design, and refined world travel. And what Dark Souls III boasts in these feats, it loses in world design and wholesome traversal. Some of us in the gaming community would call this “soul” for simplicity’s sake, though I’d ask you to excuse the unintended pun.
Roguelite, roleplay, and repetition. The Dark Souls Trilogy nails all three. Let’s talk about how and why the games’ strengths have such an addicting effect on those who play them.
First at bat is “Roguelite”: a subgenre of “Roguelike” video games, which are games like Rogue. That sentence looks disgusting, but I promise everything you need is in there. Just to be clear, Rogue is a classic game that popularized the top-down, dungeon-crawling, random-looting, perma-death kind of game. Roguelite video games, collectively, fire from the hip when making associations between themselves and Rogue. That is, some Roguelites will have Rogue’s perma-death mechanic, while others will be dungeon crawlers, and some will be random looters. Dark Souls as a trilogy is a very mild mix of all three. For the purposes of this article, let’s focus on Dark Souls 1 as an example.
Dark Souls 1 features Roguelite mechanics in the form of randomized loot drops; a severely punishing checkpoint system that, thanks to the game’s difficulty, approaches the punitive feeling of perma-death in other games; and, looked at from a certain perspective, an open world that can read as multiple large dungeons to clear. Again, some of these descriptions can be contested, but the point is that Dark Souls very loosely shares some of its mechanics with Rogue, even if indirectly.
This serves as the foundation for the game’s addictive nature. Because the game features randomized loot drops, a punitive death system, and the sprawling feel of a large dungeon in need of clearing bit by bit, every single action taken by the player feels like improvement. Dying might mean a reset to the last checkpoint (bonfire), but some information about the AI might be gained, giving the player an edge the next time around. Or perhaps some random loot dropped before dying gives the player character (PC) stronger weapons or tools to overcome an obstacle. All of this works to create the feeling of tangible improvement even when no ground has been gained in-game.
Dark Souls 1 leans into these strengths by making it very clear that a new player should expect to be lost, at a disadvantage, and to die often. The correct direction is often unclear, and the promise of another bonfire after clearing a particularly difficult area is non-existent. In short, these aspects of Dark Souls make each section feel like an entire game on its own, something to be celebrated after conquering. And that feeling of victory is strong thanks to how brutal the game can be to the uninitiated.
At the start of Dark Souls, you’re asked to pick your starting character. All characters can become more or less the same eventually, but they start out very different and reflect the player’s wishes about what kind of role to pursue. Longtime readers of mine will know that I’m big into the roleplaying genre, so it only makes sense to me that being able to roleplay in Dark Souls would enhance whatever strengths were exhibited in the rest of the game.
The benefits of roleplaying in a game that asks you to overcome extremely difficult enemies are multi-faceted. The fact that your character is teching into a specific playstyle means you’ll be stronger at one or two particular forms of offense / defense, making your playstyle easier to navigate and enemies easier to take down, as opposed to running a complicated, expensive, and mediocre playstyle that branches out over all of the game’s content. Additionally, this type of roleplaying makes each playthrough unique from the one that came before it: a mage build will vary from a melee-mage build, which will vary from a melee build, which will vary from a faith-based spellcaster build, and so on.
Effectively speaking, this provides Dark Souls with that “itch” every player feels when getting roughly halfway through a playthrough they weren’t prepared for. That “itch” often comes in the form of a thought like “Oh my god, if only I had spent my souls a little more effectively, I could get through this part easily!” or “If I had just upgraded this weapon instead of that weapon, my character would be far stronger!” THAT thought right there is the source of your addiction, believe it or not.
And what do we do when we have that thought? Generally speaking, I’m willing to bet that most of us play for another hour at most before rerolling our character to be slightly (or greatly) improved. After beating the game for the first time, that’s certainly how I’d play. And it’s all thanks to the roleplaying elements in Dark Souls’ foundations, helping to differentiate one playthrough from the next.
There’s one key aspect of Dark Souls that ties the roguelite and roleplay elements together like the ribbon on the greatest present ever devised: the game’s repetition. More specifically, its rhythmic gameplay. First and foremost, it goes without saying that Dark Souls has extremely addicting mechanics. The satisfaction of overcoming a boss or difficult foe by making use of the smooth combat system is simply unmatched by 99.99% of games in existence. There’s no contest in that regard.
But more specifically, the rhythmic gameplay is ever-present yet ultimately lost on a great many players, which is why the source of the addiction Dark Souls seeps into its playerbase often remains unseen: everything from walking, running, attacking, dodging, leveling, and blocking functions in tandem with the rhythm programmed right into your opponent’s moves. From attack patterns and mobility to defensive habits and AoE spells, each enemy obeys a rhythm that allows the player to react. This rhythm tends to become quicker in the more recent FromSoftware games, but it’s still a rhythm nonetheless. And this rhythm is the key to Dark Souls’ satisfying repetition of gameplay.
This isn’t to say that the gameplay itself is repetitive. The game features a constant, satisfying rhythm that repeats itself and provides the gameplay, which varies wildly, with enough breathing room to be fun all the way through a 50 hour playthrough (or three).
Roguelite mechanics, roleplaying foundations, and a masterclass in satisfying repetition are all you need to search for when tracing the source of your addiction to Dark Souls. The real question after all that is deciding whether or not it’s worth cutting it out of your life. (Just one more playthrough?)
In today’s world, it’s an inarguable fact that the niche genres of gaming are no longer the sole territory of the die-hard fans of old. These days, FPS fanatics are getting their hands on MOBAs and casual clickers are delving into the world of role-playing games (RPGs). That latter genre is the one I’m most focused on these days, as it seems every other game that comes out tries to incorporate RPG elements into its foundations. From AAA titles to small indie passion projects, what I find is that at some point the player is asked to manage skill points, profession levels, or, at the very least, stick to a character archetype defined at the beginning of the game. Very rarely is one of these aspects of role-playing missing from a title, and when it is, it’s generally because it was purposely left out to hone in on the “action” aspect of a particular game, such as a shooter that wants to simplify its gameplay loop.
For the games that do intend to have RPG elements baked into their core, there is a single attribute, or ingredient, that’s key to making the player feel engaged with playing a role: motivation. No shit, right? Most developers, and gamers, think that good mechanics and quality writing create the motivation to engage with a roleplaying game, but this is an illusion. The mechanics complement certain functions of the game, such as combat. The writing keeps the player engaged in the story and character development. But the motivation to continue engaging with an RPG comes down to the way the game forces the player’s hand.
That might sound counterintuitive for a genre that prides itself on player freedom, but, implemented correctly, the mechanics that constrain a player’s gameplay will be unseen and non-obstructive, and will encourage the player to stick to the playstyle or story they’ve crafted for themselves without making them feel like they’re missing out on anything.
The Elder Scrolls, For Example
The Elder Scrolls games all have unique attributes and their own take on how to implement this kind of system. Let’s take Morrowind, Oblivion, and Skyrim as examples and see how each one strategized motivating the player to roleplay.
Morrowind opted to give the player total freedom in the open world by making sure that enemies, in the vast majority of cases, were not leveled to the character. This means that a level 1 character is going to find the exact same kinds of enemies in the open world and in the majority of dungeons as a level 20 character. This is key for Morrowind because it means a level 10 character who has teched into a melee-focused build won’t mind taking time out of their playthrough for some magic training to change up their gameplay, since the enemies they’ll face won’t get any harder due to the increased levels in side skills.
Since the open world wasn’t the source of motivation for players to stick to their archetypes, Bethesda incorporated the motivation into faction and house requirements. For those who haven’t played Morrowind, the game features factions and native houses that offer quests to the player but lock themselves off behind certain requirements: one house might have Speechcraft requirements, while another faction might have Strength and melee combat requirements. This means a player focused on one kind of build is likely to be guided to a relevant house or faction by the game’s NPCs. And because getting to the end of those questlines is almost always in the player’s best interest, they feel motivated to stick with their build to unlock the next set of ranks and quests.
In short, Morrowind opted for a sociopolitical solution to its motivational factor and made the player feel naturally inclined to stick with their designated skill set, even though following a difficulty curve was never a requirement.
Oblivion, on the other hand, opted for the opposite approach. Instead of motivating the character through subtlety via house and faction requirements, Oblivion removed the quest requirements altogether and just made the enemies in the open world level with the character. So a level 10 character is going to find much stronger enemies in the open world and its associated dungeons than a level 1 character. This means that a character who is teched into melee combat isn’t going to want to step aside for some unnecessary magic training, as the extra levels in skills he or she isn’t likely to use will result in stronger enemies for no gain on the player’s part.
This design opened up the possibility for specific character archetypes to 100% all of the game’s content without being locked out by skill requirements, but it resulted in a far more punishing leveling system that made use of the difficulty slider (to account for stronger enemies) far more necessary for new players.
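The two scaling philosophies described above can be boiled down to a toy sketch. To be clear, the numbers and functions here are invented for illustration; neither game's actual formulas are anywhere near this simple:

```python
# Toy contrast between an unleveled (Morrowind-style) and a leveled
# (Oblivion-style) world. All values are invented for illustration.

def morrowind_enemy_level(player_level: int, zone_base: int) -> int:
    # Unleveled world: enemy strength is fixed per zone, so levels
    # spent on off-build side skills cost the player nothing.
    return zone_base

def oblivion_enemy_level(player_level: int, zone_base: int) -> int:
    # Leveled world: enemies scale with total character level, so
    # "wasted" levels in unused skills still make every foe stronger.
    return zone_base + player_level

# A melee character who detours into magic training and gains 5 extra
# levels with no combat payoff:
print(morrowind_enemy_level(15, 10))  # 10 -> same fights as before
print(oblivion_enemy_level(15, 10))   # 25 -> harder fights, no new power
```

The same detour is free in the first system and actively punished in the second, which is exactly the motivational lever each game pulls.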
Both of these methods have their assets and weaknesses, but both serve to “force” the player’s hand in some way. And those restrictions make the player feel far more invested in using the tools they’ve chosen for a single playthrough as opposed to all the tools possible, which is precisely what roleplaying is all about.
Since I mentioned the weaknesses in brief, a quick note about them: In Morrowind, the unleveled world is very, very forgiving to new players who might not have an optimal build, and will let them become overpowered gods no matter what. This is a strength. When it comes to veterans, however, knowledge of the systems at play means that leveling every skill at a rapid pace isn’t just a viable option, but the strongest option in every playthrough with no downsides. From a roleplaying perspective, this is a weakness. In Oblivion’s case, the opposite is true: newbies are very likely to have a punishing first playthrough in a leveled world that cares not if you understand how to make a viable build. This is a weakness. For veterans, though, the system is designed to make the playthroughs all individually unique, and it heavily motivates them to stick to their chosen skill set, thanks to the fact that the world will punish those who deviate too much. This is a strength.
Which version appeals to you more will depend on your taste and which game’s story and mechanics you prefer. That said, from a roleplaying perspective, both methods shown here are miles ahead of Skyrim’s method of motivation, which is no method at all.
You see, Skyrim opted to level its world similarly to how Oblivion did, but also advertised itself as a game that allows the player to choose whatever build path they like and deviate as much as they want. A game of “total freedom”. The reality is that the leveled world meant total freedom was absolutely not a viable build choice like it was in Morrowind. And thanks to the fact that all of Skyrim’s crafting skills were necessary to keeping a build viable on expert difficulty and above, the player is not only forced into teching into their chosen playstyle (not as advertised) but is also forced into using Alchemy, Enchanting, and Smithing to keep their power at reasonable odds with the AI. So that’s three forced skills on top of the combat skills used in a player’s build. Needless to say, this is the absolute wrong way to do an RPG.
Where Skyrim succeeds in pushing the mechanics of the medium, it fails to deliver on the roleplaying elements of Oblivion and Morrowind by not motivating the player to actually roleplay in any true sense of the word. And it definitely didn’t succeed in matching Morrowind’s writing or story (though that’s a topic for another article).
For the RPG, limiting the player’s choice isn’t about locking the player out of content or skills, but imploring them to dive head first into their character. In a way, it’s about limiting what a player wants to do without forcing them to do it. It’s a difficult balance to achieve, and while Oblivion and Morrowind managed to find two inventive ways of managing the systems, I eagerly await what future developers might theorize into practice for the genre’s future.
I’m going to come right out of the gate with my hands up and say that I have zero personal experience in game development. From the concept stages all the way out to shipping, there isn’t a single place where you could point out a spot where I’d know what to do beyond playing the actual product. That said, there’s a simple truth that any purveyor of passionate gaming can attest to, and that’s the consistently inconsistent nature of contemporary game development.
Now there’s a lot to be said on this topic, and while some of its more contentious facets can be, and will be, debated over the next decade, I’m going to lead with the obvious stuff first. Namely, studios’ insistence on setting time constraints that aren’t feasible. Starfield is the timely example here: Bethesda’s soon-to-be (presumably) released title that has been pushed back from November of 2022 all the way to sometime, anytime, in 2023.
There’s absolutely no issue with a studio pushing a date back to make their product better. I’m all for it. What I’m not for is this being a standard that studios lean on to garner early hype for their release, and that’s exactly what AAA devs have begun doing. After Cyberpunk 2077’s abysmal release (which saw great commercial success), the powers that be noticed that you can create an almost artificial stream of hype trains by lying your ass off when it comes to how realistic the buyer’s time expectations could be. That is, when gamers are told a game is to be released in 2022, studios have an interest in breaking that promise to create free advertising through the click-bait sites that will jump at the opportunity to include some SEO titles in their diet.
“Game delayed once again, click here in disappointment!”
And we do. We always click. And we do it because our brains are hardwired to see this news as something that is unexpected, dramatic, and might have some wild story behind it. Are the developers looking out for their clientele by making their product better, or is this delay a story of devs who are in over their heads and are desperate to make contact with some form of the game they promised? The reality today is that this is a false dichotomy: Bethesda Studios, with all of their experience, knows one year in advance whether or not they can have a finished Starfield that they’ve been planning for the last five, developing for three.
The pessimistic nature of this argument might look manufactured or at least tilted at a glance, but once you entertain the concept of a studio listing a release date that they are more than likely to beat, you see how silly it is that nearly every major release features a date that is almost assuredly going to be delayed. In what way would setting a release date that gives developers an extra 6 months to a year beyond “standard measurements” ruin the product? Some might say it would make the product better in most cases, but maybe you think that the devs will just work slower and eat into profits more.
Of course, this isn’t an argument genuinely worth considering, since studios (or, more accurately, the private holders of the studios) know in advance that they will be delaying their title. But pretending that isn’t the case, there isn’t a reason that these studios can’t set internal product dates to attempt to beat their announced release ahead of time. A title slated to release in December of 2024 that announces a 6-month-early release in January of 2024 is absolutely one worth picking up; it’s actually the one you can put the most money on as being a well-made, completed product.
But again, money is what talks here, and gamers themselves are the ones pouring out extra cash for pre-orders and goodies before playing the fucking game. So I can’t really be too critical of the devs, their studio, or the studio’s shareholders because they are just following the statistics. I’m almost tempted to change the name of this article to “YOUR standards need to change” since it would probably be more accurate, more to the point, but alas, much like the studios referenced here, I realize that accuracy isn’t the best measure for success.
As for the more common talking points, there are the pre-orders, micro-transactions, 70 dollar price tags, and pay to win schemes. It may surprise you to know that I don’t actually have a problem with micro-transactions at all. Nor do I have an issue with 70 dollar price tags. The gaming industry has done a really good job of keeping the cost of owning a AAA title down to 60 dollars for a very, very long time and I see no problem with them trying to find ways of including in-game content to personalize a gamer’s experience if they are willing to pay for it or simply adding 10 dollars to huge releases as a means of keeping up with increasing costs of development. Games are expensive, and games can bomb on the market. When you have a good release, you need to make the most of it and I’m not a person who hates money, so go for it, I say.
As for pay-to-win, there are obviously issues with AAA titles releasing content that gives players advantages after they’ve already shelled out over 50 bucks to own the title, but in the majority of cases these schemes rely on “whales”, or people who are willing to shell out disgusting amounts of cash to get their hits of dopamine. And when I say disgusting, I don’t mean 50 bucks over the course of three years, I mean hundreds of dollars per month or more on a game that is only feasibly playable by spending money. This is a problem, but it’s more a problem of gambling addiction and a general loss of purpose in life than it is a problem of game development: and these are topics that are far more important and extend beyond the grasp of this article.
When it comes to the infamous pre-order, I am totally against them. Buy micro-transactions, spend 70 bucks on a game, but do not, ever, under any circumstance pre-order a game. In fact, don’t pre-order anything except for food by necessity. Buying a game before it’s been released, before it’s been tested, reviewed, and picked apart by casuals and experts alike, is not only a bad investment of time and money for you, but it sets a poor precedent for the studios behind the product.
Obviously this is a point that’s just about fifteen years too late, but the pre-order culture is such a strange phenomenon because we live in a time where you don’t need to stand in line to get the game you want. Your title awaits you via online purchases and downloads and is in infinite supply: you don’t need to save your slot, I promise! And yet, AAA studios manage to suck in the potential customers early by offering small care packages with the purchase. Like selling toys for a franchise you like, but you’d have to buy a ticket to see the movie before getting access to them, and the ticket costs 60 dollars to boot. Like, stop. Stop pre-ordering games. You have ZERO clue if the game will be good, if you will like it, or if you even have the time to play it before finishing the other five games sitting in your library that you bought on sale two months ago. Just stop it, please.
The only way the consumer can punish the developer is with their money, and if you give up your cash early, the developer has zero impetus to release a finished, high quality product instead of just patching it into “reasonable” over time. Just look at the Battlefront remakes: tons of talent behind their production, and terrible quality control on release. EA Games, pre-order everything.
The development culture of today needs to be reminded that beta testing happens before release, not during. It also needs to stop leaning on false release dates and promises to generate artificial hype. And the only way that will happen is if enthusiasts learn to vote with their cash again. So, yeah, that shit is never gunna happen, thanks for reading.
It’s a bit strange to look around the NVIDIA market stalls and AMD hand-me-downs and see prices at MSRP or lower. It was only six months ago that I was writing about how these companies may very well try to keep the outrageous x2 / x3 pricing on their GPUs (I really should just call the relevant hardware “graphics cards”) around so long that they become the new norm but, thankfully, you can pick up an RTX 3070 for under 500 USD. Half a grand for some extra frames in Elden Ring is a dream come true, I know, you don’t need to tell me.
It’s a bit surreal to see normal pricing on what has been a market starved of reasonable price to performance ratios since the Covid pandemic. For some scale, the computer I’m writing this article on was an emergency upgrade after my last PC began to show its age (I was running an FX 8350 on that puppy). For 1,300 USD, I picked up a Ryzen 3600 paired with a GTX 1660 Super and 16GB of RAM. At the time, this was the best I could do, as the year was not 2022. Through and through, this PC rang true, and why I decided to throw in some poorly thought-out rhymes, I have no clue.
Suffice it to say that the performance I got for the price it was listed at makes me cringe today. For that same 1300 USD, I could pick up a Ryzen 5800X3D and at least an RTX 3060. My video editing would be miles smoother and my gaming experiences, well, smoother. The PC would be smoother. Smoothie-like.
The thing is, I don’t regret the purchase at all: it was much needed. All the same, I do stare at the market today and swallow my chagrin as I imagine all of the folks picking up quality builds for far less than I paid just a couple years ago. All of that is beside the ultimate point of this article: I believe that GPU improvements are becoming obsolete to the average PC user.
The Improvement of Performance Doesn’t Outweigh the Increased Cost of Power
Even if you have zero concern for the limited amount of power humanity has access to, or the effect on the environment that this power usage has, the improvements being made to the GPU today are enthralling on a technical level but also worrying for those who don’t want their water cooling methods to consist of a full outdoor pool in the dead of winter. That is to say that the amount of power that the RTX 4090 is pulling for standard use is so high that the dissipated heat calls for increasingly powerful cooling solutions, which raises the overall power consumption to boot. If the 1,500 USD price for that GPU alone wasn’t enough to deter you, the electricity bill and extra costs associated with an expensive water cooling solution probably will be.
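To put a rough number on the electricity side of that argument, here’s a back-of-the-envelope sketch in Python. The wattage, daily hours, and per-kWh price below are illustrative assumptions, not measured figures, so treat the output as a ballpark rather than a bill:

```python
# Hedged sketch: estimate the monthly electricity cost of running a GPU
# under load. All input figures are assumptions for illustration.

def monthly_power_cost(watts, hours_per_day, usd_per_kwh, days=30):
    """Return the estimated monthly electricity cost in USD."""
    kwh = watts / 1000 * hours_per_day * days  # energy used over the month
    return kwh * usd_per_kwh

# Assumed figures: ~450 W draw under load, 4 hours of gaming per day,
# and $0.15 per kWh.
cost = monthly_power_cost(450, 4, 0.15)
print(f"~${cost:.2f}/month")  # roughly $8.10/month for the card alone
```

Not ruinous on its own, but it stacks on top of cooling hardware, the rest of the system’s draw, and regional electricity prices that can easily double that rate.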
This is to say nothing of the actual use cases of these new and improved graphics cards. I mean, sure, the 3000 series cards were objectively great to have upon release, and every PC in existence benefited from having one. That said, since the 4000 series have released, I haven’t been able to think of any game, AAA or otherwise, where someone playing could stop and think “Damn it! I only have a 3080 GPU!”
I know this is the old-timer’s trap of believing that PCs will never need more memory or performance beyond what they already have, but honestly I don’t see the need to have the performance-to-power improvements be as sharp as they are in today’s world. Most people think that the improvements of today allow developers to accomplish more tomorrow, but the evidence I have supports the idea that the improvements of today allow the developers to act a tad lazier, forever. Why optimize your game properly when your playerbase’s hardware picks up the slack for you, right? Looking at you, Elden Bling.
I realize that none of this matters to the science community or the crypto miners who don’t realize they are throwing away cash on electricity bills, as they do still benefit from having more complicated problems solved faster and faster. But for gamers? For average users? The GPU improvements that we’ll see over the next decade are likely to be way, way more than the vast majority of the market requires. While today you see budget users running graphics solutions one or two generations old, I predict a future where that gap increases to at least a few generations.
This means that new releases will start to be stricter on product numbers, with a slower release of stock into the public sphere to maintain a constant cash flow. In some sense, the new releases of GPUs will be like server-based CPU releases in that they are meant for a niche audience that needs them for something other than personal use. This is all speculation, obviously, but speculation rooted in the likelihood that the energy costs and headache paired with building custom cooling solutions are too much for PC users to bother with once every couple of years.
Cheers and fears, may the earth burn so that my stream can obtain 4k resolutions and, no, I won’t be editing this article as I’m too busy enjoying League’s preseason (Gwen is sleeper OP).
It’s clunky, it’s ugly, it has an ancient combat system, it’s plagued with bots, and it functions on a tick system that was probably made to allow 500 MHz CPUs to act as servers for its player-base way back in 2001. So why is Oldschool Runescape still the best Massively Multiplayer Online game to ever grace the world of gaming?
When compared to some of the more “contemporary” titles like New World and World of Warcraft, there isn’t really anything on paper that sets OSRS above its peers. The aforementioned attributes of the game are decidedly worse forms of game design in terms of what players say they want, all the way down to the tick system. That said, I’d argue that giving players what they say they want is probably not in a studio’s best interest at all. In fact, each of these attributes that OSRS sports is something that I can, and will, argue for as a positive in OSRS’s favor. Additionally, these attributes can be weighed against its peers’ alternate forms of game design as a way to figure out what happens when developers opt for the modern approach to MMO creation.
First, I’d like to look at the art style of OSRS. The graphics. The big ol’ ugly looking models. To be perfectly honest, I can see where people come from when they say the graphics are a huge turn off to even trying this game. It’s not only simple, but to the untrained eye it’s entirely ugly as well. Simple little characters running around with simple little animations. Disgusting, right? Well, I’m going to ignore the argument that this art style is in any way “charming”. Not because it isn’t charming, because I do find that it’s grown on me since my childhood, but because you don’t need to argue in favor of something so subjective to make a strong case for this art style working as an asset of OSRS’s design.
When I look at a modern game, I see a product that’s fighting with itself. Oftentimes, the graphics on display are the product of an art team that has to spend months creating what’s in front of the player, if not years. In OSRS’s case, this is time that’s instead spent mostly on the content of the gameplay itself. I don’t know how long it takes to model a new NPC for an OSRS update, but I can’t imagine it’s more than a month, at the most. But when I look at all of the different models and animations seen in a game like WoW, I can’t help but feel like the prettiness of it all comes at the cost of resources that could have been spent on making the game actually fun to play.
Don’t get me wrong, both OSRS and WoW can be extremely fun to play for different kinds of people. But I can’t help but feel that the difference between the two is that most people actually enjoy taking part in the content that OSRS has to offer, while WoW players are simply conditioned from a bygone era to grind out every expansion that comes out. That’s an opinion that will divide the room, I realize, but it’s mine all the same. OSRS has content that is its own goal, while WoW has rewards for completing content that usually can’t stand on its own two legs. This is true when looking at a game like New World as well. Great concept, great artwork, absolutely dreadful delivery that only appealed to those players who get satisfaction out of grinding out goals for their own sake instead of actually enjoying the game. I mean did you even try PvP in New World? Ever? Trashimus Maximus, it was.
Tick Rates and Responsive Design
OSRS functions on a tick system, just like any other online game. The tick rate is the rate at which the server of the game updates the state of things. A game like CS:GO, for example, functions on 64-tick servers. This means that every second, the positions of players, grenades, and shots fired are updated 64 times. This is essential in a game like CS:GO where things happen rapidly. OSRS, on the other hand, updates its server once every .6 seconds. So it’s less than a 2-tick server, and yet it has fully functional PvP and high skill ceiling PvE aspects. How so? Well, as it happens, the tick rate of OSRS has created an asset for the game in the form of being able to produce heavily controlled content. The developers are able to accurately assess when a portion of the game becomes too difficult for the allotted skill requirements to partake in it because they know down to every .6 seconds when a player will need to respond to a change in the content. 3-tick fishing, tri-brid PvP, skilling in general, and high level PvE content are all things that benefit from a low tick rate strictly because the developers are smart enough to plan their content around it. With a game like WoW, I’m not going to argue that a higher tick-rate server is worse, because it’s clearly not. I just don’t think OSRS’s low tick rate is a negative, I think it’s strictly a positive that offers Jagex devs the opportunity to plan content out in a way that no other devs are able to.
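The tick arithmetic above is easy to sketch. A minimal Python snippet (purely illustrative, using the figures quoted in the paragraph) converts between ticks per second and seconds per tick:

```python
# Converting between tick rate (updates per second) and tick duration
# (seconds between updates) for the two games discussed above.

def tick_duration(ticks_per_second):
    """Seconds between server updates for a given tick rate."""
    return 1 / ticks_per_second

def tick_rate(seconds_per_tick):
    """Server updates per second for a given tick duration."""
    return 1 / seconds_per_tick

# CS:GO's 64-tick servers update roughly every 15.6 milliseconds...
print(tick_duration(64))          # 0.015625 seconds per tick
# ...while OSRS's 0.6-second tick works out to well under 2 updates per second.
print(round(tick_rate(0.6), 2))   # ~1.67 ticks per second
```

That 0.6-second window is the whole design constraint: every input, animation, and damage roll in OSRS resolves on those boundaries, which is what lets the devs reason about content with such precision.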
The Combat System
On a surface level, even the most inexperienced gamer can understand OSRS’s combat design. Click the thing, you hit the thing, it hits back, repeat automatically. Simple. The thing is, OSRS also incorporates its prayer system into combat fairly quickly into a player’s journey, which means a player is now clicking on a thing and also a prayer book’s desired effect before letting the combat automate. Still simple.
But, you know, OSRS uses positioning fairly often in medium to high level bossing. So pretty soon, one finds themselves in need of clicking the proper prayers, repositioning, using utility items, and getting their hits in whenever they have .6 seconds, or one tick, to spare. The belabored point being that, despite OSRS’s combat looking rather simple at a glance, it has a huge amount of depth that would put many MMO players to the test just to become half-decent at it.
It used to be the case that you didn’t play OSRS for the combat, but with the community’s ingenuity and the devs’ grand boss design, that’s all but changed, as the combat is a huge draw for many players. This is to say nothing of the aforementioned tri-brid PvP aspect of the game, which has players making use of alternate prayer flicks, utility items, weapons, and armor against other players. This means that a player tri-bridding might be clicking 5 times accurately to swap 5 pieces of gear out, two times for prayer switches, and yet another to switch weapons to something that might be more effective against what they believe their opponent will be switching to, all in under .6 seconds, ideally. Then they do it again. When it comes to this PvP, it takes a special kind of talent to master the art. I honestly believe tri-bridding is something that cannot come naturally to 99.99% of people; it takes hundreds, if not thousands, of hours just to become not terrible at it.
The Game Itself
The game itself is unlike any MMO out there today. Where the quests you’d normally find in an MMO like New World are obnoxious fetch quests that blend together boringly, the quests in OSRS are outstanding experiences. The writing is witty, the content in the quests is unique, and the rewards aren’t just disgusting amounts of XP required to get your next level, but instead one-of-a-kind items or access to new methods of playing that completely alter how convenient it is to play a certain way.
The skilling can be click intensive and difficult, or relaxed and AFK, depending on your mood. And the game is built in such a way that, although you’re playing with thousands of other people, you don’t need to interact with them to make your account stronger. The game has the resources required to be completely self sufficient built right into it. This is why many players opt to play as ‘iron’ men and women, to show that they refuse to trade with other people or get help during bossing. Although you could take part in the thriving in-game economy if you wanted to.
Speaking of in-game economy, the market in OSRS is directly tied to the progression of an account. In short, this means that when you go and level a skill like woodcutting, say, you can sell the logs you get from leveling directly into the in-game market. Using the wealth you’ve gained from that, you can buy fishing bait and a rod to train your fishing. Then, afterward, you can sell the fish and buy weapons to train combat skills and get valuable drops. And so the cycle continues. The in-game economy of most MMOs throws early-game content right out the window; not so in OSRS. Instead, OSRS’s economy thrives for a level 3 player in just the same way it does for a maxed player, because everyone needs all kinds of resources, all the time.
OSRS is a great game. Go and play it, even if casually. Other MMOs tend to suck and they bore me. Don’t play them, even if casually.
When I first got into PC gaming, AMD was basically the only company I really cared about. Before I even knew what a dedicated GPU was, I learned that, with AMD, all I needed was a single AM4 socket motherboard and one of their cheap APUs and I’d be set with an entry level gaming rig that would satisfy a newbie PC gamer like myself.
And it did. Hell, I don’t even think I was running AM4 at that time. Back then, I was using an A-10 series APU. And boy, did I use that fuckin’ thing to its maximum. CS:GO? League? Sure. But what if I told you I would take that puppy and fire up ARMA 2’s DAYZ mod and happily play it at 25 FPS at 720p? Those were the days, I’ll tell you h-what.
These days, I can’t claim to be the most spoiled person in the world in regards to performance. Despite having a stronger knowledge of hardware, I haven’t actually cared to spend the money on any of the latest tech. I run a 1660 Super with a R5 3600 and, as far as I’m concerned, that’s all I need. 4k gaming? No thanks, not needed.
That said, I can stand to make note of the future of hardware’s direction even without direct experience with the stuffs, and I can stand taller yet in understanding that Intel’s ARC lineup of GPUs is, at best, going to act as a 2.5-billion-dollar graceless buffer between the dedicated GPU market and Intel’s ironing out the seams of their new product.
NVIDIA and AMD, who have both dug out respectable shares of the dedicated GPU market, have a new graphics card generation (supposedly) coming out in Q4 of this year. These newer cards are going to outdo the current circulation of cards by a considerable, unverified margin. NVIDIA’s 40 series GPUs are being produced on TSMC’s 4nm fabrication process and will retain a 300-400 dollar price point for the 4060 model. In today’s age, that performance is going to be more than enough for AAA gaming at 1080p (and likely 4k), and it’s going to do it at mid-range prices the likes of which we are only just starting to see again after the Covid-19 related silicon shortage. Meanwhile, we have yet to get a confirmed release date for the ARC graphics cards outside of a Korean laptop launch which played host to a plague of driver issues that have more or less dampened the hype surrounding the cards altogether.
When said cards do get a global release, they’re only just going to be able to compete with NVIDIA’s 30 series, as demonstrated by a somewhat barebones test released by Intel. This means that Intel will be a generation behind the competition, with product that is only just strong enough to compete with a product that’s already been in circulation (and thus, discounted heavily) for more than a year.
It’s a real shame, too, since having a third tried and true company to put the heat on AMD and NVIDIA in the graphics department would be a real win for consumers everywhere. Hell, it would probably be a huge win for the world as a whole when you consider the cementing of a stronger Intel fab process and cheaper market prices, alongside stronger consumer and server-based computing performance from all three companies to boot (assuming they can all keep pace).
The barrier between that distant, preferable reality and the one we may be in, which sees Intel eat their losses in full, is nothing short of a few tall cliffs that need climbing. In short, Intel’s ARC GPUs are something I desperately want to stick in the marketplace and pave the way for a consistent showing against NVIDIA and AMD. To do that, they’re going to need to be cheaper than the already discounted RTX and RX cards, strong enough to pass up on the new 40 and 7000 series cards, and also be void of any compatibility issues at global launch.
Basically, the odds are against Intel, but we’d all be better off if they could cause an upset.