
How to build libaom-AV1 to be as fast as possible, compile it with the new grain synthesis options, and make rav1e faster

Hello everybody. As you already know, even with tuned settings, encoding in AV1 can be quite slow, so optimizing the current AV1 encoders to be as fast as possible is very useful, as even a 5% speed increase is a very nice improvement over time.
In this post, I will be discussing three things: how to build libaom-AV1 faster via compiler optimizations on Ubuntu 20.04+, compiling it with the new grain synthesis options, and making rav1e faster (along with some tricks to increase its efficiency nicely).
Let’s start with compiler optimizations. To compile aom on Ubuntu distros, you will need cmake, git, perl, yasm, nasm, and python3. To install them all at once on Ubuntu 20.04+, run:
sudo apt install cmake git perl yasm nasm python3
Compiling aom itself is quite easy once you know what to do, but since we’re doing compiler optimizations, it’s important to clarify some of the steps, since some of them require going outside of the terminal (or not).
Let’s explain what some of the more involved steps do: the git fetch step pulls in the changes for the new grain synthesis option and patches the current build of aom with them, as they have not yet been merged into master.
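As a minimal sketch of that step (the Gerrit change ref below is a placeholder, not the real one; substitute the actual grain synthesis change from aomedia's Gerrit):
git clone https://aomedia.googlesource.com/aom
cd aom
git fetch https://aomedia.googlesource.com/aom refs/changes/XX/XXXXX/X
git checkout FETCH_HEAD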
The CMake part is used to configure what options to pass at compile time. -DBUILD_SHARED_LIBS=0 makes sure not to compile aom with shared libraries, so that some of the later options work. -DCMAKE_BUILD_TYPE=Release makes it so that the -O3 compiler optimizations for C and C++ code are applied. Then you have the more important compiler flags for CMAKE_C_FLAGS and CMAKE_CXX_FLAGS: -flto -O3 -march=znver2.
What these do: -flto activates LTO (link-time optimization), which optimizes across files at link time and removes some unneeded code, making the final executable more efficient; -O3 is technically redundant here, but is passed to make sure the O3 optimizations are still applied; and -march tunes the compiler for a specific CPU architecture, which is Zen 2 in this case (this can provide a 1-2% boost in performance overall, and is usually closer to 1% for video encoders due to their hand-written assembly optimizations). You can also just use -march=native if you’re not sharing the binary.
The final -DCMAKE_EXE_LINKER_FLAGS_INIT="-flto=8 -static" is used to specify LTO in the linker flags, and to make sure aom is built statically.
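Putting those options together, the full configure and build step might look like this (a sketch, assuming the aom source sits in ./aom and you build from a sibling directory):
mkdir aom_build && cd aom_build
cmake ../aom -DBUILD_SHARED_LIBS=0 -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_FLAGS="-flto -O3 -march=znver2" -DCMAKE_CXX_FLAGS="-flto -O3 -march=znver2" -DCMAKE_EXE_LINKER_FLAGS_INIT="-flto=8 -static"
make -j$(nproc)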
However, to apply these optimizations correctly, you also need to export some values into LDFLAGS. On Ubuntu 20.04+ (and probably older versions of Ubuntu), enable “Viewing hidden files” in your favorite GUI file manager. You will then see a file called “.profile”, which is where you’ll need to copy this line:
export CFLAGS="-flto -O3 -march=znver2" CXXFLAGS="-flto -O3 -march=znver2" LDFLAGS="-flto -O3 -march=znver2"
The -march flag can be changed to native if you only plan to use this on your machine, or znver2 for Zen 2, znver1 for Zen 1 and Zen+, and skylake for 6th to 10th Gen Intel Core CPUs.
Of course, there are some disadvantages to LTO: compiling will take longer and use more RAM, but the benefits are there.
 
Overall, on a Ryzen 7 3700X locked at 3.8GHz with 32GB of 3800MHz dual-channel RAM, running Kubuntu 20.10 on the 5.9 kernel and using the aom-2.0.0-954 build, I get these results for encoding a movie (The Lego Movie) with these settings:
--threads=4 --tile-columns=1 --tile-rows=0 --cpu-used=6 --good --end-usage=q --cq-level=20 --arnr-strength=4 --arnr-maxframes=10 --enable-fwd-kf=1 --lag-in-frames=35 --bit-depth=10 --aq-mode=0 --enable-keyframe-filtering=2 --enable-qm=1 --enable-chroma-deltaq=1 --mv-cost-upd-freq=2 --enable-dnl-denoising=0 --denoise-noise-level=6 --disable-trellis-quant=3 
 
Encode with normal release optimizations: 366.3 minutes
Encode with normal release optimizations and -march optimizations: 361.7 minutes
Encode with -march, release optimizations (-O3) and LTO: 342.7 minutes
 
As you can see, all these improvements add up to a 7% speed increase, with most of the speedup coming from LTO (about 5%). There are other optimizations that could still be done to further increase the speed of AV1 encoders, like 2-pass compiling or PGO, featured here: https://old.reddit.com/AV1/comments/i3okaw/how_to_increase_libaoms_speed_by_37/
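For the curious, a bare-bones sketch of how PGO layers onto the flags above with GCC (these are standard GCC profile flags, not taken from the linked post; the profiling encode can be any short, representative clip):
export CFLAGS="-flto -O3 -march=znver2 -fprofile-generate"
# configure and build as above, then run a short representative encode to collect profile data
export CFLAGS="-flto -O3 -march=znver2 -fprofile-use"
# configure and build again; the compiler now optimizes using the collected profile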
Now, the main reason to use the new grain synthesis option is that it applies the grain synthesis in a different way. The original method takes the original stream and denoises it while analyzing the grain patterns to create a grain table (used to apply grain synthesis during decoding), and the encoder then encodes the denoised frames.
This method has 2 disadvantages:
  1. It makes the first-pass super slow.
  2. It denoises the image fed to the encoder, which means there can be some lost details.
The new method does mostly the same things, but actually deactivates the external denoiser entirely. This has 3 big advantages:
  1. It is an order of magnitude faster in the 1st pass – it’s actually the same speed as the default (no grain synthesis) behaviour.
  2. Detail loss is only present in the encoding process itself, which means at normal watching bitrates, this results in a non-negligible increase in detail retention.
  3. Unlike the normal grain synthesis method, it does not mess much with the rate control, which means a CQ22 file with this method will have about the same file size as a normal CQ22 encode. It does have the side effect that at very high bitrates, this can result in added grain, but the CQ usually needs to be really low for this to occur, even with native 10-bit content.
Comparison of all methods: https://slow.pics/c/pm3051Qc
It does have the disadvantage that at very low bitrates, it produces a slightly worse result, but this rarely occurs, so that’s not much of an issue.
To use the new feature in the aom build done above, use --enable-dnl-denoising=0 and the --denoise-noise-level=XX setting you want.
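For example, a minimal invocation might look like this (the file names and noise level are placeholders; pick a --denoise-noise-level that matches your source's grain):
aomenc --cpu-used=6 --end-usage=q --cq-level=20 --bit-depth=10 --enable-dnl-denoising=0 --denoise-noise-level=10 -o output.webm input.y4m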
Let’s get onto compiling rav1e, and building it to be faster. This is where it gets a bit harder. You’ll need to install a recent version of Rustup, which will install all the necessary dependencies to compile Rust programs.
To download and install rustup along with the other dependencies, just follow this link: https://www.rust-lang.org/learn/get-started
Or use this command directly:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
And follow what it tells you to do. After the installation is finished, log out and back in (or restart), and type in "rustc --version". If you have rustc 1.47.0 or newer, you’re golden.
Afterwards, download the rav1e git master (git clone https://github.com/xiph/rav1e.git rav1e). Go into the folder and find a file called “Cargo.toml”. Look for the “profile.release” section, and change the options to this:
 
[profile.release]
opt-level = 3
lto = true
Go back to the terminal, and you should be able to compile rav1e with the following commands:
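A minimal sketch of those commands, assuming a standard cargo release build with a manual copy as the final step:
cd rav1e
cargo build --release
sudo cp target/release/rav1e /usr/local/bin/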
This should build the rav1e executable, but not install it to the /usr/local/bin directory (in my experience) where most manually compiled programs are installed, which is why the last line is included. From all of this, you should have a decently faster rav1e.
Bonus for those who want to cross-compile aomenc for Windows on Linux(Ubuntu 20.04+) https://pastebin.com/7P7qr76c
submitted by BlueSwordM to AV1

The Challenges of Designing a Modern Skill, Part 3

Okay, Wendy’s or Walgreens or whoever, I don’t care who you are, you’re listening to the rest.

Introduction to Part 3

Welcome back one last time to “The Challenges of Designing a Modern Skill,” a series where we discuss all aspects of skill design and development. In Part 1, we talked about OSRS’s history with skills, and started the lengthy conversation on Skill Design Philosophy, including the concepts of Core, Expansion, and Integration. This latter topic consumed the entirety of Part 2 as well, which covered Rewards and Motivations, Progression, Buyables, as well as Unconstructive Arguments.
Which brings us to today, the final part of our discussion. In this Part 3, we’ll finish up Section 3 – Skill Design Philosophy, then move on to chat about the design and blog process. One last time, this discussion was intended to be a single post, but its length outgrew the post character limit twice. Therefore, it may be important to look at the previous two parts for clarity and context with certain terms. The final product, in its purest, aesthetic, and unbroken form, can be found here.

3-C – Skill Design Philosophy, Continued

3-12 - Balancing

What follows from the discussion about XP and costs, of course, is balancing: the bane of every developer. A company like Riot knows better than anyone that having too many factors to account for makes good balance impossible. Balancing new ideas appropriately is extremely challenging and requires a great respect for current content as discussed in Section 3-5 – Integration. Thankfully, in OSRS we only have three major balancing factors: Profit, XP Rate, and Intensity, and two minor factors: Risk and Leniency. These metrics must amount to some sense of balance (besides Leniency, which as we’ll see is the definition of anti-balance) in order for a piece of content to feel like it’s not breaking the system or rendering all your previous efforts meaningless. It’s also worth noting that there is usually a skill-specific limit to the numerical values of these metrics. For example, Runecrafting will never receive a training method that grants 200k xp/hr, while for Construction that’s easily on the lower end of the scale.
A basic model works better than words to describe these factors, and therefore, being the phenomenal artist that I am, I have constructed one, which I’ve dubbed “The Guthix Scale.” But I’ll be cruel and use words anyway.
  • Profit: how much you gain from a task, or how much you lose. Gain or loss can include resources, cosmetics, specialized currencies, good old gold pieces, or anything on that line.
  • XP Rate: how fast you gain XP.
  • Intensity: how much effort (click intensity), attention (reaction intensity), and thought (planning intensity) you need to put into the activity to perform it well.
  • Risk: how likely is the loss of your revenue and/or resource investment into the activity. Note that one must be careful with risk, as players are very good at abusing systems intended to encourage higher risk levels to minimize how much they’re actually risking.
  • Leniency: a measure for how imbalanced a piece of content can be before the public and/or Jagex nerfs it. Leniency serves as a simple modulator to help comprehend when the model breaks or bends in unnatural ways, and is usually determined by how enjoyable and abusable an activity is, such that players don’t want to cause an outrage over it. For example, Slayer has a high level of Leniency; people don’t mind that some Slayer tasks grant amazing XP Rates, great Profits, have middling Intensity, and low Risk. On the other hand, Runecrafting has low levels of Leniency; despite low Risk, many Runecrafting activities demand high Intensity for poor XP Rates and middling Profits.
In the end, don’t worry about applying specific numbers during the conceptual phase of your skill design. However, when describing an activity to your reader, it’s always useful if you give approximations, such as “high intensity” or “low risk,” so that they get an idea of the activity’s design goals as well as to guide the actual development of that activity. Don’t comment on the activity’s Leniency though, as that would be pretty pretentious and isn’t for you to determine anyway.

3-13 - Skill Bloat

What do the arts of weaving, tanning, sewing, spinning, pottery, glassmaking, jewellery, engraving, carving, chiselling, carpentry, and even painting have in common? In real life, there’s only so much crossover between these arts, but in Runescape they’re all simply Crafting.
The distinction between what deserves to be its own skill or instead tagged along to a current skill is often arbitrary; this is the great challenge of skill bloat. The fundamental question for many skill concepts is: does this skill have enough depth to stand on its own? The developers of 2006 felt that there was sufficient depth in Construction to make it something separate from Crafting, even if the latter could have covered the former. While there’s often no clean cut between these skills (why does making birdhouses use Crafting instead of Construction?), it is easy to see that Construction has found its own solid niche that would’ve been much too big to act as yet another Expansion of Crafting.
On the other hand, a skill with extremely limited scope and value perhaps should be thrown under the umbrella of a larger skill. Take Firemaking: it’s often asked why it deserves to be its own skill given how limited its uses are. This is one of those ideas that probably should have just been thrown under Crafting or even Woodcutting. But again, the developers who made early Runescape did not battle with the same ideas as the modern player; they simply felt like Firemaking was a good idea for a skill. Similarly, the number of topics that the Magic skill covers is so often broken down in other games, like Morrowind’s separation between Illusion, Conjuration, Alteration, Destruction, Mysticism, Restoration, Enchant, Alchemy (closer to Herblore), and Unarmored (closer to Strength and Defense). Why does Runescape not break Magic into more skills? The answer is simple: Magic was created with a much more limited scope in Runescape, and there has not been enough content in any specific magical category to justify another skill being born. But perhaps your skill concept seeks to address this; maybe your Enchantment skill takes the enchanting aspects of Magic away, expands the idea to include current imbues and newer content, and fully fleshes the idea out such that the Magic skill alone cannot contain it. Somewhat ironically, Magic used to be separated into Good and Evil Magic skills in Runescape Classic, but that is another topic.
So instead of arguments about what could be thrown under another skill’s umbrella, perhaps we should be asking: is there enough substance to this skill concept for it to stand on its own, outside of its current skill categorization? Of course, this leads to a whole other debate about how much content is enough for a skill idea to deserve individuality, but that would get too deep into specifics and is outside the scope of this discussion.

3-14 - Skill Endgame

Runescape has always been a sandbox MMO, but the original Runescape experience was built more or less with a specific endgame in mind: killing players and monsters. Take the Runescape Classic of 2001: you had all your regular combat skills, but even every other skill had an endgame whose goal was helping combat out. Fishing, Firemaking, and Cooking would provide necessary healing. Smithing and Crafting, along with their associated Gathering skill partners, served to gear you up. Combat was the simple endgame and most mechanics existed to serve that end.
However, since those first days, the changing endgame goals of players have promoted a vast expansion of the endgame goals of new content. For example, hitting a 99 in any non-combat skill is an endgame goal in itself for many players, completely separate from that skill’s combat relationship (if any). These goals have increased to aspects like cosmetic collections, pets, maxed stats, all quests completed, all diaries completed, all music tracks unlocked, a wealthy bank, the collection log, boss killcounts, and more. Whereas skills used to have a distinct part of a system that ultimately served combat, we now have a vast variety of endgame goals that a skill can be directed towards. You can even see a growth in this perspective as new skills were released up to 2007: Thieving mainly nets you valuable (or once valuable) items which have extremely flexible uses, and Construction has a strong emphasis on cosmetics for your POH.
So when designing your new skill, contemplate what the endgame of your skill looks like. For example, if you are proposing a Gathering skill, what is the Production skill tie-in, and what is the endgame goal of that Production skill? Maybe your new skill Spelunking has an endgame in gathering rare collectibles that can be shown off in your POH. Maybe your new skill Necromancy functions like a Support skill, giving you followers that help speed along resource gathering, and letting you move faster to the endgame goal of the respective Production skill. Whatever it is, a proper, clear, and unified view of an endgame goal helps a skill feel like it serves a distinct and valuable purpose. Note that this could mean that you require multiple skills to be released simultaneously for each to feed into each other and form an appropriate endgame. In that case, go for it – don’t make it a repeat of RS3’s Divination, a Gathering skill left hanging without the appropriate Production skill partner of Invention for over 2 years.
A good example of a skill with a direct endgame is… most of them. Combat is a well-accepted endgame, and traditionally, most skills are intended to lend a hand in combat whether by supplies or gear. A skill with a poor endgame would be Hunter: Hunter is so scattered in its ultimate endgame goals, trying to touch on small aspects of everything like combat gear, weight reduction, production, niche skilling tools, and food. There’s a very poor sense of identity to Hunter’s endgame, and it doesn’t help that very few of these rewards are actually viable or interesting in the current day. Similarly, while Slayer has a strong endgame goal, it is terrible in its methodology, overshadowing other Production skills in their explicit purpose. A better design for Slayer’s endgame would have been to treat it as a secondary Gathering skill, to work almost like a catalyst for other Gathering-Production skill relationships. In this mindset, Slayer is where you gather valuable monster drops, combine them with traditional Gathering resources like ores from Mining, then use a Production skill like Smithing to meld them into the powerful gear that is present today. This would have kept other Gathering and Production skills at the forefront of their specialities, in contrast to today’s situation where Slayer will give fully assembled gear that’s better than anything you could receive from the appropriate skills (barring a few items that need a Production skill to piece together).

3-15 - Alternate Goals

From a game design perspective, skills are so far reaching that it can be tempting to use them to shift major game mechanics to a more favourable position. Construction is an example of this idea in action: Construction was very intentionally designed to be a massive gold sink to help a hyperinflating economy. Everything about it takes gold out of the game, whether through using a sawmill, buying expensive supplies from stores, adding rooms, or a shameless piece of furniture costing 100m that is skinned as, well, 100m on a shameless piece of furniture.
If you’re clever about it, skills are a legitimately good opportunity for such change. Sure, the gold sink is definitely a controversial feature of Construction, but for the most part it’s organic and makes sense; fancy houses and fancy cosmetics are justifiably expensive. It is notable that the controversy over Construction’s gold sink mechanism is probably levied more against the cost of training, rather than the cost of all its wonderful aesthetics. Perhaps that should have been better accounted for in its design phase, but now it is quite set in stone.
To emphasize that previous point: making large scale changes to the game through a new skill can work, but it must feel organic and secondary to the skill’s main purpose. Some people really disliked Warding because they felt it tried too hard to fix real, underlying game issues with mechanics that didn’t thematically fit or were overshadowing the skill’s Core. While this may or may not be true, if your new skill can improve the game’s integrity without sacrificing its own identity, you could avoid this argument entirely. If your skill Regency has a Core of managing global politics, but also happens to serve as a resource sink to help your failing citizens, then you’ve created a strong Core design while simultaneously improving the profitability of Gathering skills.

3-16 - The Combat No-Touch Rule

So, let’s take a moment to examine the great benefits and rationale of RS2’s Evolution of Combat:
This space has been reserved for unintelligible squabbling.
With that over, it’s obvious that the OSRS playerbase is not a big fan of making major changes to the combat system. If there’s anything that defines the OSRS experience, it has to be the janky and abusable combat system that we love. So, in the past 7 years of OSRS, how many times have you heard someone pitch a new combat skill? Practically no one ever has; a new combat skill, no matter how miniscule, would feel obtrusive to most players, and likely would not even receive 25% of votes in a poll. This goes right back to Section 3-5 – Integration, and the importance of preserving the fundamentals of OSRS’s design.
I know that my intention with this discussion was to be as definitive about skill design as possible, and in that spirit I should be delving into the design philosophy specifically behind combat skills, but I simply don’t see the benefit of me trying, and the conversation really doesn’t interest me that much. It goes without saying that as expansive as this discussion is, it does not cover every facet of skill design, which is a limitation both of my capabilities and desire to do so.

3-17 - Aesthetics

I don’t do aesthetics well. I like them, I want them, but I do not understand them; there are others much better equipped to discuss this topic than I. Nonetheless, here we go.
Since the dawn of OSRS, debates over art style and aesthetics have raged across Gielinor. After all, the OSRS Team is filled with modern day artists while OSRS is an ancient game. What were they supposed to do? Keep making dated graphics? Make content with a modernized and easily digestible style? Something in-between?
While many players shouted for more dated graphics, they were approached by an interesting predicament: which dated graphics did they want? We had a great selection present right from the start of OSRS: 2002, 2003, 2004, 2005, 2006, and 2007. People hungry for nostalgia chose the era that they grew up in, leading to frequent requests for older models like the dragon or imp, most of which were denied by Jagex (except the old Mining rock models). But which era was OSRS supposed to follow?
Jagex elected to carve their own path, but not without heavy criticism especially closer to OSRS’s conception. However, they adapted to player requests and have since gone back and fixed many of the blatant early offenders (like the Kingdom of Kourend) and adopted a more consistent flavour, one that generally respects the art style of 2007. Even though it doesn’t always hit the mark, one has to appreciate the OSRS artists for making their best attempt and listening to feedback, and here’s to hoping that their art style examination mentioned in June 2020’s Gazette bears fruit.
But what exactly is the old school art style? There are simple systems by which most players judge it in OSRS, usually by asking questions like, “Would you believe this existed in 2007?” More informed artists will start pointing out distinct features that permeated most content from back in the day, such as low quality textures, low poly models, low FPS animations, a “low fantasy” or grounded profile that appeals somewhat to realism, reducing cartoonish exaggerations, and keeping within the lore. Coupled with this, music and sound design help that art style come to life; it can be very hard on immersion when these don’t fit. An AGS would sound jarring if its special attack sounded like a weak dagger stab, and having to endure Country Jig while roaming Hosidius suddenly sweeps you off into a different universe.
But coming back to skill design, the art, models, and sound design tend to be some of the last features, mostly because the design phase doesn’t demand such a complete picture of a skill. However, simple concept art and models can vastly improve how a skill concept is communicated and comfort players who are concerned about maintaining that “old school feel.” This will be touched on again later in this discussion under Section 5-2 – Presentation and Beta Testing.

3-18 - Afterword

Now we’ve set down the modern standards for a new skill, but the statements that started this section bear repeating: the formula we’ve established does not automatically make a good or interesting skill, as hard as we might have tried. Once again, harken back to the First Great Irony: that we are trying to inject the modern interpretation of what defines a skill upon a game that was not necessarily built to contain it. Therefore, one could just as easily deny each of the components described above, as popular or unpopular as the act might be, and their opinion could be equally valid and all this effort meaningless. Don’t take these guidelines with such stringency as to disregard all other views.

5-0 - The OSRS Team and the Design Process

If you’ve followed me all the way here, you’re likely A) exhausted and fed up with any conversation concerning new skills, or B) excited, because you’ve just struck an incredible skill idea (or perhaps one that’s always hung around your head) that happens to tick off all the above checkboxes. But unfortunately for you B types, it’s about to get pretty grim, because we’re going to go through every aspect of skill design that’s exterior to the game itself. We’ll be touching on larger topics like democracy, presentation, player mindsets, effort, and resource consumption. It’ll induce a fantastic bout of depression, so don’t get left behind.

5-1 - Designing a Skill

Thus far, Jagex has offered three potential skills to OSRS, each of which has been denied. This gives us the advantage of understanding how the skill design process works behind the scenes and lets us examine some of the issues Jagex has faced with presenting a skill to the players.
The first problem is the “one strike and you’re out” phenomenon. Simply put, players don’t like applying much effort into reading and learning. They’ll look at a developer blog highlighting a new skill idea, and if you’re lucky they’ll even read the whole thing, but how about the second developer blog? The third? Fourth? Even I find it hard to get that far. In general, people don’t like long detail-heavy essays or blogs, which is why I can invoke the ancient proverb “Ban Emily” into this post and it’ll go (almost) completely unnoticed. No matter how many improvements you make between developer blogs, you will quickly lose players with each new iteration. Similarly, developer blogs don’t have the time to talk about skill design philosophy or meta-analyse their ideas – players would get lost far too fast. This is the Second Great Irony of skill design: the more iterations you have of a lengthy idea, the less players will keep up with you.
This was particularly prominent with Warding: Battle Wards were offered in an early developer blog but were quickly cut when Jagex realized how bad the idea was. Yet people would still cite Battle Wards as the reason they voted against Warding, despite the idea having been dropped several blogs before. Similarly, people would often comment that they hated that Warding was being polled multiple times; it felt to them like Jagex was trying to brute-force it into the game. But Warding was only ever polled once, and only after the fourth developer blog - the confusion was drawn from how many times the skill was reiterated and from the length of the public design process. Sure, there are people for whom this runs the opposite way; they keep a close eye on updates and judge a piece of content on the merits of the latest iteration, but this is much less common. You could argue that one should simply disregard the ignorant people as blind comments don't contribute to the overall discussion, but you should remember that these players are also the ones voting for the respective piece of content. You could also suggest re-educating them, which is exactly what Jagex attempts with each developer blog, and still people won’t get the memo. And when it comes to the players themselves, can the playerbase really be relied on to re-educate itself?
Overall, the Second Great Irony really hurts the development process and is practically an unavoidable issue. What’s the alternative? To remove the developer-player interface that leads to valuable reiterations? Or do you simply have to get the skill perfect in the first developer blog?
It’s not an optimal idea, but it could help: have a small team of “delegates” – larger names that players can trust, or player influencers – come in to review a new, unannounced skill idea under NDA. If they like it, chances are that other players will too. If they don’t, reiterate or toss out the skill before it’s public. That way, you’ve had a board of experienced players who are willing to share their opinions to the public helping to determine the meat and potatoes of the skill before it is introduced to the casual eye. Now, a more polished and well-accepted product can be presented on the first run of selling a skill to the public, resulting in fewer reiterations being required, and demanding less effort from the average player to be fully informed about the skill’s final design.

5-2 - Presentation and Beta Testing

So you’ve got a great idea, but how are you going to sell it to the public? Looking at how the OSRS Team has handled it throughout the years, there’s a very obvious learning curve occurring. Artisan had almost nothing but text blogs being thrown to the players, Sailing started introducing some concept art and even a trailer with terrible audio recording, and Warding had concept art, in game models, gifs, and a much fancier trailer with in-game animations. A picture or video is worth a thousand words, and often the only words that players will take out of a developer blog.
You might say that presentation is everything, and that would be more true in OSRS than most games. Most activities in OSRS are extremely basic, involve minimal thought, and are incredibly grindy. Take Fishing: you click every 20 seconds on a fishing spot that is randomly placed along a section of water, get rid of your fish, then keep clicking those fishing spots. Boiling it down further, you click several arbitrary parts of your computer screen every 20 seconds. It’s hardly considered engaging, so why do some people enjoy it? Simply put: presentation. You’re given a peaceful riverside environment to chill in, you’re collecting a bunch of pixels shaped like fish, and a number tracking your xp keeps ticking up and telling you that it matters.
Now imagine coming to the players with a radical new skill idea: Mining. You describe that Mining is where you gather ores that will feed into Smithing and help create gear for players to use. The audience ponders momentarily, but they’re not quite sure it feels right and ask for a demonstration. You show them some gameplay, but your development resources were thin and instead of rocks, you put trees as placeholders. Instead of ores in your inventory, you put logs as placeholders. Instead of a pickaxe, your character is swinging a woodcutting axe as a placeholder. Sure, the mechanics might act like mining instead of woodcutting, but how well is the skill going to sell if you haven’t presented it correctly or respected it contextually?
Again, presentation is everything. Players need to be able to see the task they are to perform, see the tools they’ll use, and see the expected outcomes; otherwise, whatever you’re trying to sell will feel bland and unoriginal. And this leads to the next level of skill presentation that has yet to be employed: Beta Worlds.
Part of getting the feel of an activity is not just watching it, but acting it out as well - you’ll never understand the thrill of skydiving unless you’ve actually been skydiving. Beta Worlds are that chance for players to act out a concept without risking the real game’s health. A successful Beta can inspire confidence in players that the skill has a solid Core and interesting Expansions, while a failed Beta will make them glad that they got to try it and be fully informed before putting the skill to a poll (although that might be a little too optimistic for rage culture). Unfortunately, Betas are not without major disadvantages, the most prominent of which we shall investigate next.

5-3 - Development Effort

If you thought that the previous section on Skill Design Philosophy was lengthy and exhausting, imagine having to know all that information and then put it into practice. Mentally designing a skill in your head can be fun, but putting all that down on paper and making it actually work together, feel fully fleshed out, and follow all the modern standards that players expect is extremely heavy work, especially when it’s not guaranteed to pay off in the polls like Quest or Slayer content. That’s not even taking into account the potentially immense cost of developing a new skill should it pass a poll.
Whenever people complain that Jagex is wasting resources trying to make a specific skill work, it’s worth remembering that Jagex has been very explicit that the cost of pulling together a design blog is pretty minimal. Looking at the previous blogs, Jagex is probably telling the truth. It’s all just a bunch of words, a couple art sketches, and maybe a basic in-game model or gif. Not to downplay the time it takes to write well, design good models, or generate concept art, but it’s nothing like the scale of resources that some players make it out to be. Of course, if a Beta was attempted as suggested last section, this conversation would take a completely new turn, and the level of risk to invested resources would exponentially increase. But this conversation calls to mind an important question: how much effort and resources do skills require to feel complete?
Once upon a time, you could release a skill which was more or less unfinished. Take Slayer: it was released in 2005 with a pretty barebones structure. The fundamentals were all there, but the endgame was essentially a couple cool best-in-slot weapons and that was it. Since then, OSRS has updated the skill to include a huge Reward Shop system, feature 50% more monsters to slay, and to become an extremely competitive money-maker. Skills naturally undergo development over time, but it so often comes up during the designing of an OSRS skill that it "doesn't have enough to justify its existence." This was touched on deeply in Section 3-13 – Skill Bloat, but deserves reiterating here. While people recognize that skills continually evolve, the modern standard expects a new skill, upon release, to be fully preassembled before purchase. Whereas once you could get away with releasing just a skill's Core and working on Expansions down the line, that is no longer the case. But perhaps a skill might stand a better chance now than it did last year, given that the OSRS Team has doubled in number since that time.
However, judging from the skill design phases that have previously been attempted (as we’ve yet to see a skill development phase), the heaviest cost has been paid in developer mentality and motivational loss. When a developer is passionate about an idea, they spend their every waking hour pouring their mind into how that idea is going to function, especially while they’re not at work. And then they’re obligated to take player feedback and adapt their ideas, sometimes starting from scratch, particularly over something as controversial as a skill. Even if they have tough enough skin to take the heavy criticism that comes with skill design, having to write and rewrite repeatedly over the same idea to make it “perfect” is mentally exhausting. Eventually, their motivation drains as their labour bears little fruit with the audience, and they simply want to push it to the poll and be done with it. Even once all their cards are down, there’s still no guarantee that their efforts will be rewarded, even less so when it comes to skills.
With such a high mental cost and a low rate of success, you have to ask, “Was it worth it?” And that’s why new skill proposals are few and far between. A new skill used to be exciting for the development team in the actual days of 2007, as they had the developmental freedom to do whatever they wanted, but in the modern day that is not so much the case.

5-4 - The Problems of Democracy

Ever since the conceptualization of democracy in the real world, people have been very aware of its disadvantages. And while I don’t have the talent, knowledge, or time to discuss every one of these factors, there are a few that are very relevant when it comes to the OSRS Team and the polling process.
But first we should recognize the OSRS Team’s relationship with the players. More and more, the Team acts like a government to its citizens, the players, and although this situation was intentionally instated with OSRS’s release, it’s even more prominent now. The Team decides the type of content that gets to go into a poll, and the players get their input over whether that particular piece makes it in. Similarly, players make suggestions to the Team that, in many cases, the Team hadn’t thought of themselves. This synergy is phenomenal and almost unheard of among video games, but the polling system changes the mechanics of this relationship.
Polls were introduced to the burned and scarred population of players at OSRS’s release in 2013. Many of these players had just freshly come off RS2 after a series of disastrous updates or had quit long before from other controversies. The Squeal of Fortune, the Evolution of Combat, even the original Wilderness Removal had forced numerous players out and murdered their trust in Jagex. To try and get players to recommit to Runescape, Jagex offered OSRS a polling system by which the players would determine what went into the game, where the players got to hold all the cards. They also asked the players what threshold should be required for polled items to pass, and among the odd 50% or 55% being shouted out, the vast majority of players wanted 70%, 75%, 80%, or even 85%. There was a massive population in favour of a conservative game that would mostly remain untouched, and therefore kept pure from the corruption RS2 had previously endured.
Right from the start, players started noticing holes in this system. After all, the OSRS Team was still the sole decider of what would actually be polled in the first place. Long-requested changes took forever to be polled (if ever polled at all) if the OSRS Team didn’t want to deal with that particular problem or didn’t like that idea. Similarly, the Team essentially had desk jobs with a noose kept around their neck – they could perform almost nothing without the players, their slave masters, seeing, criticizing, and tearing out every inch of developmental or visionary freedom they had. Ever hear about the controversy of Erin the duck? Take a look at the wiki or do a search through the subreddit history. It’s pretty fantastic, and a good window into the minds of the early OSRS playerbase.
But as the years have gone on, the perspective of the players has shifted. There is now a much healthier and more trusting relationship between them and the Team, much more flexibility in what the players allow the Team to handle, and a much greater tolerance and even love of change.
But the challenges of democracy haven’t just fallen away. Everyone having the right to vote is a fundamental tenet of the democratic system, but unfortunately that also means that everyone has the right to vote. For OSRS, that means every member gets to vote, whether it’s their first day in game or their ten thousandth hour played: those who have no idea what the poll’s about, those who haven’t read a single quest (the worst group), those who RWT and bot, those who scam and lure, and every professional armchair developer like myself. In short, no one will ever be perfectly informed on every aspect of the game, or at least know when to skip the questions they should. Similarly, people will almost never vote in favour of making their game harder, even at the cost of game integrity, or at least not enough people would vote in such a fashion to reach a 75% majority.
These issues are well recognized. The adoption of the controversial “integrity updates” was Jagex’s solution to these problems. In this way, Jagex has become even more like a government to the players. The average citizen of a democratic country cannot and will not make major decisions that favour everyone around themselves if it comes at a personal cost. Rather, that’s one of the major roles of a government: to make decisions for changes for the common good that an individual can’t or won’t make on their own. No one’s going to willingly hand over cash to help repave a road on the opposite side of the city – that’s why taxes are a necessary evil. It’s easy to see that the players don’t always know what’s best for their game and sometimes need to rely on that parent to decide for them, even if it results in some personal loss.
But players still generally like the polls, and Jagex still appears to respect them for the most part. Being the government of the game, Jagex could very well choose to ignore them, but would risk the loss of their citizens to other lands. And there are some very strong reasons to keep them: the players still like having at least one hand on the wheel when it comes to new content or ideas. Also, it acts as a nice veto card should Jagex try to push RS3’s abusive tactics on OSRS and therefore prevent such potential damage.
But now we come to the topic of today: the introduction of a new skill. Essentially, a new skill must pass a poll in order to enter the game. While it’s easy to say, “If a skill idea is good enough, it’ll pass the threshold,” that’s not entirely true. The only skill that could really pass the 75% mark is not necessarily a well-designed skill, but rather a crowd-pleasing skill. While the two aren’t mutually exclusive, the latter is far easier to make than the former. Take Dungeoneering: if you were to poll it today as an exact replica of RS2’s version, it would likely be the highest scoring skill yet, perhaps even passing, despite every criticism that’s been previously emphasized describing why it has no respect for the current definition of “skill.” Furthermore, a crowd-pleasing skill can easily fall prey to deindividualization of vision and result in a bland “studio skill” (in the same vein as a “studio film”), one that feels manufactured by a board of soulless machines rather than a director’s unique creation. This draws straight back to the afore-mentioned issues with democracy: that people A) don’t always understand what they’re voting for or against, and B) people will never vote for something that makes their game tougher or results in no benefit to oneself. Again, these were not issues in the old days of RS2, but are the problems we face with our modern standards and decision making systems.
The reality that must be faced is that the polling system is not an engine of creation nor is it a means of constructive feedback – it’s a system of judgement, binary and oversimplified in its methodology. It’s easy to interact with and requires no more than 10 seconds of a player’s time, a mere mindless moment, to decide the fate of an idea made by an individual or team, regardless of their deep or shallow knowledge of game mechanics, strong or weak vision of design philosophy, great or terrible understanding of the game’s history, and their awareness of or blindness towards the modern community. It’s a system which disproportionately boils down the quality of discussion that is necessitated by a skill, giving it the same significance as the question “Should we allow players to recolour the Rocky pet by feeding it berries?” with the only available answers being a dualistic “This idea is perfect and should be implemented exactly as outlined” or “This idea is terrible and should never be spoken of again.”
So what do you do? Let Jagex throw in whatever they want? Reduce the threshold, or reduce it just for skills? Make a poll that lists a bunch of skills and forces the players to choose one of them to enter the game? Simply poll the question, “Should we have a new skill?” then let Jagex decide what it is? Put more options on the scale of “yes” to “no” and weigh each appropriately? All these options sound distasteful because there are obvious weaknesses to each. But that is the Third Great Irony we face: an immense desire for a new skill, but no realistic means to ever get one.

6-0 - Conclusion

I can only imagine that if you’ve truly read everything up to this point, it’s taken you through quite the rollercoaster. We’ve walked through the history of OSRS skill attempts, unconstructive arguments, various aspects of modern skill design philosophy, and the OSRS Team and skill design process. When you take it all together, it’s easy to get overwhelmed by all the thought that needs to go into a modern skill and all the issues that might prevent its success. Complexity, naming conventions, categorizations, integration, rewards and motivations, bankstanding and buyables, the difficulties of skill bloat, balancing, and skill endgames, aesthetics, the design process, public presentation, development effort, democracy and polling - these are the challenges of designing and introducing modern skills. To have to cope with it all is draining and maybe even impossible, and therefore it begs the question: is trying to get a new skill even worth it?
Maybe.
Thanks for reading.
Tl;dr: Designing a modern skill requires acknowledging the vast history of Runescape, understanding why players make certain criticisms and what exactly they’re saying in terms of game mechanics, before finally developing solutions. Only then can you subject your ideas to a polling system that is built to oversimplify them.
submitted by ScreteMonge to 2007scape

Dice & Card Randomizers for combat (PvP/Dungeon Crawls/etc.) - Your favorite?

I'm trying to figure out what kind of mechanic is the most engaging to players when it comes to simulating combat and/or adventure, independent of how it ties in to the rest of the game. There are many different options, and I think they all have some advantages and disadvantages, but I thought to ask you fellow designers what you have enjoyed. The examples below are simply the ones top of mind; let me know if you have any other prominent examples.
Some examples:
Beat Target Number (Multiple examples)
Once your attacks and targets have been chosen, roll (usually) one die to see if you match or beat the target number. If you do, you score a hit.
This method is probably the simplest, as it can be reduced down to one die. It's also very quick, and provides the randomness we're looking for. It becomes easy to determine if you hit, and easy to modify the attack power and/or target number in a number of ways, such as adding extra dice or shifting the input number or the target number by +/- a few steps. This method is used in D&D with a d20, but can be used in a much simpler way. A clear disadvantage is that it is extremely random - the probability distribution is flat, and the result is binary: fail or succeed. (On a d20, needing 11+ to hit is exactly a 50% chance, and every +/-1 modifier shifts the odds by a flat 5 percentage points.) You could use the margin of success to create a non-binary output ("by how much you beat the score"), but it becomes very swingy.
Beat Target Number w. Pool (Mansion of Madness)
Before (or after) the target number is known, roll a pool of dice and count the number of successes. You may optionally expend resources to reroll or to modify your results to improve your success score. Compare this score to a target number.
In comparison to the previous method, it gives that player more control of the number. It also becomes easy to stepwise increase your odds of success (by adding more dice or expending resources). The result is not as binary as in the previous method.
Opposed Success (Descent, Warhammer Quest, etc.)
Roll a number of dice to determine success. Opponent rolls a number of dice to determine the target number. Either a success is gained if your numbers beat/exceed the target number, or a ranked success happens (count the excess).
Good for determining damage. Requires more dice rolls. This type of randomizer is commonly used to compare "attack damage" vs "defense". It's good when the defender is also a player, to give some sensation of "active defense", even if it's just done through a die roll. This type of mechanic seems common in dice-based dungeon crawlers.
Sequential Dice Pool Success (Warhammer)
Roll a high number of dice towards target number (opposed or not). Count successes for this category of success, and resolve results. Optionally, roll the results again to determine damage (hit -> damage) or perform another operation. The difference between this type and "Beat Target Number w. Pool" is that the pools tend to be larger, the target tends to be variable, the success score tends to be non-binary, and the pools tend to be sequential.
It's a very time-consuming exercise, but provides great level of detail in multiple dimensions. If these dimensions can be used to generate "playability" or "depth", it can be a successful implementation. This type of mechanic is probably most common in wargame scenarios with large armies battling.
Multi-Category Dice Pool Success (World of Warcraft Board Game)
Simultaneously roll one or more dice pools (colored dice, f.ex.) to determine success (on a scale, potentially against a target number) on different parts of the combat operation, add any applicable modifiers, then resolve successes for the different dice pools.
In the WoW example, the dice pools are "ranged/spell attack", "melee attack" and "defense" (blue, red, green), and they resolve differently based on their success rates. Melee attacks happen after retaliation, and ranged/spell attack happen before retaliation.
Yahtzee Output (King of Tokyo)
Roll five dice. Reroll any number of dice. Reroll any number of dice. Check results for effect(s).
This has the advantage of providing some level of player control. King of Tokyo uses it well because you can not only score damage this way, but gain score and currency. There are always "things to do" and to aim for. It easily becomes the center of the game.
Yahtzee Input (Dice Throne)
Roll five dice. Reroll any number. Reroll any number. Check results for resources available. Use these resources to perform various abilities (or combos).
Again, this provides player control. It also makes it easy to diversify the output possibilities, and create even more unique characters than in a Yahtzee output kind of game. You can trigger special abilities with different combinations of input. I guess it becomes a little bit more frustrating with the chance to miss important abilities, and it can create situations where each turn is relatively similar or relatively "swingy".
Deck of Modifiers (Gloomhaven)
Once your attacks (or abilities) have been chosen, flip over the top card of a deck to randomly modify the end result.
You need a main input for your combat in this example - in Gloomhaven it comes from your hand of cards. It makes it easy to evolve the modifier deck over time (if it is individual) and gives some level of control over the randomness, as the distribution evens out before the deck gets reshuffled. It also has a bit of "fun" factor to it (flipping the card is fast and exciting). A disadvantage to me is that it seems to limit the output to one variable (f.ex. damage), especially once you add specialised cards.
Command Decks (Game of Thrones)
Each player has a deck of cards with varying numbers. For each combat encounter, each player selects a card, which, combined with other input factors, determines a non-random but unknown total that is then compared to resolve the outcome.
Command Decks are great for PvP games, but lose some meaning in PvE. One advantage is that the command cards can contain additional bonuses, not only target numbers. For example "If you win this combat, defeat an additional enemy unit", or "Retreat safely from this combat encounter".
QUESTION: What's your favorite mechanic out of these (or others) for "fun" and strategic impact? Do you have any fun stories about what happens?
EDIT: Some categories have been added.
submitted by Lightboxr to tabletopgamedesign

Cloaked or Stealth Units?

Hi everyone! Since the Frost Giant announcement I have been super excited and keep turning ideas over in my head - probably the same as most of the people who might read this. In RTS games there are some mechanics/ideas which are optional to have in the game but are generally considered 'an RTS thing'.
I was going to make one big dump of all my thoughts on these mechanics but then I thought it might be better to split them up into individual posts, to keep the discussion focused and allow it to be easier to find the discussion on a certain thing. I am talking about ideas like:
High ground, air units, sea units, cloaked units, multiple building selection, global abilities, destructible terrain, how resources should work, etc etc etc. What does the community think about each one? Should they even be in the best RTS of all time? If so, then how will they work?

Cloaked/Stealth/Invisible Units

The current understanding we have of cloaked units is this:
Based on these points, we can also say:
With a new RTS, these points may of course be subject to change. Personally I think the binary nature of detection is one to think about.
I think everything added to the game must add something unique and positive to gameplay. Do cloaked units add that? Here is what I think they bring to an RTS:
More Complexity
Cloaked units add another layer of complexity to the game. As the opposing player, the possibility of cloaked units plays a role in the decisions you make as you grow your army and base, and influences your scouting.
Limiting Possibilities
Some strategies are simply not viable in the meta given the existence of cloaked units. Usually these are the sillier ones.
Base Defense
If an enemy player's detection is mostly static, then it allows cloaked units to do well against an army encroaching on home turf. An example would be lurkers defending forward bases in BW ZvT, or a Protoss warping in defensive DTs in SC2.
Comeback Mechanic
If one player is beating the other player in most aspects of the game, such as map control, army size, wealth, then the opposing player can make a comeback if that player has forgotten to also build detection.
Defense mechanism for otherwise fragile specialist unit
There are some units we have seen in RTS that could not perform their function without being stealthed, or would have to be radically changed in concept so as to have much higher health. Something like the shade in WC3, the observer in Starcraft, or even an SC2 Ghost launching a nuke on a non-forward base.
Surprise Motherf###er!
The surprise or ambush mechanic is a fun one, especially to watch. Lurkers lying in wait, or burrowed banelings are great moments of tension and excitement in Starcraft matches.

Conclusion

Even after all these points in their favour, I still don't like them! I think the mechanic of having detection vs not having detection is too black and white. There are no other interactions in RTS like this. Even if you take a one-sided fight such as marines vs colossus, or archers vs mangonel, there is still room for skillful play on the disadvantaged side to bring about a victory through superior unit control.
I think Artosis would agree with me that there is nothing more frustrating than doing 99 things right, but one thing "wrong" (or skipping, for economy) and then losing to invisible men in the base.
If they were implemented, I would prefer a diluted version such as limited to scouting units, or the Zerg-style burrow; the unit is stealthed but pays by giving up the ability to move. A similar trade off could be a unit which is stealthed until it attacks, similar to how some stealthed MOBA characters work.

What do you guys think?
submitted by ZranaSC2 to FrostGiant [link] [comments]

A trans person's measured take on the trans sports issue

So first of all this post was inspired by GGExMachina's brief statement on the issue:
For example, it is objectively the case that biological men have a physical advantage over women. Yet if someone points this out and suggests that transgender people shouldn’t be allowed to fight in women’s UFC, or women’s soccer or weightlifting competitions or whatever, suddenly you’re some kind of evil monster. Rather than saying that of course trans people shouldn’t be bullied and that we could perhaps have a trans olympics (like the Paralympics and Special Olympics), we are expected to lie.
I've found that this position is incredibly popular among liberals/left-leaning people, especially here on reddit. It seems like, once or twice a month, like clockwork, a thread stating more or less the same thing on /unpopularopinion or /offmychest will get thousands of upvotes. And while I completely understand the thought process that leads otherwise left-leaning people to come to such conclusions, I feel like the issue has been, broadly speaking, dishonestly presented to the general public by a mixture of bad-faith actors and people who have succumbed to the moral panic. And, as I've seen, there are plenty of people in this subreddit and elsewhere who are itching to be as supportive as they possibly can to the trans community but find themselves becoming very disillusioned by this particular issue. By making this post I hope to present a more nuanced take on the issue, not only in regards to my personal beliefs on what kinds of policies are best to preserve fairness in women's sports but also in regards to shining a light on how this issue is often times dishonestly presented in an attempt to impede the progression of pro-trans sentiments in the cultural zeitgeist.

Sex & Gender

The word "transgender" is an umbrella term that refers to people whose gender identities differ from those typically associated with the sex they were assigned at birth. According to the 2015 U.S. Transgender Survey, the approximate composition of "the trans community" in the United States is 29% Transgender men (Female-to-Male), 33% Transgender women (Male-to-Female), and 35% non-binary. (The remaining 3% were survey respondents who self-identified as "crossdressers", who were still included in the survey on the grounds of being gender non-conforming)
While non-binary people, as a group, are probably deserving of their own separate post, the focus of this post will be on trans men and trans women. I will also be primarily focusing on transgender people who pursue medical transition with Hormone-Replacement-Therapy, as they are most relevant to the issue of sports. (Mind that while the majority of binary trans people fit into this camp, there is a sizable minority of trans people who do not feel the need to medically transition.)
What do trans people believe about Gender?
The views of transgender people in regards to Gender are actually pretty varied, although the most prominent positions that I've personally seen are best summed up into two different camps:
  1. The "Trans-Medical" camp
Transgender people who fall into this camp usually consider Gender Dysphoria to be the defining factor of what makes somebody trans. The best way I can describe this camp is that they sort of view being transgender akin to being intersex. Only whereas an intersex person would be born with a disorder that affects the body, a trans person is born with a disorder that affects the brain. Trans people in this camp often times put an emphasis on a clinical course for treatment. For example, a person goes to a psychologist, gets diagnosed with gender dysphoria, starts hormone replacement therapy, pursues surgery, then emerges from this process either cured of the gender dysphoria or, at the very least, treated to the fullest extent of medical intervention. This position is more or less the original position held by trans activists, back in the day when the word "transsexual" was used instead of "transgender". Though many younger trans people, notably YouTuber Blaire White, also hold this position. Under this position, sex and gender are still quite intertwined, but a trans man can still be considered a man, and a trans woman a woman, under the belief that sex/gender doesn't just refer to chromosomal sex and reproductive organs, but also to neurobiology, genitalia, and secondary sex characteristics. So someone who is transgender, according to this view, is born with the physical characteristics of one sex/gender but the neurobiology of another, and will change their physical characteristics, to the fullest extent medically possible, to match the neurobiology and therefore cure the individual of gender dysphoria.
Critics of this position argue that this mentality is problematic due to being inherently exclusive to transgender people who do not pursue medical transition, who are often times deemed "transtrenders" by people within this camp. Many people find it additionally problematic because it is also inherently exclusive to poorer trans people, particularly those in developing nations, who may not have access to trans-related medical care. Note that there are plenty of trans people who *do* have access to medical transition, but nevertheless feel as if the trans community shouldn't gatekeep people who cannot afford or do not desire medical transition, thus believing in the latter camp.
  2. The "Gender Identity" camp
I feel like this camp is the one most popularly criticized by people on the right, but is also probably the most mainstream. It is the viewpoint held by many more left-wing trans people, (Note that in the aforementioned 2015 survey, only 1% of trans respondents voted Republican, so trans people are largely a pretty left-wing group, therefore it makes sense that this position would be the most mainstream) but also notably held by the American Psychological Association, the American Psychiatric Association, GLAAD, and other mainstream health organizations and activist groups.
While people in this camp still acknowledge that medical transition to treat gender dysphoria can still be a very important aspect of the transgender experience, it's believed that the *defining* experience is simply having a gender identity different from the one they were assigned at birth. "Gender identity" simply being the internal, personal sense of being a man, a woman, or outside the gender binary.
Many people in this camp, though, still often maintain that gender identity is (at least partially) neurobiological, but differ from the first camp in regards to acknowledging that the issue is less black & white than an individual simply having a "male brain" or a "female brain", but rather that the neurological characteristics associated with gender exist on more of a spectrum, thus leaving the door open to gender non-conforming people who do not identify as trans, as well as to non-binary people. This is where the "gender is a spectrum" phrase comes from.
"52 genders" is a popular right-wing meme that makes fun of this viewpoint, however it is important to note that many trans and non-binary people disagree with the idea of quantifying gender identity to such an absurd amount of individual genders, rather more simply maintaining that there are men, women, and a small portion of people in-between, with a few words such as "agender" or "genderqueer" being used to describe specific identities/presentations within this category.
It's also noteworthy that not all people in this camp believe that neurobiology is the be-all-end-all of gender identity, as many believe that the performativity of gender also plays an integral role in one's identity. (That gender identity is a mixture of neurobiology and performativity is a position held by YouTuber Contrapoints)
Trans people and biological sex
So while the aforementioned "Gender Identity" viewpoint has become quite popularized among liberals and leftists, I have noticed a certain rhetorical mentality/assumption become prevalent alongside it, especially among cisgender people who consider themselves trans-allies:
"Sex and Gender are different. A trans woman is a woman who is biologically male. A trans man is a man who is biologically female"
When "Sex" is defined by someone's chromosomes, or the sex organs they were born with, this is correct. However, there is a pretty good reason why the trans community tends to prefer terms like "Assigned Male at Birth" rather than "Biologically Male". This is done not only for the inclusion of people who are both intersex and transgender (For example, someone can be born intersex but assigned male based on the existence of a penis or micropenis), but also due to the aforementioned viewpoint on divergent neurobiology being the cause for gender dysphoria. Those reasons are why the word "Assigned" is used. But the reason why it's "Assigned Male/Female At Birth" instead of just "Assigned Male/Female" is because among the trans community there exists an understanding of the mutability of sexually dimorphic biology that the general population is often ignorant to. For example, often times people (especially older folks) don't even know of the existence of Hormone Replacement Therapy, and simply assume that trans people get a single "sex change operation" that, (for a trans woman) would just entail the removal of the penis and getting breast implants. Therefore they imagine the process to be "medically sculpting a male to look female" instead of a more natural biological process of switching the endocrine system form male to female or vice versa and letting the body change over the course of multiple years. It doesn't help that, for a lot of older trans people (namely Caitlyn Jenner, who is probably the most high profile trans person sadly), the body can be a lot more resistant to change even with hormones so they *do* need to rely on plastic surgery a lot more to get obvious results)
So what sexually dimorphic bodily characteristics can one expect to change from Hormone Replacement Therapy?
(Note that there is a surprising lack of studies done on some of the more intricate changes that HRT can cause, so I've put a "*" next to the changes that are anecdotal, but still commonly and widely observed enough among trans people [including myself for the MTF stuff] to consider factual. I've also put a "✝" next to the changes that only occur when people transition before or during puberty)
Male to Female:
Female to Male:
For the sake of visual representation, here are a couple of images from /transtimelines to demonstrate these changes in adult transitioners (I've specifically chosen athletic individuals to best demonstrate muscular changes)
https://preview.redd.it/ntw333p9sbty.jpg?width=640&crop=smart&auto=webp&s=5fe779757dfc4a5dc56566ff648d337c59fbe5cb
https://www.reddit.com/transtimelines/comments/dpca0f/3_years_on_vitamin_t/
Additionally, here's a picture of celebrity Kim Petras who transitioned before male puberty, in case you were wondering what "female pubescent skeletal development" looks like in a trans woman:
https://cdn2.thelineofbestfit.com/images/made/images/remote/https_cdn2.thelineofbestfit.com/portraits/kim_petras_burakcingi01_1107_1661_90.jpg

How does this relate to sports?

Often times, when the whole "transgender people in sports" discussion arises, a logical error is made when *all* transgender people are assumed to be "biologically" their birth sex. For example, when talking about trans women participating in female sports, these instances will be referred to as cases of "Biological males competing against females".
As mentioned before, calling a trans woman "biologically male" strictly in regards to chromosomes or sex organs at birth would be correct. However, not only can it be considered derogatory (the word "male" is colloquially a shorthand for "man", after all), but there are many instances where calling a post-HRT transgender person "biologically [sex assigned at birth]" is downright misleading.
For example, hospitals have given transgender patients improper or erroneous medical care by basing treatment on birth sex where treatment based on their current endocrinological sex would have been more appropriate.
Acute Clinical Care of Transgender Patients: A Review
Conclusions and relevance: Clinicians should learn how to engage with transgender patients, appreciate that unique anatomy or the use of gender-affirming hormones may affect the prevalence of certain disease (eg, cardiovascular disease, venous thromboembolism, and osteoporosis), and be prepared to manage specific issues, including those related to hormone therapy. Health care facilities should work toward providing inclusive systems of care that correctly identify and integrate information about transgender patients into the electronic health record, account for the unique needs of these patients within the facility, and through education and policy create a welcoming environment for their care.
Some hospitals have taken to labeling the biological sex of transgender patients as "MTF" (for post-HRT trans women) and "FTM" (for post-HRT trans men), which is a much more medically useful identifier compared to their sex assigned at birth.
In regards to the sports discussion, I've seen *multiple threads* where redditors have backed up their opinions on the subject of trans people in sports with studies demonstrating that cis men are, on average, more athletically capable than cis women, which I personally find to be a pathetic misunderstanding of the entire issue.
Because we're not supposed to be comparing the athletic capabilities of natal males to natal females, here. We're supposed to be comparing the athletic capabilities of *post-HRT male-to-females* to natal females. And, if we're going to really have a fact-based discussion on the matter, we need to have separate categories for pre-pubescent and post-pubescent transitioners. Since, as mentioned earlier, the former will likely have different skeletal characteristics compared to the latter.
The current International Olympic Committee (IOC) model for trans participation, and criticisms of said model
(I quoted the specific guidelines from the International Cycling Union, but similar guidelines exist for all Olympic sports)
Elite Competition
At elite competition levels, members may have the opportunity to represent the United States and participate in international competition. They may therefore be subject to the policies and regulations of the International Cycling Union (UCI) and International Olympic Committee (IOC). USA Cycling therefore follows the IOC guidelines on transgender athletes at these elite competition levels. For purposes of this policy, international competition means competition sanctioned by the UCI or competition taking place outside the United States in which USA Cycling’s competition rules do not apply.
The IOC revised its guidelines on transgender athlete participation in 2015, to focus on hormone levels and medical monitoring. The main points of the guidelines are:
Those who transition from female to male are eligible to compete in the male category without restriction. It is the responsibility of athletes to be aware of current WADA/USADA policies and file for appropriate therapeutic use exemptions.
Those who transition from male to female are eligible to compete in the female category under the following conditions:
The athlete has declared that her gender identity is female. The declaration cannot be changed, for sporting purposes, for a minimum of four years.
The athlete must demonstrate that her total testosterone level in serum has been below 10 nmol/L for at least 12 months prior to her first competition (with the requirement for any longer period to be based on a confidential case-by-case evaluation, considering whether or not 12 months is a sufficient length of time to minimize any advantage in women’s competition).
The athlete's total testosterone level in serum must remain below 10 nmol/L throughout the period of desired eligibility to compete in the female category.
Compliance with these conditions may be monitored by random or for-cause testing. In the event of non-compliance, the athlete’s eligibility for female competition will be suspended for 12 months.
Valid criticisms of the IOC model are usually based on the fact that, even though hormone replacement therapy provokes changes to muscle mass, it does *not* shrink the size of someone's skeleton or cardiovascular system. Therefore an adult-transitioned trans woman could, even after losing all levels of male-typical muscle mass, still have an advantage in certain sports if she had an excessively large skeletal frame, and was participating in a sport where such a thing would be advantageous.
Additionally, the guidelines only require that athletes be able to demonstrate having had female hormone levels for 12-24 months, which isn't necessarily long enough to completely lose musculature gained from training on testosterone (anecdotally it can take 2-4 years to completely lose male-typical muscle mass). So the IOC guidelines don't have any safeguard against, for example, a trans woman training with testosterone as the dominant hormone in her body, and then taking hormones for the bare minimum time period and still having some of the advantage left.
Note that, while lower level sports have had (to the glee of right-wing publications sensationalizing the issue) instances of this exact thing happening, in the 16 years since these IOC guidelines were established, not a single transgender individual has won an Olympic medal.
Also note that none of the above criticisms of the IOC policy would apply in regards to the participation of pre-pubescent-transitioned trans women. After all, male-pubescent bone structure and cardiovascular size, and male-typical muscle levels, can't possibly exist if you never went through male puberty to begin with.
What could better guidelines entail, to best preserve fairness in female sports while avoiding succumbing to anti-trans moral panic?
In my personal opinion, sports leagues should pick one of the three above options depending on what best fits the nature of the sport and the eliteness of the competition. For example, extremely competitive contact sports might be better off going with the first option, but an aerobic sport such as marathon running would probably be fine with the third option.

How this issue has been misrepresented by The Right

I'll use Joe Rogan as an example of this last thing:
She calls herself a woman but... I tend to disagree. And, uh, she, um... she used to be a man but now she has had, she's a transgender which is (the) official term that means you've gone through it, right? And she wants to be able to fight women in MMA. I say no f***ing way.
I say if you had a dick at one point in time, you also have all the bone structure that comes with having a dick. You have bigger hands, you have bigger shoulder joints. You're a f***ing man. That's a man, OK? You can't have... that's... I don't care if you don't have a dick any more...
If you want to be a woman in the bedroom and you know you want to play house and all of that other s*** and you feel like you have, your body is really a woman's body trapped inside a man's frame and so you got a operation, that's all good in the hood. But you can't fight chicks. Get the f*** out of here. You're out of your mind. You need to fight men, you know? Period. You need to fight men your size because you're a man. You're a man without a dick.
I'm not trying to discriminate against women in any way, shape, or form and I'm a big supporter of women's fighting. I loved watching that Ronda Rousey/Liz Carmouche fight. But those are actual women. Those are actual women. And as strong as Ronda Rousey looks, she's still looks to me like a pretty girl. She's a beautiful girl who happens to be strong. She's a girl! [Fallon Fox] is not a girl, OK? This is a [transgender] woman. It's a totally different specification.
Calling a trans woman a "man", equating transitioning to merely the removal of the dick, and equating trans women's experiences as women to "playing house" and "being a woman in the bedroom": these things are obviously pretty transphobic, and if Rogan had said them about just any random trans woman his statements would likely have been more widely seen in that light. But when the subject is someone having an unfair advantage in sports, and the audience is supposed to be angry with her, it's much more socially acceptable to say such things. The problem is, when you say these kinds of things about one trans woman, you're essentially saying those derogatory things about all trans women by extension. It's the equivalent of using an article about a black home invader who murdered a family as an excuse to use a racial slur.
Now, I'm not saying that Rogan necessarily did this on purpose; in fact I'm more inclined to believe that it was done out of ignorance rather than an actual ideological agenda. But since then, many right-wing ideologues who do have an ideological agenda have used this issue as an excuse to voice their opinions on trans people while appearing to be less bigoted. I.e. "I'm not trying to be a bigot or anything and I accept people's rights to live their lives as they see fit, but we NEED to keep men out of women's sports", as a sly way to call trans women "men".
Additionally, doing this allows them to slip in untrue statements about the biology of trans women. I mean, first of all in regards to the statement "You have bigger hands, you have bigger shoulder joints", obviously even in regards to post-pubescent transitioners, not every trans woman is going to have bigger hands and shoulder joints than every cis woman (My hands are actually smaller than my aunt's!). It's just that people who go through male puberty on average tend to have bigger hands and shoulder joints compared to people who go through female puberty. But over-exaggerating the breadth of sexual dimorphism, as if males and females are entirely different species to each other, helps to paint the idea of transitioning in a more nonsensical light.
I hope this thread has presented this issue in a better light for anyone reading it. Let me know if you have any thoughts/criticisms of my stances or the ways I went about this issue.
submitted by Rosa_Rojacr to samharris [link] [comments]

The internals of Android APK build process - Article

Table of Contents

  • CPU Architecture and the need for Virtual Machine
  • Understanding the Java Virtual Machine
  • Compiling the Source Code
  • Android Virtual Machine
  • Compilation Process to .dex
  • ART over Dalvik
  • Understanding each part of the build process.
  • Source Code
  • Resource Files
  • AIDL Files
  • Library Modules
  • AAR Libraries
  • JAR Libraries
  • Android Asset Packaging Tool
  • resources.arsc
  • D8 and R8
  • Dex and Multidex
  • Signing the APK
  • References
This blog post aims to be a starting point for developers to get familiar with the Android APK build process: the flow of the build, the execution environment, and code compilation.

CPU Architecture and the need for Virtual Machine

Unveiled in 2007, Android has undergone lots of changes related to its build process, the execution environment, and performance improvements.
There are many fascinating characteristics in Android, and one of them is its support for different CPU architectures, like ARM64 and x86.
It is not realistic to compile code that supports each and every architecture. This is where the Java Virtual Machine comes in.
https://preview.redd.it/91nrrk3twxk51.png?width=1280&format=png&auto=webp&s=a95b8cf916f000e94c6493a5780d9244e8d27517

Understanding the Java Virtual Machine

The JVM is a virtual machine that enables a computer to run applications that are compiled to Java bytecode. It basically handles converting the compiled Java bytecode to machine code.
By using the JVM, the issue of dealing with different types of CPU architecture is resolved.
JVM provides portability and it also allows Java code to be executed in a virtual environment rather than directly on the underlying hardware.
But the JVM is designed for systems with ample storage and power, whereas Android devices have comparatively little memory and battery capacity.
For this reason, Google adopted a virtual machine purpose-built for Android, called Dalvik.

https://preview.redd.it/up2os7juwxk51.png?width=1280&format=png&auto=webp&s=2a290bdc9be86fb08d67228c730329130da3bc63

Compiling the Source Code

Our Java source code for the Android app is compiled into .class files of bytecode by the javac compiler; this bytecode can be executed on the JVM.
For Kotlin source code targeting the JVM, the kotlinc compiler produces Java-compatible bytecode.
Bytecode, in general, is a form of instruction set designed for efficient execution by a software interpreter; Java bytecode, specifically, is the instruction set of the Java Virtual Machine.
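As a rough illustration of this step, here is a minimal sketch driven from Python; it assumes javac and kotlinc are installed and on PATH, and the source file names are hypothetical:

    # Minimal sketch of the source-compilation step; assumes the JDK's javac
    # and the Kotlin command-line compiler kotlinc are on PATH.
    # "Main.java" and "Main.kt" are hypothetical source files.
    import subprocess

    # Java source -> JVM .class bytecode
    subprocess.run(["javac", "Main.java"], check=True)

    # Kotlin source -> Java-compatible .class bytecode (same .class format)
    subprocess.run(["kotlinc", "Main.kt"], check=True)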

https://preview.redd.it/w2uzoicwwxk51.png?width=1280&format=png&auto=webp&s=b122e0781bf9e9ba236d34a87a636c9218f7ea35

Android Virtual Machine

Each Android app runs on its own virtual machine. From version 1.0 to 4.4, it was 'Dalvik'. In Android 4.4, along with Dalvik, Google experimentally introduced a new Android Runtime called 'ART'.
Android users had the option to choose either Dalvik or ART runtime in Android 4.4.
The generated .class files contain JVM Java bytecodes.
But Android, from version 1.0 to 4.4, had its own optimized bytecode format called Dalvik. Dalvik bytecodes, like JVM bytecodes, are low-level instructions for a (virtual) processor.

https://preview.redd.it/sqychk81xxk51.png?width=217&format=png&auto=webp&s=49445fa42e4aa6f4008114a822f364580649fcdf

Compilation Process to .dex

The compilation process converts the .class files and .jar libraries into a single classes.dex file containing Dalvik byte-codes. This is possible with the dx command.
The dx command packages all of the .class and .jar files together into a single classes.dex file written in Dalvik bytecode format.
To note, dex means Dalvik Executable.
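For the curious, the legacy invocation looked roughly like this (a sketch, assuming the dx tool from the SDK build-tools is on PATH; the class-file directory is hypothetical):

    # Minimal sketch of the legacy dx step; newer toolchains use d8 instead.
    import subprocess

    # Package a directory of .class files into a single classes.dex
    subprocess.run(
        ["dx", "--dex", "--output=classes.dex", "build/classes/"],
        check=True,
    )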
https://preview.redd.it/g4z3tb95xxk51.jpg?width=831&format=pjpg&auto=webp&s=1cdbaacaf10cc529cccca2ba016583596781ee88

ART over Dalvik

Since Android 4.4, Android has migrated from Dalvik to ART, the Android Runtime. This execution environment executes .dex files as well.
The benefit of ART over Dalvik is that apps run and launch faster: DEX bytecode is translated into machine code during installation, so no extra time is needed to compile it at runtime.
ART and Dalvik are compatible runtimes running Dex bytecode, so apps developed for Dalvik should work when running with ART.
The JIT-based compilation in the previously used Dalvik came with the disadvantages of poor battery life and application lag.
This is the reason Google created the Android Runtime (ART).
ART is based on an Ahead-Of-Time (AOT) compilation process, where compilation happens before the application starts.
In ART, the compilation happens during the app installation process itself. Even though this leads to a longer installation time, it reduces app lag and increases battery efficiency.
Even though Dalvik was replaced as the default runtime, the Dalvik bytecode format is still in use (.dex).
In Android 7.0, JIT came back: a hybrid environment combining a JIT compiler with ART's AOT compilation was introduced.
The bytecode execution environment of Android is important as it is involved in the application startup and installation process.
https://preview.redd.it/qh9bxsplzxk51.png?width=1280&format=png&auto=webp&s=bc40ba6c69cec2110b7d695fe23df094bf5aea6c

Understanding each part of the build process.


https://preview.redd.it/obelgd7axxk51.png?width=950&format=png&auto=webp&s=299abcf4798ad4d2de93f4eb18b9d9e70dd54297

Source Code

Source code is the Java and Kotlin files in the src folder.

Resource Files

The resource files are the ones in the res folder.

AIDL Files

Android Interface Definition Language (AIDL) allows you to define the programming interface that a client and a service agree upon in order to communicate using IPC.
IPC is interprocess communication.
AIDL can be used between any process in Android.

Library Modules

A library module contains Java or Kotlin classes, Android components, and resources, though assets are not supported.
The code and resources of the library project are compiled and packaged together with the application.
Therefore a library module can be considered to be a compile-time artifact.

AAR Libraries

An Android library compiles into an Android Archive (AAR) file that you can use as a dependency for an Android app module.
AAR files can contain Android resources and a manifest file, which allows you to bundle in shared resources like layouts and drawables in addition to Java or Kotlin classes and methods.

JAR Libraries

A JAR is a Java library; unlike an AAR, it cannot contain Android resources or a manifest.

Android Asset Packaging Tool

Android Asset Packaging Tool (aapt2) compiles the AndroidManifest and resource files into a single APK.
The work is divided into two steps, compiling and linking. This improves performance: if only one file changes, you only need to recompile that one file and link all the intermediate files with the 'link' command.
AAPT2 supports the compilation of all Android resource types, such as drawables and XML files.
When you invoke AAPT2 for compilation, you should pass a single resource file as an input per invocation.
AAPT2 then parses the file and generates an intermediate binary file with a .flat extension.
The link phase merges all the intermediate files generated in the compile phase and outputs one .apk file. You can also generate R.java and proguard-rules at this time.
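As a sketch of this two-step flow (assuming aapt2 from the SDK build-tools is on PATH; the file and directory names are hypothetical, and android.jar must point at a platform jar from the SDK):

    # Minimal sketch of aapt2's compile and link phases.
    import subprocess

    # Compile: one resource file in, one intermediate .flat file out
    subprocess.run(
        ["aapt2", "compile", "res/layout/activity_main.xml", "-o", "compiled/"],
        check=True,
    )

    # Link: merge the intermediates into one resources-only APK,
    # generating R.java into gen/ along the way
    subprocess.run(
        ["aapt2", "link",
         "-o", "app-resources.apk",
         "-I", "android.jar",
         "--manifest", "AndroidManifest.xml",
         "--java", "gen/",
         "compiled/layout_activity_main.xml.flat"],
        check=True,
    )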

resources.arsc

The .apk file output at this stage does not yet include the DEX file, and since it is also not signed, it is an APK that cannot be executed.
This APK contains the AndroidManifest, binary XML files, and resources.arsc.
The resources.arsc file contains all the meta-information about the resources, such as an index of all resources in the package.
It is a binary file that is kept uncompressed inside the APK, so at run time it can be used simply by mapping it into memory.
Each resource in the R.java file that is output alongside the APK is assigned a unique ID, which allows the Java code to refer to resources at compile time.
The .arsc file is then the index used to look up resources when the application executes.

https://preview.redd.it/hmmlfwhdxxk51.png?width=1280&format=png&auto=webp&s=b2fe2b6ad998594a5364bb6af6b5cbd880a2452c

D8 and R8

Starting from Android Studio 3.1 onwards, D8 was made the default dex compiler.
D8 produces smaller dex files with better performance when compared with the old dx.
R8 is used to shrink and compile the code; it is essentially an extended, optimizing version of D8.
D8 plays the role of dexer, converting class files into DEX files, and the role of desugarer, converting Java 8 features into bytecode that can be executed by Android.
R8 further optimizes the dex bytecode. R8 provides features like optimization, obfuscation, and removal of unused classes.
Obfuscation reduces the size of your app by shortening the names of classes, methods, and fields.
Obfuscation has the side benefit of hindering easy reverse engineering, but the goal here is to reduce size.
Optimization reduces the DEX file size by rewriting unnecessary parts and inlining.
Desugaring lets us use the convenient language features of Java 8 on older devices.
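In practice R8 runs inside the Gradle build, but D8 can be invoked by hand; here is a minimal sketch (assuming d8 from the SDK build-tools is on PATH, and that the class-file and output paths are hypothetical):

    # Minimal sketch of dexing .class files with D8.
    import glob
    import subprocess

    class_files = glob.glob("build/classes/**/*.class", recursive=True)
    subprocess.run(
        ["d8", *class_files,
         "--lib", "android.jar",   # platform APIs, needed for desugaring
         "--release",              # optimize; drop debug-oriented info
         "--output", "out/"],      # produces out/classes.dex
        check=True,
    )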
https://preview.redd.it/so424bxwxxk51.png?width=1280&format=png&auto=webp&s=0ad2df5bd194ec770d453f620aae9556e14ed017

Dex and Multidex

R8 outputs one DEX file called classes.dex.
If you are using Multidex that is not the case: multiple DEX files will appear, though the first is still called classes.dex.
If the number of methods in the application, including referenced libraries, exceeds 65,536, a build error will occur.
The method ID range is 0 to 0xFFFF.
In other words, a single DEX file can only refer to 65,536 methods, numbered 0 to 65,535.
This is the cause of the build error that occurs above the 64K limit.
To avoid it, review the application's dependencies and use R8 to remove unused code, or use Multidex.
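The arithmetic behind the 64K figure is just the size of the 16-bit method-ID field:

    # Method IDs in a single DEX file are 16-bit: 0x0000 through 0xFFFF.
    max_method_id = 0xFFFF
    print(max_method_id)      # 65535 -> the highest ID
    print(max_method_id + 1)  # 65536 -> total number of referable methods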

https://preview.redd.it/kjyychmzxxk51.png?width=1261&format=png&auto=webp&s=18bea3bf9f7920a4701c2db9714dc53ae6cc5f82

Signing the APK

All APKs require a digital signature before they can be installed or updated on your device.
For debug builds, Android Studio automatically signs the app using the debug certificate generated by the Android SDK tools when we run the app.
A debug Keystore and a debug certificate are automatically created.
For release builds, you need a Keystore and a key to build a signed app. You can either make an APK file with apkbuilder and optimize it with zipalign on the command line, or have Android Studio handle it for you with the 'Generate Signed APK' option.
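A minimal sketch of the manual route (assuming zipalign and apksigner from the SDK build-tools are on PATH; the keystore and APK names are hypothetical, and apksigner will prompt for the keystore password):

    # Minimal sketch of aligning and signing a release APK by hand.
    import subprocess

    # Align uncompressed entries to 4-byte boundaries (before apksigner)
    subprocess.run(
        ["zipalign", "-v", "4", "app-unsigned.apk", "app-aligned.apk"],
        check=True,
    )

    # Sign with the release keystore
    subprocess.run(
        ["apksigner", "sign", "--ks", "release.keystore",
         "--out", "app-release.apk", "app-aligned.apk"],
        check=True,
    )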

https://preview.redd.it/10m8rjl0yxk51.png?width=1468&format=png&auto=webp&s=078c4ab3f41c7d08e7c2280555ef2038cc04c5b0

References

https://developer.android.com/studio/build
https://github.com/dogriffiths/HeadFirstAndroid/wiki/How-Android-Apps-are-Built-and-Run
https://logmi.jp/tech/articles/322851
https://android-developers.googleblog.com/2017/08/next-generation-dex-compiler-now-in.html
https://speakerdeck.com/devpicon/uncovering-the-magic-behind-android-builds-droidfestival-2018
by androiddevnotes on GitHub
🐣
submitted by jiayounokim to androiddev [link] [comments]

SNAP preview, expected move and spread strike selection

SNAP preview, expected move and spread strike selection
  • Snap (SNAP) reports q2 earnings after the close Tuesday (~4:10pm)
  • Options are pricing an expected move of 12% by this Friday. That is the bulk of the move expected over the next month, which is about 15%.
  • Snap closed higher by about 36% in the day following its most recent earnings (in April)
  • Snap has beaten consensus estimates 7 out of the last 8 times.

https://preview.redd.it/d5e4r4ho28c51.png?width=583&format=png&auto=webp&s=9790a1e378ed329c5d66910d814093260814f70d
Neutral - The first thing to look at is a neutral position, selling to both the bulls and the bears. Here are two neutral trades setting breakevens at or near the expected move. First, selling the +21.5/-22.5/-28/+29 Iron Condor (condor chart).
In this case the risk reward is $56 to make $44. If the stock closes anywhere between 22.5 and 28 on Friday it is a max gain. Any close beyond 21.5 or 29 and a max loss. The breakeven is 22.06 on the downside and 28.44 on the upside.
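To sanity-check those numbers, here's a quick payoff sketch in Python; the $0.44 net credit is an assumption implied by the stated $44 max gain against $56 risk on the $1-wide wings:

    # Payoff at expiration for the +21.5/-22.5/-28/+29 iron condor.
    def iron_condor_pnl(price, credit=0.44):
        payoff = (
            max(21.5 - price, 0.0)    # long 21.5 put
            - max(22.5 - price, 0.0)  # short 22.5 put
            - max(price - 28.0, 0.0)  # short 28 call
            + max(price - 29.0, 0.0)  # long 29 call
        )
        return credit + payoff

    for p in (21.0, 22.06, 25.0, 28.44, 30.0):
        print(p, round(iron_condor_pnl(p), 2))
    # 21.0 -0.56 (max loss), 22.06 0.0 (breakeven), 25.0 0.44 (max gain),
    # 28.44 0.0 (breakeven), 30.0 -0.56 (max loss)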
That trade establishes a range of max profit. For those targeting no move at all, with the stock remaining at 25, selling an Iron Butterfly has max profit at the 25 level, with profits trailing off towards the expected move and losses beyond: Fly chart
Both of these trades are binary, isolating this week and what is likely to be a mostly one day move tomorrow.
Bullish - For those thinking directionally the expected move can be used to help determine strike selection. Here's a bullish price target looking out a bit further in time, to August expiration: Trade comparison.
In this case both the August long call spread (+25/-29) and the August short put spread (-25/+21) take advantage of multi-leg strike selection based on the expected move. The short put spread is "selling to the bears" and is profitable from 22.57 and higher, with a max gain if the stock is above 25 on August expiration. The long call spread has a higher breakeven, but by selling the 29 call at a high upside volatility, it is much cheaper than an outright 25 call.
Bearish - The same is true for a bearish target in line with the expected move, but the short call spread is at a slight disadvantage due to having to buy the upside call at a similar or higher IV than the at-the-money call sale: Trade comparison
Full post here.
submitted by cclagator to options [link] [comments]

The motion to delay and revise the re-entry plan is not a binary issue. Don't buy the politics.

TLDR:
People are being goaded into taking sides on whether or not we should open schools. This is not the issue at hand when considering the motions made by Mr. Shurr at the most recent Special Session of the school board. The issue at hand is whether the current plan (Published June 30) is the most inventive solution we can offer that minimizes the risk of lifelong disability and/or death for the students and, more immediately, the many high-risk adults who work in the public schools around the country. If you’ve ever been in an American workplace, you know that leaders (especially exhausted ones) can find running out the clock on a decision period more desirable than engaging in critical discussion. With stakes as high as they are, the motions are meant to ensure this does not happen with our public schools.
Here is a link to the most critical 20 minutes of the Special Session of the school board meeting from Tuesday, July 21st.
The Details:
This is a throwaway account, and an attempt at a complete statement of my opinion. This does not reflect anyone’s opinion but my own based on public information. Feel free to share any and all of this if you’d like. I don’t plan to respond to comments or DMs.
A considerable number of parents, students, and teachers (many of whom are at high risk of contracting COVID-19 or live with an elder who is) feel the traditional school model poses too much risk and that we need to pause and revise the plan. Fairly, many people who need the child care/specialized services provided by the schools have voiced their frustration and unwillingness to support such a measure because they believe this must mean that schools will be closed for an extended period of time. This fabricated binary allows an outdated plan to look preferable to pausing and revising because:
All this said, the parents who need their students in school are justified in their attitudes and arguments.
At one of the large high schools
At the other,
Even if you ignore the certain occurrence of some crossover within these categories, this is still less than half of the total student population. A number of these students may still choose to stay home with the online option. Similarly, there are probably students who do not fall into these categories but still need to come to school sometimes for some reason or another. Either way, this suggests that there is an opportunity to serve a MUCH smaller number of students in the building and reduce risk to everyone involved.
I will admit this would be harder to organize at the elementary level, where districting decisions have left some schools in a more difficult situation than others in terms of student needs because some schools have:
Perhaps the lesser risk in general at the elementary level doesn’t demand an alternative-to-traditional model. Perhaps identifying students who need to be in a learning center and finding a way to get them to a less crowded school should be part of the conversation. Regardless, I imagine that some schools will already be operating at a much lower capacity than normal due to the online option while others will be close to full.
All of this should show that this is:
This is what pause and revise is really about. While I’m sure people are exhausted, I refuse to believe that Bloomington has exhausted its creativity, resources, and inventiveness on the current plan especially in light of the totally changed context. If you agree that we can do better, please reach out to the Board of School Trustees and the Monroe County Health Department in time for the last meeting before the school year. The meeting is on Tuesday, July 28.
[[email protected]](mailto:[email protected])
[[email protected]](mailto:[email protected])
[[email protected]](mailto:[email protected])
[[email protected]](mailto:[email protected])
[[email protected]](mailto:[email protected])
[[email protected]](mailto:[email protected])
[[email protected]](mailto:[email protected])
[[email protected]](mailto:[email protected])
[[email protected]](mailto:[email protected])
I have edited for the sake of clarity and organization. I wish I could edit the title; u/Smease1 made a great point below.
submitted by Dry-Consequence-539 to bloomington [link] [comments]

The council has voted 14-1 to keep the STV voting system. Opponents may still seek to force a poll (referendum) on a switch to FPP.

Along with another, older gentleman, I made remarks to the DCC public forum (at the start of the meeting members of the public can speak to the council). Here is a written version of what I said:
As you’ll know, there are efforts by some in the community to push for a change to the voting system from the current “single transferable vote” system, back to the traditional “first past the post” voting system. I want to express my concern that, if these efforts were successful, they would result in less competitive elections and a council that was less representative, less diverse, and less legitimate. I want to ask you to reject any change.
A lot of public debate on this topic centres on mayoral elections, particularly on the scenario that occurred at the most recent election where a candidate who had fewer first-preference votes won the election due to being strongly favoured by second and third (etc) preferences. Some people have a bit of a problem with that, but it’s the system working exactly as it’s supposed to. Regardless, I want to move on from that.
The place where the STV debate gets really important is in the Council election. There are a number of criticisms made of the STV, but the most important and serious is that it’s “too complicated”. In response to that:
  1. The demand on voters, to rank the candidates 1 2 3 etc, is only very marginally more complicated than the demand of choosing 3 or 4 or however many candidates in an FPP election
  2. Although the way votes are counted is technically quite sophisticated when there’s more than one winner, the principle is simple: your single vote starts at your first preference and flows down through your other preferences to prevent it being wasted. i.e. it’s a Single. Transferable. Vote. It’s an accurate name. (A simplified sketch of this transfer principle follows below.)
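(Not part of the speech, but for readers who want to see the transfer principle concretely, here is a minimal single-winner sketch; real multi-winner STV adds a quota and surplus transfers on top of this.)

    # Each ballot is a preference ranking; a vote flows down to its next
    # surviving preference whenever its current choice is eliminated.
    from collections import Counter

    def transferable_vote(ballots):
        eliminated = set()
        while True:
            tally = Counter()
            for ranking in ballots:
                for candidate in ranking:
                    if candidate not in eliminated:
                        tally[candidate] += 1  # top surviving preference
                        break
            leader, votes = tally.most_common(1)[0]
            if votes * 2 > sum(tally.values()):
                return leader  # majority reached
            eliminated.add(min(tally, key=tally.get))  # drop the weakest

    ballots = [("A", "C"), ("A", "C"), ("B", "C"), ("B", "C"), ("C", "B")]
    print(transferable_vote(ballots))  # "B": C is eliminated, that vote transfers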
For these relatively small disadvantages, you gain enormous advantages:
Firstly, proportional representation. I cannot overstate the importance of proportional representation. In a multi-winner ward, FPP becomes the “block vote” and I don’t think many appreciate what an awful voting system this is. If you’ve got some time, have a look through the election results of a place with FPP, like Auckland, and look at how many wards and local boards are total wipe-outs.
For example, the first local board in the results goes like this. First four get elected.
1st: C & R: 6223
2nd: C & R: 5910
3rd: C & R: 5639
4th: C & R: 5439
5th: City Vision: 5318
6th: City Vision: 5059
7th: City Vision: 4888
8th: City Vision: 4477
This is not representative of what the voters wanted. This was a competitive election yet one faction gets all four seats. The next election on the list is the same, except the two sides are reversed.
Decision-makers should look like the communities they represent, and within our framework of representative democracy, systems of proportional representation achieve this. STV is the only viable option within the current legislative framework.
Secondly, voter choice. Under STV you can vote for who you want without having to second-guess who the “viable” candidates are. And, even more important, anyone can run for office without being a spoiler.
In conclusion: sure, voting is a bit easier in FPP. But there’s no choice on who to vote for, or a binary choice at best. If you’re not part of the majority, your vote gets wiped out. If you’re part of the majority, your vote counts two, three times more than it should. If you think the ballot paper at the last election was too much, and I can absolutely accept it was a burden and I take that seriously, look at alternative representation arrangements. We could have two wards of seven, or three wards of five. It’d be a bit clumsy but it could be made to work.
That’s a conversation for another day. But for today, if you think every vote should count, please support STV.
submitted by mrjack2 to dunedin [link] [comments]

DnD is an Adventure, not a Scripted Task. Don't Noir DM.

Once I was DMing a game, and our characters encountered a Grick. One player rolled 1 under its AC on an attack roll and I said that they hadn't harmed the grick, but had chipped a scale off and left an opening. The players were very confused, as if they hadn't heard this before. One then piped up.
"No... Shouldn't it just miss?"
I realize that lots of DMs run DnD like binary code: you identify it or you don't, you hit or you miss, you resist the poison or succumb to it. But depending on what the characters roll, you can shade the outcome: they partly identify the object, or they partly resist the poison as it slowly takes over, making another saving throw later but maybe with disadvantage. Maybe your sword gets stuck in the wall when the doppelganger ducks your swing on a 1.

Remember that shields, armour and weapons aren't indestructible and may break. If an attack misses a rogue, describe him dodging it; if your barbarian nearly gets hit, but the roll lands within the last two numbers, maybe his shield gets hit instead. Don't always play by the direct rules: if a character is drunk, then maybe he is doing more than just having the "poisoned" condition.

If you run in this kind of, I'll call it "noir DMing" (for black and white), style, then the game starts to seem less like a righteous adventure and more like a monotone task where the party just follows a script. Characters should always have every option available, unless said option is erotic or random, like kissing a party member or plunging their sword into their own heart. Avoiding noir DMing will improve your games and make them feel more real.
submitted by Boycott_Goat_Milk to DMAcademy [link] [comments]

The internals of Android APK build process - Article

The internals of Android APK build process - Article

Table of Contents

  • CPU Architecture and the need for Virtual Machine
  • Understanding the Java Virtual Machine
  • Compiling the Source Code
  • Android Virtual Machine
  • Compilation Process to .dex
  • ART over Dalvik
  • Understanding each part of the build process.
  • Source Code
  • Resource Files
  • AIDL Files
  • Library Modules
  • AAR Libraries
  • JAR Libraries
  • Android Asset Packaging Tool
  • resources.arsc
  • D8 and R8
  • Dex and Multidex
  • Signing the APK
  • References
Understanding the flow of the Android APK build process, the execution environment, and code compilation blog post aims to be the starting point for developers to get familiar with the build process of Android APK.

CPU Architecture and the need for Virtual Machine

Unveiled in 2007, Android has undergone lots of changes related to its build process, the execution environment, and performance improvements.
There are many fascinating characteristics in Android and one of them is different CPU architectures like ARM64 and x86
It is not realistic to compile code that supports each and every architecture. This is where Java Virtual Machine is used.
https://preview.redd.it/3pg7jk1tyxk51.png?width=1280&format=png&auto=webp&s=e092e5cb2c97bbb1db34c520d52adae3c354b755

Understanding the Java Virtual Machine

JVM is a virtual machine that enables a computer to run applications that are compiled to Java bytecode. It basically helps us in converting the compiled java code to machine code.
By using the JVM, the issue of dealing with different types of CPU architecture is resolved.
JVM provides portability and it also allows Java code to be executed in a virtual environment rather than directly on the underlying hardware.
But JVM is designed for systems with huge storages and power, whereas Android has comparatively low memory and battery capacity.
For this reason, Google has adopted an Android JVM called Dalvik.

https://preview.redd.it/g978rd1uyxk51.png?width=1280&format=png&auto=webp&s=8b80419343fbab43546c43b06575e8977fbf31d0

Compiling the Source Code

Our Java source code for the Android app is compiled into a .class file bytecode by the javac compiler and executed on the JVM.
For Kotlin source code, when targeting JVM, Kotlin produces Java-compatible bytecode, thanks to kotlinc compiler.
To understand bytecode, it is a form of instruction set designed for efficient execution by a software interpreter.
Whereas Java bytecode is the instruction set of the Java virtual machine.

https://preview.redd.it/0oeon8evyxk51.png?width=1280&format=png&auto=webp&s=32dd4ac3eaf7b25c7d794859ed8db74166382f37

Android Virtual Machine

Each Android app runs on its own virtual machine. From version 1.0 to 4.4, it was 'Dalvik'. In Android 4.4, along with Dalvik, Google experimentally introduced a new Android Runtime called 'ART'.
Android users had the option to choose either Dalvik or ART runtime in Android 4.4.
The .class files generated contains the JVM Java bytecodes.
But Android has its own optimized bytecode format called Dalvik from version 1.0 to 4.4. Dalvik bytecodes, like JVM bytecodes, are machine-code instructions for a processor.

https://preview.redd.it/q7z1rv30zxk51.png?width=217&format=png&auto=webp&s=5717b932dfea7fc32dbcef4c02b1d6cc71713458

Compilation Process to .dex

The compilation process converts the .class files and .jar libraries into a single classes.dex file containing Dalvik byte-codes. This is possible with the dx command.
The dx command turns all of the .class and .jar files together into a single classes.dex file is written in Dalvik bytecode format.
To note, dex means Dalvik Executable.

https://preview.redd.it/ae8hfcd1zxk51.jpg?width=831&format=pjpg&auto=webp&s=7d83a666bb0dd5f6fdb0abdf068407ec465f7924

ART over Dalvik

Since Android 4.4, Android migrated to ART, the Android runtime from Dalvik. This execution environment executes .dex as well.
The benefit of ART over Dalvik is that the app runs and launches faster on ART, this is because DEX bytecode has been translated into machine code during installation, no extra time is needed to compile it during the runtime.
ART and Dalvik are compatible runtimes running Dex bytecode, so apps developed for Dalvik should work when running with ART.
The JIT based compilation in the previously used Dalvik has disadvantages of poor battery life, application lag, and performance.
This is the reason Google created Android Runtime(ART).
ART is based on Ahead - Of - Time (AOT) based compilation process where compilation happens before application starts.
In ART, the compilation process happens during the app installation process itself. Even though this leads to higher app installation time, it reduces app lag, increases battery usage efficiency, etc.
Even though dalvik was replaced as the default runtime, dalvik bytecode format is still in use (.dex)
In Android version 7.0, JIT came back. The hybrid environment combining features from both a JIT compiler and ART was introduced.
The bytecode execution environment of Android is important as it is involved in the application startup and installation process.
https://preview.redd.it/ps7vxwk4zxk51.png?width=1280&format=png&auto=webp&s=0545f928490301650618d9e953b845fd02aadeea

Understanding each part of the process.


https://preview.redd.it/cn5i7c86zxk51.png?width=950&format=png&auto=webp&s=403dfbf470cacee4da1521af6a287319493fff9b

Source Code

Source code is the Java and Kotlin files in the src folder.

Resource Files

The resource files are the ones in the res folder.

AIDL Files

Android Interface Definition Language (AIDL) allows you to define the programming interface for client and service to communicate using IPC.
IPC is interprocess communication.
AIDL can be used between any process in Android.

Library Modules

Library module contains Java or Kotlin classes, Android components, and resources though assets are not supported.
The code and resources of the library project are compiled and packaged together with the application.
Therefore a library module can be considered to be a compile-time artifact.

AAR Libraries

Android library compiles into an Android Archive (AAR) file that you can use as a dependency for an Android app module.
AAR files can contain Android resources and a manifest file, which allows you to bundle in shared resources like layouts and drawables in addition to Java or Kotlin classes and methods.

JAR Libraries

JAR is a Java library and unlike AAR it cannot contain Android resources and manifests.

Android Asset Packaging Tool

Android Asset Packaging Tool (aapt2) compiles the AndroidManifest and resource files into a single APK.
At this point, it is divided into two steps, compiling and linking. It improves performance, since if only one file changes, you only need to recompile that one file and link all the intermediate files with the 'link' command.
AAPT2 supports the compilation of all Android resource types, such as drawables and XML files.
When you invoke AAPT2 for compilation, you should pass a single resource file as an input per invocation.
AAPT2 then parses the file and generates an intermediate binary file with a .flat extension.
The link phase merges all the intermediate files generated in the compile phase and outputs one .apk file. You can also generate R.java and proguard-rules at this time.

resources.arsc

The output .apk file does not include the DEX file, so the DEX file is not included, and since it is not signed, it is an APK that cannot be executed.
This APK contains the AndroidManifest, binary XML files, and resources.arsc.
This resource.arsc contains all meta-information about a resource, such as an index of all resources in the package.
It is a binary file, and the APK that can be actually executed, and the APK that you often build and execute are uncompressed and can be used simply by expanding it in memory.
The R.java that is output with the APK is assigned a unique ID, which allows the Java code to use the resource during compilation.
arsc is the index of the resource used when executing the application.

https://preview.redd.it/us89vl39zxk51.png?width=1280&format=png&auto=webp&s=f326c605c63156c0d0e4142bd198dd75d356894c

D8 and R8

Starting from android studio 3.1 onwards, D8 was made the default compiler.
D8 produces smaller dex files with better performance when compared with the old dx.
R8 is used to compile the code. R8 is an optimized version of D8.
D8 plays the role of dexer that converts class files into DEX files and the role of desugar that converts Java 8 functions into bytecode that can be executed by Android.
R8 further optimizes the dex bytecode. R8 provides features like optimization, obfuscation, remove unused classes.
Obfuscation reduces the size of your app by shortening the names of classes, methods, and fields.
Obfuscation has other benefits to prevent easy reverse engineering, but the goal is to reduce size.
Optimization reduces the DEX file size by rewriting unnecessary parts and inlining.
By doing Desugaring we can use the convenient language features of Java 8 in older devices.

https://preview.redd.it/w7w5ab8azxk51.png?width=1280&format=png&auto=webp&s=73f4a36d45bd3d56a35e3798915bc5430d4edb22

Dex and Multidex

R8 outputs one DEX file called classes.dex.
If you are using Multidex, that is not the case, but multiple DEX files will appear, but for the time being, classes.dex will be created.
If the number of application methods exceeds 65,536 including the referenced library, a build error will occur.
The method ID range is 0 to 0xFFFF.
In other words, you can only refer to 65,536, or 0 to 65,535 in terms of serial numbers.
This was the cause of the build error that occurred above 64K.
In order to avoid this, it is useful to review the dependency of the application and use R8 to remove unused code or use Multidex.

https://preview.redd.it/2wvuxikczxk51.png?width=1261&format=png&auto=webp&s=13df98583a67c9c9fee66ebc6d05c85fa44d3622

Signing the APK

All APKs require a digital signature before they can be installed or updated on your device.
For Debug builds, Android Studio automatically signs the app using the debug certificate generated by the Android SDK tools when we run.
A debug Keystore and a debug certificate is automatically created.
For release builds, you need a Keystore and upload the key to build a signed app. You can either make an APK file with apkbuilder and finally optimize with zipalign on cmd or have Android Studio handle it for you with the 'Generated Signed Apk option'.

https://preview.redd.it/6uw0kcidzxk51.png?width=1468&format=png&auto=webp&s=b7cce799d0f5caae4413a54b995c1f9090a62e9c

References

https://developer.android.com/studio/build
https://github.com/dogriffiths/HeadFirstAndroid/wiki/How-Android-Apps-are-Built-and-Run
https://logmi.jp/tech/articles/322851
https://android-developers.googleblog.com/2017/08/next-generation-dex-compiler-now-in.html
https://speakerdeck.com/devpicon/uncovering-the-magic-behind-android-builds-droidfestival-2018
by androiddevnotes on GitHub
submitted by jiayounokim to android_devs

Under-represented and overlooked: Māori and Pasifika scientists in Aotearoa New Zealand’s universities and crown-research institutes

https://www.tandfonline.com/doi/full/10.1080/03036758.2020.1796103

"Under-represented and overlooked: Māori and Pasifika scientists in Aotearoa New Zealand’s universities and crown-research institutes

Tara G. McAllister, Sereana Naepi, Elizabeth Wilson, Daniel Hikuroa & Leilani A. Walker

ABSTRACT

This article provides insights into the ethnicity of people employed in Aotearoa New Zealand’s publicly-funded scientific workforce, with a particular focus on Māori and Pasifika scientists. We show that between 2008 and 2018, Māori and Pasifika scientists were severely under-represented in Aotearoa New Zealand’s universities and crown-research institutes. Despite espousals by these institutions of valuing diversity, te Tiriti o Waitangi and Māori research, there has been very little change in the overall percentage of Māori and Pasifika scientists employed over a period of at least 11 years. Notably, one university reported having not employed a single Māori or Pasifika academic in their science department from 2008 to 2018. We highlight the urgent need for institutions to improve how they collect and disseminate data that speaks to the diversity of their employees. We present data that illustrate that universities and crown-research institutes are failing to build a sustainable Māori and Pasifika scientific workforce and that these institutions need to begin to recruit, retain and promote Māori and Pasifika scientists.

Introduction

In 2018, Dr Megan Woods (Minister of Research, Science and Innovation) launched the Ministry of Business, Innovation and Employment’s (MBIE) diversity in science statement, which states that ‘Diversity is vital for our science system to realise its full potential’ (MBIE 2018). Whilst this statement is a step towards raising awareness of the importance of diversity in science it needs to be followed by institutional changes, targeted programmes and directed responses from institutions. A vital component of achieving the aspirations espoused in this statement includes open reporting on diversity of ‘applicants, award holders, and advisory, assessment and decision making bodies’ (MBIE 2018). In two recent papers, McAllister et al. (2019) and Naepi (2019) spoke to the lack of diversity in Aotearoa New Zealand’s eight universities and provided evidence of the severe under-representation of Māori and Pasifika scholars, who comprise 16.5% and 7.5% respectively of the total population of Aotearoa. The authors showed that Māori and Pasifika comprise 4.8% and 1.7% respectively of academics, despite the espousals by universities of valuing diversity and their obligations to equity as outlined in te Tiriti o Waitangi (McAllister et al. 2019; Naepi 2019). The data used in these two studies, obtained from the Ministry of Education (MoE), provided information on the ethnicity of academic staff university-wide and was not disaggregated by faculty. Consequently, data on the number of Māori and Pasifika academics in each faculty or department is currently not openly available. Previous research has shown that very few Māori academics exist outside of Māori departments and it remains difficult to access quantitative data on their lived experience as universities continue to silence reports (Kidman et al. 2015; UoO date unknown).
To ensure that the aspirations championed within MBIE’s diversity statement can be met, we first need open and accurate reporting on the diversity of people employed within Aotearoa New Zealand’s scientific workforce, and there is currently a significant gap in openly available data that investigates this. Some annual reports and equity profiles of crown-research institutes (CRIs) and universities do contain selected ethnicity data (i.e. MWLR 2018; UoA 2018). However, these reports do not always present data in a meaningful and consistent way and are not always publicly available. For example, the University of Otago’s annual report does not contain any information on the ethnicity of staff and instead focuses only on the gender of staff and the ethnicity of students (UoO 2018). Instead, the ethnicity data for staff is presented in the equity report, which is only available to staff, and access must be requested from the Head of Organisational Development (UoO date unknown).
A survey of Aotearoa New Zealand’s scientists and technologists in 2008 provides the most recent quantitative indication of the diversity of Aotearoa New Zealand’s scientific workforce, despite being conducted 12 years ago (Sommer 2010). The author indicated that there was very little change in ethnicity of Aotearoa New Zealand’s scientific workforce between the 1996 and 2008 surveys, with ‘European’ scientists making up 82.3% and 80.9% respectively (Sommer 2010). According to the author, there was a ‘modest increase’ in Māori scientists from 0.7% (1996) to 1.7% (2008) and this increase ‘represents a glimmer of success for those who have sought to develop policies to bring more Māori into the science and technology workforce’ (Sommer 2010, p. 10). However, an increase of 1% over a period of 15 years (i.e. an average increase of 0.07% per year) should be viewed as a significant failure. The percentage of Pasifika scientists also increased very slightly from 0.5% in 1996 to 0.6% in 2010 (Sommer 2010). McKinley (2002, p. 109) provided an insight into the extremely low numbers of Māori women employed by CRIs in 1998:
‘Of the 3,839 people employed by seven Crown Research Institutes (CRIs) in New Zealand, 57 women or approximately 1.5% of the total identified as Māori women. At the time these data were collected in 1998 there were no Māori women in management positions, two were categorised as scientists, 15 as science technicians, and 40 as ‘support’ staff that includes cafeteria staff, administration staff and cleaners’
The data presented by both McKinley (2002) and Sommer (2010) highlight the urgent need for institutions and government to move away from ‘business as usual’ and make a serious commitment to, firstly, collecting data on diversity and presenting it openly and transparently, and secondly, increasing the hiring, promotion and retention of Māori and Pasifika scientists.
The present paper aims to begin to address this gap in knowledge by collating data and investigating how diverse Aotearoa New Zealand’s scientific workforce is. An intersectional lens must be applied when thinking critically about diversity and equity; however, policies, actions and research often privilege gender (i.e. Bhopal and Henderson 2019; Brower and James 2020) over ethnicity whilst ignoring other intersectional identities that go beyond white, cis women. Here, we focus on the intersectional identities of Māori and Pasifika scientists, while acknowledging that people who have other intersectional identities, including those with disabilities, LGBTQIA, non-binary and women of colour, are likely to be disproportionately affected and disadvantaged within Aotearoa New Zealand’s science system, which, like universities, was arguably created by and made for white, cis men (Ahmed 2012; Osei-Kofi 2012; Naepi et al. 2017; Akenahew and Naepi 2015). This paper examines the current diversity of Aotearoa New Zealand’s scientific workforce, with a particular focus on Māori and Pasifika. We will address the following questions:
  1. How many Māori and Pasifika scientists are employed in Aotearoa New Zealand’s universities and CRIs?
  2. How has the percentage of Māori and Pasifika scientists in these institutions changed between 2008 and 2018?

Methods

Data collection

Data was requested from universities and CRIs by emailing key individuals within each organisation in 2019. Data from 2008 to 2018 on the percentage of scientists, relative to both the total headcount and the total number of full-time equivalents (FTEs), for each recorded ethnicity was requested from CRIs and universities. Both the nature of responses to this request and the time it took to receive a response varied among institutions. Responses ranged from an openness and willingness to contribute data to this project to hostility and racist remarks. Several institutions did not respond to multiple email requests. A subsequent email sent by a Principal Advisor from the Office of the Prime Minister’s Chief Science Advisor elicited a prompt response from all remaining institutions. After initial conversations with staff from HR departments and university management, it was agreed that all institutions would remain anonymous, and we believe that this contributed significantly to increasing the willingness of institutions to contribute data. Overall, data was obtained from 14 out of 15 of Aotearoa New Zealand’s universities and CRIs. At most of these institutions staff self-declare their ethnicities and can select multiple options; where data was provided for multiple ethnicities, we used the first reported ethnicity.

Data from universities

Seven out of eight universities contributed data directly to this project, whereas data for university B was extracted from annual reports. Ethnicity data in the form of FTEs and headcount data was provided by most universities. Māori and Pasifika academics are more likely to be employed on contracts of less than one FTE compared to Pākehā academics (unpublished data). We therefore present the percentage of FTEs of staff for each recorded ethnicity, rather than headcount data as it is likely to be a more accurate measure of diversity. Recorded ethnicity groups differed among some universities, mainly in the fact that some distinguished between ‘European’ and ‘NZ European/Pākehā’, whereas at others these two ethnicities were combined.
It is important to note that the data from universities presented in this paper includes academic staff and excludes research staff, including post-doctoral fellows and laboratory technicians. Data on the number of scientists employed at universities also only includes scientists employed in science departments (i.e. excludes Māori scientists in health departments). However, a recent paper published by Naepi et al. (2020) showed that in 2017, there were only 55 Māori and 20 Pasifika postdoctoral fellows across all faculties in all of Aotearoa New Zealand’s universities. The number of Māori and Pasifika postdoctoral fellows employed in science faculties is, therefore, likely to be very small. Academic staff includes other academic staff, senior tutors, tutors, tutorial assistants, lecturers, senior lecturers, associate professors and professors. Previous research has shown that a large proportion of Māori and Pasifika academics are employed as tutors and other academic staff rather than in permanent senior academic positions (see Naepi 2019), so this is also likely to be the case within science faculties.
Concerningly, two universities (university E and H) were unable to provide data for the requested 11-year period (i.e. from 2008 to 2018). Upon querying this with human resource (HR) departments, their reasons included but were not limited to the following:

Data from crown-research institutes

Data, in some shape or form, was obtained from six out of seven of Aotearoa New Zealand’s CRIs. Obtaining accurate and consistent temporal data from CRIs was, despite their willingness, much more difficult than from universities. The MoE requires certain ethnicity data from universities in a particular format (see MoE date unknown); however, the diversity of staff employed at Aotearoa New Zealand’s seven CRIs is currently not required by any external organisation. Most CRIs were unable to provide FTE data but were able to provide headcount data; consequently, we present headcount data in this report. Because the data from CRIs was highly variable, we were not prescriptive about how they defined a scientist, though at most institutions this included post-doctoral fellows and scientists.
Data on the percentage of Māori and Pasifika scientists employed from 2008 to 2018 could only be obtained from four out of seven of the CRIs. CRI F could only provide ethnicity for staff that were recent hires from 2016 to 2018, meaning we are unable to differentiate between science and non-science staff and data on staff employed prior to 2016 was unavailable. CRI E could only provide data for 2019, the year that we had asked for it, due to their HR system overwriting data and therefore having no historical record of staff ethnicity.
The ethnicity data from CRIs, with the exception of CRI B, can only be viewed as indicative due to inconsistencies in how CRIs collect data. Data from most institutions was therefore not conducive to any temporal or statistical analyses. For example, at CRI A over the 11-year period, the ethnicity categories offered to staff changed four times. Māori and Pasifika were consistently given as options, which provides some level of confidence in CRI A’s ethnicity data.

Results

Māori scientists employed in Aotearoa New Zealand’s universities

Before even considering the data presented below, we must acknowledge and highlight that science faculties within universities are generally not safe and inclusive environments for Māori and Pasifika academic staff. One reason is that being the only Indigenous person in a faculty puts that one person under extreme pressure to help colleagues, indigenise the curriculum and support Indigenous students while also advancing their own career (Mercier et al. 2011; Kidman et al. 2015). It is well established that the job satisfaction of Māori academics is influenced by their proximity to other Māori academics (Mercier et al. 2011; Kidman et al. 2015). The interdisciplinary work of Māori scientists also often does not align with what the academy and their Pākehā counterparts define as ‘science’, as many scholars have explored (see, for example, McKinley 2005; Mercier 2014; Hikuroa 2017). Consequently, of the few Māori scientists who exist and survive within academia, several are employed outside of science faculties (see, for example, Mercier 2014). These data are therefore likely to very slightly underestimate the number of Māori scientists within the academy. Furthermore, the present paper focuses on Māori and Pasifika scientists in science faculties, but there will also be Māori and Pasifika scientists in social science, humanities and health faculties, who will not be captured by the data reported below.
Māori are under-represented in science faculties at all of Aotearoa New Zealand’s eight universities (Table 1). University A had the highest level of representation, which may be attributed to the science faculty being combined with another discipline at this particular university (Table 1). From 2008 to 2018, University D never employed a Māori academic in its science faculty (Table 1). Māori comprised less than 5% of the total FTEs in science faculties at all other universities between 2008 and 2018; the averages were 4.3%, 1.4%, 1.6%, 3.7% and 0.6% at Universities B, C, E, F and H respectively (Table 1). Importantly, there was no significant difference between the percentage of Māori FTEs in 2008 and 2018 (paired t-test: t(10) = −0.24, p = 0.82), meaning that over 11 years there has been no improvement in Māori representation in science faculties (Table 1).

Table 1. The percentage of Māori and Pasifika full-time equivalents (FTEs) of academic staff in science faculties at each of Aotearoa New Zealand’s eight universities. University A and G both have a combined faculty (i.e. science and another discipline) whereas all other universities have separate faculties and data is solely for science faculties. University E was unable to provide FTE data prior to 2011 and university H was only able to provide data from 2015.


Māori scientists employed in Aotearoa New Zealand’s crown-research institutes

Promisingly, and in contrast with the pattern for Māori scientists at universities, the percentage of Māori scientists (i.e. of the total headcount) employed by CRIs increased from 2008 to 2018 at half (2/4) of the CRIs that were able to provide temporal data (Table 2). At CRI A, Māori comprised 1.8% of the scientists employed in 2008, and this steadily increased to 3.8% in 2018 (Table 2). Similarly, at CRI B the percentage of Māori scientists increased from 3.4% in 2008 to 7.8% in 2018 (Table 2). At CRI C, Māori have comprised between 0.01% and 0.03% of scientists employed over the 11-year period, and at CRI D the figure has varied between 0% and 0.6% (Table 2).

Table 2. The percentage of Māori and Pasifika scientists of the total headcount employed by each of Aotearoa New Zealand’s crown-research institutes. CRI E could only provide data for 2019 and CRI F only had data for new recruits from 2016–2018. CRI G did not contribute data to this research.

Certain CRIs are doing better than others. It is, however, important to note, particularly given CRIs’ outward espousals of commitment to and valuing of ‘Māori research’ and mātauranga (i.e. GNS 2018), that Māori remain under-represented at all CRIs in Aotearoa New Zealand, including CRI A and B (Table 2). Additionally, the fact that three out of seven CRIs could not provide sufficient data suggests that these institutions have a lot of work to do in collecting data on the diversity of the staff they employ.

Pasifika scientists employed in Aotearoa New Zealand’s universities and crown-research institutes

There is currently an absence of research into the experiences of Pasifika scientists in Aotearoa New Zealand’s science system. However, like Māori scientists, Pasifika scientists are likely to be marginalised and under-valued within the current science system. Pasifika scientists in both universities and CRIs are extremely under-represented (Tables 1 and 2). Notably, of the 11 institutions (universities and CRIs) that provided data, only three reported Pasifika representation exceeding 1% of either the total headcount or the total number of FTEs in more than one year (Tables 1 and 2). Four institutions (one university and three CRIs) reported having employed zero Pasifika scientists for 11 consecutive years (Tables 1 and 2). Importantly, there was no significant difference between the percentage of Pasifika FTEs in universities in 2008 and 2018 (paired t-test: t(8) = 0.36, p = 0.73), meaning that over 11 years there has been no improvement in Pasifika representation in science faculties (Table 2).
The patterns in the percentage of both Māori and Pasifika scientists employed at university G were very different from all other institutions (Table 1). Firstly, university G was the only university that in some years employed more Pasifika than Māori scientists (Table 1). In 2008, 7.4% of FTEs in the science faculty of university G belonged to Pasifika scientists, which was the highest recorded in all eight institutions over 11 years (Table 1). However, Pasifika scientists in this faculty had only 4.4 FTEs in 2008, meaning that 7.4% equated to five Pasifika staff (data not shown).

The diversity of scientists employed in science faculties in Aotearoa New Zealand’s universities

Between 2008 and 2018, the majority of academics in the Computing and Mathematical Sciences, Engineering and Science departments at university D were European, comprising between 58.7% and 85.2% of the total FTEs (Figure 1(A)). University D distinguishes between ‘European’ and ‘New Zealand European/Pākehā’, and the data presented in Figure 1(A) suggest that few academics in these departments identify with the latter group. This suggests that most academics employed within these departments are from overseas. In these departments (i.e. Computing and Mathematical Sciences, Engineering and Science) there was a consistent increase between 2008 and 2018 in the percentage of FTEs of Asian ethnicities (a 12.3% increase in Computing and Mathematical Sciences, 6.8% in Engineering, and 2.4% in Science; Figure 1(A)).
Figure 1. (A) The percentage of full-time equivalents (FTEs) for each recorded ethnicity in three science faculties at university D in 2008 and 2018 and (B) the percentage of Māori and Pasifika FTEs in those three faculties for academic staff from 2008–2018.
Note: In both the Engineering and Science departments there were no Māori or Pasifika employed between 2008 and 2018.
The data provided by university D clearly illustrates a severe lack of Māori and Pasifika academic staff representation in sciences faculties (Figure 1(B)). It shows that in two of the three departments, there have never been any Māori academics employed (Figure 1(B)). Furthermore, in those three departments no Pasifika academic staff have been employed in 11 years (2008–2018). Māori academics have comprised 4.1%–7.5% of the total FTEs in the Computing and Mathematical Science department (Figure 1).
NZ European/Pākehā formed the majority (52.8%–63.6%) of academic staff employed in the science faculty of university B, and this percentage decreased by 11.8% between 2008 and 2018 (Figure 2). People who did not declare their ethnicity (unknown) comprised a small percentage (average = 3.2% of the total FTEs; Figure 2). European academics made up on average 20% of the total FTEs employed in this faculty between 2008 and 2018 (Figure 2). Māori and Pasifika scientists were under-represented, comprising on average 6.0% and 2.6% respectively (Figure 2). The percentage of Māori FTEs decreased from 7.3% (2008) to 6.4% (2018), whereas the percentage of Pasifika FTEs increased from 2.0% to 4.8% over the 11-year period (2008–2018; Figure 2). However, there was no statistically significant difference in either Māori or Pasifika FTEs over time (p > 0.05).
Figure 2. The percentage of full-time equivalents (FTEs) for each recorded ethnicity at university B from 2008 to 2018.
Note: University B has a combined science faculty (i.e. science and another discipline).
The importance of department-by-department analysis of universities’ ethnicity data is highlighted when comparing the percentage of Māori FTEs university-wide with that in the science faculty (Figure 3). The average percentage of Māori FTEs university-wide at university F was 4.7% from 2008 to 2018, whereas it was consistently lower within the science faculty (Figure 3). Similarly, representation of Pasifika academics in the science faculty at university F was much lower than across the entire university (Figure 4). Between 2008 and 2018, Pasifika academics averaged 1.5% of FTEs across the university but only 0.4% in the science faculty (Figure 4).
Figure 3. The percentage of Māori full-time equivalents (FTEs) of academics in both the science faculty and across the entire university at university F.
Note: The y axis is limited to 15%.
Figure 4. The percentage of Pasifika full-time equivalents (FTEs) for academic staff in both the science faculty and across the entire university at university F.
Note: The y axis is limited to 15%.

The diversity of scientists employed in Aotearoa New Zealand’s crown-research institutes

CRI B was the only CRI that was able to provide relatively good quality temporal data. Data from this institution indicated that African scientists made up approximately 1% of scientists employed from 2016 to 2018, and Asian and Australian scientists made up on average 5.4% and 5.0% respectively of the total headcount from 2008 to 2018 (Figure 5). The percentage of European scientists increased steadily from 16.1% in 2008 to 23.5% in 2018 (Figure 5). The percentage of Māori scientists employed also increased, from 3.4% in 2008 to 7.8% in 2018 (Figure 5). Although this increase is promising, Māori remain under-represented within this institution. Interestingly, the percentage of NZ European/Pākehā employed at CRI B decreased from 64.9% (2008) to 45.3% (2018; Figure 5). This may speak to the increasing value the science system places on international expertise, whereby scientists from overseas or with international experience are valued more than those from Aotearoa, driven in large part by global university ranking systems that reward international staff recruitment (Stack 2016). Importantly, scientists coming from overseas will likely have very little understanding of things that are highly important within the context of Aotearoa (e.g. te Tiriti o Waitangi). Considering the data presented, urgent action is required to address this apparent selection of international scientists over Māori and Pasifika scientists. Rather than copying and pasting into job advertisements a blanket statement of empty words like the following: ‘The University of Canterbury actively seeks to meet its obligation under the Treaty of Waitangi | Te Tiriti o Waitangi’ (UoC date unknown), CRIs and universities need to be actively recruiting Māori and Pasifika scientists, and hence need to consider the following questions when hiring new staff:
  1. How is this person likely to contribute to the uplifting of Māori communities in a meaningful way?
  2. Do they have any experience working with Indigenous communities?
  3. What is their understanding of Te Tiriti o Waitangi and the Treaty of Waitangi?
  4. How do you see your role as supporting our institution's commitments to Pasifika communities?
Figure 5. Percentage of the total headcount for each recorded ethnicity at crown-research institute (CRI) B from 2008 to 2018.
Note: Ethnicity groups in this graph differ from previous graphs.
CRI E was only able to supply data for the year in which it was requested (i.e. 2019) due to its HR systems. In 2019, this particular CRI employed zero Pasifika scientists, and 1.6% of its scientists were Māori (Figure 6). The majority of scientists employed at CRI E in 2019 were NZ European/Pākehā (55.0%), and a further 21.5% were ‘European’ (Figure 6).
Figure 6. The percentage of the total headcount of each recorded ethnicity at crown-research institute (CRI) E in 2019.
Note: Ethnicity groupings differ from previous graphs.
Although it had previously collected gender data, CRI F only began collecting ethnicity data in 2016, and only for new recruits. We were therefore unable to disaggregate science staff from general and non-science staff. From 2016 to 2018 the majority (59%–66%) of new recruits were ‘NZ European’. In 2017, 14% of new recruits were Pasifika, whereas in 2016 and 2018 there were no Pasifika recruits. Māori comprised 2% of new recruits in 2017 and 2018 but 8% in 2016 (data not shown)...."
submitted by lolpolice88 to Maori
