IKATAN ALUMNI SMPN 233 JAKARTA BLOG

Greetings to all alumni of SMPN 233 Jakarta. How are you?
I created this blog so that we can share with one another, and so that any news about get-togethers can be passed along right away. For those alumni who know a lot more about blogs and the like, please help me improve this blog and make it look nicer. I look forward to your participation.

Warm regards,

see zhiunk

28 Mar 2020

CGDD4303 Educational And Serious Games Fall 2019 Showcase!

CGDD4303 Educational and Serious Games is taught by Dr. Joy Li.  The student project showcase was held on the night of 12/9.  All projects were collaborations with local educators, including instructors from local schools, Carter's Lake Museum, Augusta University Medical School, and others.  The deliverables are game prototypes designed for educational purposes, spanning STEM education, museum visits, and other interesting topics.  All collaborators, including the kids who participated in the art design and playtesting stages, were invited to the showcase.  Some kids volunteered during student presentations to help with demos.  The collaborators were all very happy and expressed interest in collaborating again in future semesters.



Tech Book Face Off: Data Smart Vs. Python Machine Learning

After reading a few books on data science and a little bit about machine learning, I felt it was time to round out my studies in these subjects with a couple more books. I was hoping to get some more exposure to implementing different machine learning algorithms as well as diving deeper into how to effectively use the different Python tools for machine learning, and these two books seemed to fit the bill. The first book with the upside-down face, Data Smart: Using Data Science to Transform Data Into Insight by John W. Foreman, looked like it would fulfill the former goal and do it all in Excel, oddly enough. The second book with the right side-up face, Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow by Sebastian Raschka and Vahid Mirjalili, promised to address the second goal. Let's see how these two books complement each other and move the reader toward a better understanding of machine learning.

Data Smart front cover vs. Python Machine Learning front cover

Data Smart

I must admit, I was somewhat hesitant to get this book. I was worried that presenting everything in Excel would be a bit too simple to really learn much about data science, but I needn't have been concerned. This book was an excellent read for multiple reasons, not least of which is that Foreman is a highly entertaining writer. His witty quips about everything from middle school dances to Target predicting teen pregnancies were a great motivator to keep me reading along, and more than once I caught myself chuckling out loud at an unexpectedly absurd reference.

It was refreshing to read a book about data science that didn't take itself seriously and added a bit of levity to an otherwise dry (interesting, but dry) subject. Even though it was lighthearted, the book was not a joke. It had an intensity to the material that was surprising given the medium through which it was presented. Spreadsheets turned out to be a great way to show how these algorithms are built up, and you can look through the columns and rows to see how each step of each calculation is performed. Conditional formatting helps guide understanding by highlighting outliers and important contrasts in the rows of data. Excel may not be the best choice for crunching hundreds of thousands of entries in an industrial-scale model, but for learning how those models actually work, I'm convinced that it was a worthy choice.

The book starts out with a little introduction that describes what you've gotten yourself into and justifies the choice of Excel for those of us who were a bit leery. The first chapter gives a quick tour of the important parts of Excel that are used throughout the book—a skim-worthy chapter. The first real chapter jumps into explaining how to build up a k-means clustering model for the highly critical task of grouping people on a middle school dance floor. Like most of the other chapters, this one starts out easy but ramps up the difficulty, so that by the end we're clustering subscribers for email marketing with a dozen or so dimensions to the data.
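The k-means loop the chapter walks through in spreadsheet columns is just a few repeated steps: pick starting centroids, assign each point to its nearest centroid, re-average, and repeat. Here's a rough pure-Python equivalent of that loop (my own sketch, not code from the book, which does everything in Excel):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Naive k-means: assign each point to the nearest centroid, then re-average."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # start from k distinct data points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # index of the closest centroid by squared Euclidean distance
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        # re-average each cluster; keep the old centroid if a cluster goes empty
        centroids = [
            tuple(sum(vals) / len(c) for vals in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Two obvious cliques on the "dance floor"
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centroids, clusters = kmeans(points, 2)
```

On the dance-floor analogy, each centroid ends up standing in the middle of one clique.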

Chapter 3 switches gears from an unsupervised to a supervised learning model with naïve Bayes for classifying tweets about Mandrill the product vs. the animal vs. the Mega Man X character. Here we can see how irreverent, but on-point Foreman is with his explanations:
Because naïve Bayes is often called "idiot's Bayes." As you'll see, you get to make lots of sloppy, idiotic assumptions about your data, and it still works! It's like the splatter-paint of AI models, and because it's so simple and easy to implement (it can be done in 50 lines of code), companies use it all the time for simple classification jobs.
Every chapter is like this and better. You never know what Foreman's going to say next, but you quickly come to expect it to be entertaining. Case in point, the next chapter is on optimization modeling using an example of, what else, commercial-scale orange juice mixing. It's just wild; you can't make this stuff up. Well, Foreman can make it up, it seems. The examples weren't just whimsical and funny; they were solid examples that built up throughout the chapter to show multiple levels of complexity for each model. I was constantly impressed with the instructional value of these examples, and with how working through them really helped in understanding what to look for to improve the model and how to make it work.
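About that 50-lines claim: a from-scratch naive Bayes with add-one smoothing, in the spirit of the Mandrill tweet classifier (the tokenization and the toy training data here are mine, not the book's), fits comfortably inside the budget:

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (text, label). Returns word counts, totals, priors, vocab."""
    counts, totals, priors = {}, Counter(), Counter()
    for text, label in docs:
        priors[label] += 1
        for w in text.lower().split():
            counts.setdefault(label, Counter())[w] += 1
            totals[label] += 1
    vocab = {w for c in counts.values() for w in c}
    return counts, totals, priors, vocab

def classify_nb(text, model):
    counts, totals, priors, vocab = model
    n = sum(priors.values())
    best, best_lp = None, float("-inf")
    for label in priors:
        lp = math.log(priors[label] / n)
        for w in text.lower().split():
            # Laplace (add-one) smoothing so unseen words don't zero out a class
            lp += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("send email campaign with mandrill api", "product"),
        ("mandrill delivers transactional email", "product"),
        ("the mandrill is a colorful monkey", "animal"),
        ("a mandrill monkey lives in the forest", "animal")]
model = train_nb(docs)
```

The "sloppy, idiotic assumption" is right there in the inner loop: every word contributes its log-probability independently, as if word order didn't exist.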

After optimization came another dive into cluster analysis, but this time using network graphs to analyze wholesale wine purchasing data. This model was new to me, and a fascinating way to use graphs to figure out closely related nodes. The next chapter moved on to regression, both linear and non-linear varieties, and this happens to be the Target-pregnancy example. It was super interesting to see how to conform the purchasing data to a linear model and then run the regression on it to analyze the data. Foreman also had some good advice tucked away in this chapter on data vs. models:
You get more bang for your buck spending your time on selecting good data and features than models. For example, in the problem I outlined in this chapter, you'd be better served testing out possible new features like "customer ceased to buy lunch meat for fear of listeriosis" and making sure your training data was perfect than you would be testing out a neural net on your old training data.

Why? Because the phrase "garbage in, garbage out" has never been more applicable to any field than AI. No AI model is a miracle worker; it can't take terrible data and magically know how to use that data. So do your AI model a favor and give it the best and most creative features you can find.
As I've learned in the other data science books, so much of data analysis is about cleaning and munging the data. Running the model(s) doesn't take much time at all.
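The regression fit itself is the easy part, just as the quote suggests. For a single feature, ordinary least squares has a two-line closed form (a sketch of mine; the book builds it up in spreadsheet formulas):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form, single feature)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx  # intercept makes the line pass through the means
    return a, b

a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # exactly y = 1 + 2x
```

The hard, valuable work is everything that happens before this call: choosing and cleaning the features that go into xs.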
We're into chapter 7 now with ensemble models. This technique takes a bunch of simple, crappy models and improves their performance by putting them to a vote. The same pregnancy data was used from the last chapter, but with this different modeling approach, it's a new example. The next chapter introduces forecasting models by attempting to forecast sales for a new business in sword-smithing. This example was exceptionally good at showing the build-up from a simple exponential smoothing model to a trend-corrected model and then to a seasonally-corrected cyclic model all for forecasting sword sales.
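The build-up in the sword-sales chapter starts from these two models: simple exponential smoothing keeps one running level, and Holt's trend-corrected method adds a trend term so forecasts can slope. A compact sketch (my own naming, not the book's spreadsheet layout):

```python
def simple_smooth(series, alpha):
    """Simple exponential smoothing: one running level, flat forecasts."""
    level = series[0]
    for obs in series[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

def holt(series, alpha, beta):
    """Trend-corrected (Holt) smoothing: a level plus a trend component."""
    level, trend = series[0], series[1] - series[0]
    for obs in series[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level, trend

def holt_forecast(level, trend, h):
    """Forecast h periods ahead by extrapolating the trend."""
    return level + h * trend

# On a steadily growing series, Holt extrapolates the growth
level, trend = holt([10, 12, 14, 16, 18], 0.5, 0.5)
```

The seasonal correction in the chapter adds a third, cyclic component on top of these two, in exactly the same incremental spirit.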

The next chapter was on detecting outliers. In this case, the outliers were exceptionally good or exceptionally bad call center employees even though the bad employees didn't fall below any individual firing thresholds on their performance ratings. It was another excellent example to cap off a whole series of very well thought out and well executed examples. There was one more chapter on how to do some of these models in R, but I skipped it. I'm not interested in R, since I would just use Python, and this chapter seemed out of place with all the spreadsheet work in the rest of the book.
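The call-center chapter's point is that an outlier can look unremarkable on every single metric and still stand out overall. The basic one-dimensional building block, though, is a fence test like Tukey's (my sketch; the book's multivariate examples go further than this):

```python
def tukey_outliers(values, k=1.5):
    """Flag points beyond k * IQR outside the quartiles (Tukey's fences)."""
    s = sorted(values)

    def quantile(q):
        # linear interpolation between the closest ranks
        pos = q * (len(s) - 1)
        lo = int(pos)
        frac = pos - lo
        return s[lo] if frac == 0 else s[lo] + frac * (s[lo + 1] - s[lo])

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    return [v for v in values if v < q1 - k * iqr or v > q3 + k * iqr]

# One rep handling 50 calls an hour among peers handling 10-12
flagged = tukey_outliers([10, 11, 12, 11, 10, 12, 11, 50])
```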

What else can I say? This book was awesome. Every example of every model was deep, involved, and appropriate for learning the ins and outs of that particular model. The writing was funny and engaging, and it was clear that Foreman put a ton of thought and energy into this book. I highly recommend it to anyone wanting to learn the inner workings of some of the standard data science models.

Python Machine Learning

This is a fairly long book, certainly longer than most books I've read recently, and a pretty thorough and detailed introduction to machine learning with Python. It's a melding of a couple other good books I've read, containing quite a few machine learning algorithms that are built up from scratch in Python a la Data Science from Scratch, and showing how to use the same algorithms with scikit-learn and TensorFlow a la the Python Data Science Handbook. The text is methodical and deliberate, describing each algorithm clearly and carefully, and giving precise explanations for how each algorithm is designed and what their trade-offs and shortcomings are.

As long as you're comfortable with linear algebraic notation, this book is a straightforward read. It's not exactly easy, but it never takes off into the stratosphere with the difficulty level. The authors also assume you already know Python, so they don't waste any time on the language, instead packing the book completely full of machine learning stuff. The shorter first chapter still does the introductory tour of what machine learning is and how to install the correct Python environment and libraries that will be used in the rest of the book. The next chapter kicks us off with our first algorithm, showing how to implement a perceptron classifier as a mathematical model, as Python code, and then using scikit-learn. This basic sequence is followed for most of the algorithms in the book, and it works well to smooth out the reader's understanding of each one. Model performance characteristics, training insights, and decisions about when to use the model are highlighted throughout the chapter.
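The perceptron's learning rule is exactly as small as the chapter makes it look: predict with a thresholded weighted sum, and nudge the weights only when the prediction is wrong. A from-scratch sketch in the same spirit as the book's implementation (though this one is plain Python, and the toy data is mine):

```python
def train_perceptron(X, y, eta=0.1, epochs=10):
    """Rosenblatt perceptron: weights move only on misclassified samples."""
    w = [0.0] * (len(X[0]) + 1)  # w[0] is the bias term
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if w[0] + sum(wi * x for wi, x in zip(w[1:], xi)) >= 0 else -1
            update = eta * (target - pred)  # zero when the prediction is right
            w[0] += update
            w[1:] = [wi + update * x for wi, x in zip(w[1:], xi)]
    return w

def predict(w, xi):
    return 1 if w[0] + sum(wi * x for wi, x in zip(w[1:], xi)) >= 0 else -1

# Linearly separable toy problem (logical OR with -1/+1 labels)
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, 1, 1, 1]
w = train_perceptron(X, y)
```

The convergence theorem only guarantees this terminates on linearly separable data, which is exactly the limitation that motivates the fancier models later in the book.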

Chapter 3 delves deeper into perceptrons by looking at different decision functions that can be used for the output of the perceptron model, and how they could be used for more things beyond just labeling each input with a specific class as described here:
In fact, there are many applications where we are not only interested in the predicted class labels, but where the estimation of the class-membership probability is particularly useful (the output of the sigmoid function prior to applying the threshold function). Logistic regression is used in weather forecasting, for example, not only to predict if it will rain on a particular day but also to report the chance of rain. Similarly, logistic regression can be used to predict the chance that a patient has a particular disease given certain symptoms, which is why logistic regression enjoys great popularity in the field of medicine.
The sigmoid function is a fundamental tool in machine learning, and it comes up again and again in the book. Midway through the chapter, they introduce three new algorithms: support vector machines (SVM), decision trees, and K-nearest neighbors. This is the first chapter where we see an odd organization of topics. It seems like the first part of the chapter really belonged with chapter 2, but including it here instead probably balanced chapter length better. Chapter length was quite even throughout the book, and there were several cases like this where topics were spliced and diced between chapters. It didn't hurt the flow much on a complete read-through, but it would likely make going back and finding things more difficult.
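To make the quoted point concrete: the class-membership probability is just the sigmoid of the net input, and the hard label is that probability thresholded at 0.5. A minimal sketch (the weights here are hand-picked for illustration, not fitted):

```python
import math

def sigmoid(z):
    """Squash any real net input into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(w, x):
    """Class-1 probability: sigmoid of the net input w·x + bias (w[0])."""
    z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return sigmoid(z)

def predict_label(w, x, threshold=0.5):
    return 1 if predict_proba(w, x) >= threshold else 0

w = [0.0, 1.0]                    # bias 0, one weight of 1
rain_chance = predict_proba(w, [2.0])   # ~0.88, the "chance of rain" number
```

This is the whole difference the quote is pointing at: `predict_proba` is the weather-forecast answer, `predict_label` is the yes/no answer.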

The next chapter switches gears and looks at how to generate good training sets with data preprocessing, and how to train a model effectively without overfitting using regularization. Regularization is a way to systematically penalize the model for assigning large weights that would lead to memorizing the training data during training. Another way to avoid overfitting is to use ensemble learning with a model like random forests, which are introduced in this chapter as well. The following chapter looks at how to do dimensionality reduction, both unsupervised with principal component analysis (PCA) and supervised with linear discriminant analysis (LDA).
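The shrinking effect of an L2 penalty is easiest to see in one dimension, where ridge regression has a closed form: the penalty fattens the denominator, pulling the coefficient toward zero. A toy sketch of mine (the book's treatment covers L1 and L2 penalties on real models):

```python
def ridge_fit(xs, ys, lam):
    """1-D ridge regression through the origin.

    Minimizes sum((y - b*x)^2) + lam * b^2, whose closed-form solution is
    b = sum(x*y) / (sum(x^2) + lam).  lam = 0 recovers plain least squares.
    """
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

xs, ys = [1, 2, 3], [2, 4, 6]          # exact relationship y = 2x
b_plain = ridge_fit(xs, ys, lam=0)     # unpenalized slope
b_shrunk = ridge_fit(xs, ys, lam=14)   # heavily penalized, shrunk slope
```

Large weights are how a model memorizes its training data, so penalizing them is a direct brake on overfitting.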

Chapter 6 comes back to how to train your dragon…I mean model…by tuning the hyperparameters of the model. The hyperparameters are just the settings of the model, like what its decision function is or how fast its learning rate is. It's important during this tuning that you don't pick hyperparameters that are just best at identifying the test set, as the authors explain:
A better way of using the holdout method for model selection is to separate the data into three parts: a training set, a validation set, and a test set. The training set is used to fit the different models, and the performance on the validation set is then used for the model selection. The advantage of having a test set that the model hasn't seen before during the training and model selection steps is that we can obtain a less biased estimate of its ability to generalize to new data.
It seems odd that a separate test set isn't enough, but it's true. Training a machine isn't as simple as it looks. Anyway, the next chapter circles back to ensemble learning with a more detailed look at bagging and boosting. (Machine learning has such creative names for things, doesn't it?) I'll leave the explanations to the book and get on with the review, so the next chapter works through an extended example application to do sentiment analysis of IMDb movie reviews. It's kind of a neat trick, and it uses everything we've learned so far together in one model instead of piecemeal with little stub examples. Chapter 9 continues the example with a little web application for submitting new reviews to the model we trained in the previous chapter. The trained model will predict whether the submitted review is positive or negative. This chapter felt a bit out of place, but it was fine for showing how to use a model in a (semi-)real application.
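The three-way holdout described in the quote above is mechanical to set up: shuffle once, carve off the test set, carve off the validation set, and train on the rest. A sketch with made-up fraction defaults (the names are mine):

```python
import random

def train_val_test_split(data, val_frac=0.2, test_frac=0.2, seed=0):
    """Shuffle, then carve off test and validation sets; the rest is training."""
    items = list(data)
    random.Random(seed).shuffle(items)
    n_test = int(len(items) * test_frac)
    n_val = int(len(items) * val_frac)
    test = items[:n_test]
    val = items[n_test:n_test + n_val]
    train = items[n_test + n_val:]
    return train, val, test

train, val, test = train_val_test_split(range(10))
```

The discipline is all in the usage: fit on `train`, pick hyperparameters by score on `val`, and only look at `test` once, at the very end.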

Chapter 10 covers regression analysis in more depth with single and multiple linear and nonlinear regression. Some of this material has been seen in previous chapters, and indeed, the cross-referencing starts to get a bit annoying at this point. Every single time a topic comes up that's covered somewhere else, it gets a reference with the full section name attached. I'm not sure how I feel about this in general. It's nice to be reminded of things that you read about hundreds of pages back, and I've read books that are more confusing for not having done enough of this linking, but it does get tedious when the immediately preceding sections are referenced repeatedly. The next chapter is similar, with a deeper look at unsupervised clustering algorithms. The new k-means algorithm is introduced, but it's compared against algorithms covered in chapter 3. This chapter also covers how we can decide whether the number of clusters chosen is appropriate for the data, something that's not so easy for high-dimensional data.
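The usual starting point for judging a cluster count is the total within-cluster scatter, the quantity the elbow method plots against k: it always falls as k grows, and you look for the bend where adding clusters stops paying off. A minimal version (the naming is mine):

```python
def within_cluster_sse(clusters, centroids):
    """Total within-cluster sum of squared distances (lower = tighter clusters).

    clusters: list of point lists, one per cluster
    centroids: matching list of cluster centers
    """
    return sum(
        sum((a - b) ** 2 for a, b in zip(p, c))
        for c, cluster in zip(centroids, clusters)
        for p in cluster
    )

# Two clusters: one loose pair around (0, 1), one singleton at its own center
sse = within_cluster_sse([[(0, 0), (0, 2)], [(4, 4)]], [(0, 1), (4, 4)])
```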

Now that we're two-thirds of the way through the book, we come to the elephant in the machine learning room, the multilayer artificial neural network. These networks are built up from perceptrons with various activation functions:
However, logistic activation functions can be problematic if we have highly negative input since the output of the sigmoid function would be close to zero in this case. If the sigmoid function returns output that are close to zero, the neural network would learn very slowly and it becomes more likely that it gets trapped in the local minima during training. This is why people often prefer a hyperbolic tangent as an activation function in hidden layers.
And they're trained with various types of back-propagation. Chapter 12 shows how to implement neural networks from scratch, and chapter 13 shows how to do it with TensorFlow, where the network can end up running on the graphics card supercomputer inside your PC. Since TensorFlow is a complex beast, chapter 14 gets into the nitty gritty details of what all the pieces of code do for implementation of the handwritten digit identifier we saw in the last chapter. This is all very cool stuff, and after learning a bit about how to do the CUDA programming that's behind this library with CUDA by Example, I have a decent appreciation for what Google has done with making it as flexible, performant, and user-friendly as they can. It's not simple by any means, but it's as complex as it needs to be. Probably.
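The saturation argument in the quote above is easy to check numerically: at zero, tanh's gradient is exactly four times the sigmoid's, and for large-magnitude inputs both gradients collapse toward zero, which is what stalls learning. A quick sketch (mine, not the book's code):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dsigmoid(z):
    """Derivative of the logistic sigmoid: s(z) * (1 - s(z)), peaks at 0.25."""
    s = sigmoid(z)
    return s * (1.0 - s)

def dtanh(z):
    """Derivative of tanh: 1 - tanh(z)^2, peaks at 1.0 and is zero-centered."""
    return 1.0 - math.tanh(z) ** 2

steeper = dtanh(0.0) / dsigmoid(0.0)   # tanh learns 4x faster near zero
saturated = dsigmoid(-10.0)            # nearly zero: the "trapped" regime
```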

The last two chapters look at two more types of neural networks: the deep convolutional neural network (CNN) and the recurrent neural network (RNN). The CNN does the same hand-written digit classification as before, but of course does it better. The RNN is a network that's used for sequential and time-series data, and in this case, it was used in two examples. The first example was another implementation of the sentiment analyzer for IMDb movie reviews, and it ended up performing similarly to the regression classifier that we used back in chapter 8. The second example was for how to train an RNN with Shakespeare's Hamlet to generate similar text. It sounds cool, but frankly, it was pretty disappointing for the last example of the most complicated network in a machine learning book. It generated mostly garbage and was just a let-down at the end of the book.

Even though this book had a few issues (tedious code duplication and explanations in places, the annoying cross-referencing, and the out-of-place chapter 9), it was a solid book on machine learning. I got a ton out of going through the implementations of each of the machine learning algorithms, and wherever the topics started to stray into more in-depth material, the authors provided references to the papers and textbooks that contained the necessary details. Python Machine Learning is a solid introductory text on the fundamental machine learning algorithms, covering how they work mathematically, how they're implemented in Python, and how to use them with scikit-learn and TensorFlow.


Of these two books, Data Smart is a definite read if you're at all interested in data science. It does a great job of showing how the basic data analysis algorithms work, using the surprisingly effective method of laying out all of the calculations in spreadsheets, and doing it with good humor. Python Machine Learning is also worth a look if you want to delve into machine learning models, see how they would be implemented in Python, and learn how to use those same models effectively with scikit-learn and TensorFlow. It may not be the best book on the topic, but it's a solid entry and covers quite a lot of material thoroughly. I was happy with how it rounded out my knowledge of machine learning.

Shattered Apollo (XCOM Files)

PFC Tom Shaw, March 1st
They told us we were the first humans to kill a creature from outer space. They told us we were heroes. They told us we were the best humanity has to offer to fight off this invasion. I'm going to be honest with you here and tell you, I don't really think any of that is true. These guys knew the aliens were coming. They must have known for a while. The XCOM project had been dormant for years before they came and picked us up and told us we were chosen to protect humanity. It's possible no one had seen or fought an alien before us, but then how did they know what to expect? How did they know when to expect it? We certainly weren't heroes. Most of us were just dumb kids who knew how to shoot. Yeah, we were trained to kill each other, but not one among us had been trained to repel alien invaders with death rays. Hell, I don't even really know what plasma is and I've been covered in the stuff.

You're here to talk about that night, right? I relive that night a lot in my nightmares. It's difficult to talk about for a lot of reasons, but as the squad leader of the first human beings to engage and defeat an alien threat, I'm getting used to being asked about it. So, here goes.

We were flown from the Cheyenne Mountain complex to Vancouver late on the night of March 1st. Aliens had touched down at a shipping warehouse and were in the process of abducting any humans unfortunate enough to be out that late. None of this junk sounded real to me, by the way. Here I was leading this team and I wasn't even convinced we were going to be fighting what they said we'd be fighting. I'd never seen an alien or a UFO. This was the stuff of TV shows and silly documentaries on conspiracy theories. How could this crap be real? It felt like a dream flying out to Vancouver that night. It felt like a dream until our boots hit the ground.

The Landing Zone
We dropped down on the street outside the parking lot of the warehouse. The lot itself was fenced in with a stone wall creating a bit of a fortress for these aliens to hide in. We could hear some damn strange noises coming from beyond that wall. That's when most of us knew that this was really happening. You can be dropped into a foreign country where you don't speak a word of the local language, but you know those sounds coming from the other side of the wall are human voices speaking human words that you just don't understand. This was not like that. No, sir. I can't even begin to describe these sounds to you. They were like nothing I'd ever heard on Earth. This was really happening.

First Contact
Grace was the first to lay eyes on an alien - Private Grace Russell, my fellow American that night. As she took up a position against the wall and moved forward to the entrance of the lot, she spotted three little greys working on one of their abduction pods. I guess they store humans inside these things for transportation. The science team understood more about that than I ever did, but we just called them abduction pods. Anyway, these aliens saw Grace and took up defensive positions behind the pod and some nearby cars. My Brazilian brother, Julio "Burrito" Brito, took up a position across from Russell at the entrance to the lot.


The Great Kobayashi Grenade
I couldn't see a damn thing from where I was pressed up against the wall, but the next thing I know I'm hearing the bizarre sizzling sounds of these plasma pistols firing on my team. Russell and Brito open fire, but they're basically exchanging rounds with the aliens shooting their green ooze back at us. That's when Shinji Kobayashi - no one even knew this dude until that night. This guy really kept to himself at the base. He barely spoke a word of English to anyone. He was definitely a loner. So this guy, Kobayashi, decided to sneak up along the outside of the wall and toss a grenade over the top onto the aliens' positions. The crack of his anti-personnel grenade put a stop to the plasma pistols' sizzling shots; Russell could see two were wounded, but none were killed. Burrito and I slipped into the parking lot in this short window of opportunity.

Man, the first time I saw a grey - hunkered down behind that abduction pod, staring down at the shrapnel sticking out of its body - I just fired on the thing. I ended its life. That thing didn't even see me sweep in from around the corner. Yeah, as far as anyone can tell me, I'm the first guy on Earth to take one down. I barely even got a good look at the thing before putting a hail of bullets into its small grey body. There was a certain exhilaration among the team knowing that our simple ballistic weapons had defeated these technologically superior beings with futuristic, space rayguns. Sadly, this small moment of victory was diminished by the sounds of heavy plasma fire coming from further down the street.

Kobayashi Comes Under Fire
Private Kobayashi's bold maneuver had left him alone and exposed. He was pinned against the wall farther up the road, barely holding back four greys who were trying to gain a strategic position behind our squad. Knowing this, Burrito rushed across the parking lot toward the warehouse, hoping to end our conflict inside the compound swiftly. The aliens were wounded and distracted by the loss of one of their own. They didn't even see him get in close and mow down a second alien hiding in shock behind a car. Grace had only reported three aliens in the lot, so I felt confident that Private Brito and I could pincer the last one on our own. I sent Russell, Rojas and Marin to back up Kobayashi on the street. You know, I think about this moment often and wonder if splitting up the squad was a mistake. That might have been where things went truly wrong for me and Marin, but if I hadn't sent them, then Kobayashi would probably have died in the streets of Vancouver that night.

Burrito Gets the Drop on This Alien
As Julio and I pushed forward in the parking lot searching for that final alien, Russell, Rojas and Marin made their way up the street toward Private Kobayashi. We heard Marin scream out in pain from our position, and it still sends chills down my spine. Julio and I thought she was dead. As far as we understood, no one had ever been hit by these death rays, so we expected the worst. Rojas came over the radio, though, saying she'd been hit but she was still alive. She even managed to take one of the aliens down before falling back behind a car to rest. Adriana Marin was tough.

As far as I understand, while the aliens were distracted by Private Marin, Kobayashi was able to take up a new position across the street - rushing away from the wall where he had been pinned down. From there he was able to take down an alien firing on Marin and Rojas with ease. Although Marin was hurt, it sounded to us like the firefight on the street was turning in our favor. We could hear the aliens shrieking their horrible sounds and scattering back to defensive positions further down the lot. Private Brito and I obviously wanted to pin the aliens down, but before we could rejoin Kobayashi we had to take care of our immediate enemy. We found the final alien of the initial squad hiding behind a yellow car. I took some shots that missed, which to this day still haunt me. That damn yellow car is one of the last things I remember that night. After that, things go dark.

Just Before Things Go Dark
The alien that Julio and I were tracking was leading us into an ambush. Julio told me later, while I was in the med-bay, that three aliens popped out of the warehouse itself right on top of my position. One of them fired several shots into my left side, nearly covering me in that burning green plasma. I went down hard, and Julio thought I was dead right then and there. I don't have any memory of this, you know? The last thing I can remember is missing that damn bastard who led me into the trap. I guess after I fell, Brito rushed up, taking shots on my attacker and killing it. He said I was bleeding out right there in the lot. He reported over the radio that if they couldn't get me on the Skyranger soon and rush me to medical attention, I was a goner.

Kobayashi Coming in Hot
Now, from what I understand, once Burrito reported I'd been shot down, Kobayashi took charge of the team on his side of the wall. To this day, I've never heard the guy speak, but if you hear Grace tell it, without Kobayashi's leadership I wouldn't be here today. She makes it sound like Shinji single-handedly killed the rest of the aliens in some kind of maddened rage, which makes Julio laugh every time we bring it up. All he would tell me is that Shinji led his sub-squad around the northern end of the wall and closed in behind the ambush in a pincer attack with Brito. Together, their counter-ambush wiped out the rest of the greys on site, and we were extracted soon afterward.

That's really all there is to tell. The six of us took out ten greys. Marin was wounded, and I was rendered unconscious. Technically, I was leading the mission and I got the first kill, so some people think I'm a hero. Personally, I know it could have gone better. I'm still kicking myself for walking into that trap like a goddamn puppet on a string. It was my leadership that got Marin hurt, too. Kobayashi was the real hero that night as far as I'm concerned, and I don't think I'm alone in that regard.


  • From an interview with Tom Shaw, US Special Forces, Leader on Operation Shattered Apollo



XCOM Report - March 1, 2015 - "Shattered Apollo" 

PFC Tom Shaw (USA) - Squad Leader
  • Confirmed Kills: 1 (Sectoid)
  • Condition:  Gravely Wounded
  • Earned Promotion 

PFC Grace Russell (USA) 

  • Confirmed Kills: 1 (Sectoid)
  • Earned Promotion 

PFC Roman Rojas (Guatemala)
  • Confirmed Kills: 2 (Sectoid)
  • Earned Promotion 

PFC Adriana Marin (Moldova)
  • Confirmed Kills: 1 (Sectoid)
  • Condition:  Minor Wounds
  • Earned Promotion 

PFC Shinji Kobayashi (Japan)
  • Confirmed Kills: 3 (Sectoid)
  • Earned Promotion 

PFC Julio Brito (Brazil)
  • Confirmed Kills: 2 (Sectoid)
  • Earned Promotion 

23 Mar 2020

CLAW


Back in 1997, Claw from Monolith Productions (Blood, F.E.A.R., Shadow of Mordor) was something of a rarity: a PC-exclusive platformer. And a good one at that. This tough-as-nails 2D side-scroller proved that home computers could play host to such a game without playing second fiddle to the genre's prolific nature on consoles.


19 Mar 2020

HOTT 52 - Relearning The Rules For The First Two Weeks.

I played two games of HOTT (Hordes of the Things) this weekend, to do my first two weeks' games for the HOTT 52 challenge. I'm now caught up so that I can do a "game a week" - I've got things set up so I can quickly generate a battlefield and two opposing armies, based on my Etinerra campaign world. Currently, I only have humans, orcs and goblins. I guess I might now have some motivation to get some elves, halflings and chaos humans!

So what happened in these battles?




The human army watches nervously as the orcs march over the plains grasslands towards them. The humans are set up to defend their encampment. The orcs have brought a mountain ogre with them, truly a fearsome behemoth! The humans have a flock of Giant Ravens, which they immediately set loose into the air!



The armies slowly approach each other. The orc and goblin archers quickly shoot down the Giant Ravens that the humans sent to their right flank. Knights of the Duchy make to follow.



With a bone-shattering roar, the mountain ogre charges the humans' commander and knights on their right flank! The knights are able to withstand the charge and flank the beast, dispatching it! The orcs and goblins howl in dismay!



The orcs seek to close to combat, with their left-side spear turning to face the flank attack by the humans' commander and knights! On their right side, the orc and goblin archers rain arrows on the approaching human knights.

 

The bestials join combat against the human spearmen in the center! While the knights seek to press their advantage on the orcs' left, on the right the situation grows more dire for the knights, who are falling to the orcs' and goblins' deadly missile fire and skirmishing attacks.

 

The bestials press their successful attacks on the humans' left, having defeated the knights and then the militia bowmen! The lines of combat dissolve into chaos, but the human spear and the commander's knights are too much, and the orcs lose their warchief and half of their forces!

Their attack blunted, the bestials sound the horns of retreat and melt away into the plains, leaving the humans to regroup and count their losses.

I set up this game to be simple with no terrain, so that I could focus on remembering the rules. There were a few things I had forgotten and needed to remember in playing HOTT, such as the fact that Knights pursue if they destroy their enemies or force them to recoil. I also had to remember that if a stand is in support of another stand and the front stand is destroyed, the supporting stand is lost as well!

There was a lot of pushing and shoving in the center, but it was the action on the flanks that made all the difference!

I've created a different version of my HOTT reference sheet for my use. If you're interested, it's here: https://docs.google.com/document/d/13-9aZ1NurA6ZzRK4_Bj04bmNYYMirzewK1G_uZZ2SmU/edit?usp=sharing

This document is geared towards my campaign world and the forces I would normally use. I put all of the HOTT units on the last page, in case you want to use that instead.




The orcs were on the move again and threatening the humans' castle. The human commander assembled her forces as best she could, given the woods to the right of the castle and the marshland to the front. She sent her doughty (and apparently invisible! [grin]) warbands into the swamp to hold the center.

 

The orcs sent their riders around the woods to wait for the right opportunity to press the attack, or to disrupt the humans' defense by being a possible threat. The human commander and knights kept watch, leaving the rest of the troops to defend the line. The orc line approached.



The humans held steady, raining missile fire onto the orcs' flanks, which consisted of their heavy blades, while the goblin warbands approached the marshes.

Then, in a complete surprise, when the orc line rushed to attack, the unit of blades that the orc warchief was in suffered a grievous defeat! The bestials were dismayed and the attack faltered as they retreated.

I had also forgotten that in HOTT 1.2, if you kill the opposing side's general and they've lost more AP than you have, then you win! When the orcs roll a 1 and the humans roll a 6, bad things will happen. It was a quick kill, but a fun game. I am going to replay this same scenario, perhaps even with the same strategies, and see how the battle turns out differently... keeping my generals safe, however!

I also realize how silly my empty warband stands look, so today, I made an order with Splintered Light for their Late Saxon Fyrd set - twelve 15mm figures. This will give me four human warband stands, more than enough for future battles against the bestial armies!



Question for you, my loyal henchfolk, if you've made it this far. Do you like this style of recap - where I set it as if it were a journal of the battle?

I enjoy reading and writing this style of recap, but I know that recaps can be hard for many to enjoy. To me, thinking about the battle in terms of how the campaign world would see it and record it into history is interesting!

Considering A Master's Or PhD In Digital Media?



The Digital Media program at Georgia Tech is now accepting applications at the Master's and Ph.D. levels. 

The Digital Media graduate program at Georgia Tech is a multidisciplinary program that engages students in making with meaning in digital media through their own discipline, skills, and expertise. Students from humanities, engineering, technology, and arts backgrounds all engage in collaborative, practice-based work, learning and applying design methods and critical theory in studio courses focused on having a voice--or giving a voice to others--through digital media.

They offer both a two-year intensive Master's degree and a Ph.D. in Digital Media, working with leading researchers on topics such as civic media, game design, smart cities, interactive installation, augmented & virtual reality, computational creativity, and STEAM-based education.

They host multiple online events to inform those interested in the program. More information and RSVP are available through our website: http://dm.lmc.gatech.edu/. The upcoming application deadlines for Fall 2019 are Dec. 10th, 2018, for the Ph.D. program and Jan. 8th, 2019, for the Master's program.

Students interested in visiting the campus can do so during our open house event on January 18, 2019.  RSVP here.

If you have any further questions about the program and admission process, please contact me or the Associate Director Michael Terrell directly at dgs@lmc.gatech.edu.

[Hackaday] Do You Smell What The Magic Chef Is Cookin’?


16 Mar 2020

Brainstorming With Factoring

In the last post I described how I sometimes describe a problem with a matrix, and then look at the matrix transpose to see if it gives me new ideas. Another technique I use is to look for a factoring.

In algebra, factoring transforms a polynomial like 5x² + 8x - 21 into (x + 3)·(5x - 7). To solve 5x² + 8x - 21 = 0, we can first factor into (x + 3)·(5x - 7) = 0. Then we say that x + 3 = 0 or 5x - 7 = 0. Factoring turns a problem into several easier problems.

         x      3
 5x    5x²    15x
 -7    -7x    -21

Let's look at an example: I have six classes, File, EncryptedFile, GzipFile, EncryptedGzipFile, BzipFile, EncryptedBzipFile. I can factor these into a matrix:

             Uncompressed    Gzip                 Bzip
Unencrypted  File            Gzip(File)           Bzip(File)
Encrypted    Encrypt(File)   Encrypt(Gzip(File))  Encrypt(Bzip(File))

Using the Decorator pattern (or mixins), I've turned six different types of files into four components: plain, gzip, bzip, encrypt. This doesn't seem like much savings, but if I add more variations, the savings will add up. Factoring turns O(M*N) components into O(M+N) components.
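As a rough sketch of that factoring (the class names mirror the matrix above, but the implementation is my own, and the XOR "encryption" is purely illustrative, not a real cipher):

```python
import bz2
import gzip


class File:
    """Plain component: reads raw bytes from disk."""
    def __init__(self, path):
        self.path = path

    def read(self):
        with open(self.path, "rb") as f:
            return f.read()


class Decorator:
    """Base decorator: wraps another file-like component."""
    def __init__(self, inner):
        self.inner = inner


class GzipFile(Decorator):
    def read(self):
        return gzip.decompress(self.inner.read())


class BzipFile(Decorator):
    def read(self):
        return bz2.decompress(self.inner.read())


class EncryptedFile(Decorator):
    KEY = 0x5A  # toy XOR "key", purely illustrative

    def read(self):
        # XOR is its own inverse, so this both "encrypts" and "decrypts"
        return bytes(b ^ self.KEY for b in self.inner.read())


# Four small components compose instead of six concrete classes:
# a file stored as Encrypt(Gzip(data)) is read by peeling the layers
# from the outside in: GzipFile(EncryptedFile(File(path))).read()
```

Adding a fourth compression scheme or a second cipher is now one new class, not a doubling of the class count — which is exactly the O(M*N) to O(M+N) savings described above.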

Another example comes up when people ask me things like "how do you write linear interpolation in C#?" There are a lot of potential tutorials I could write:

C++ Python Java C# Javascript Rust Idris
Interpolation
Neighbors
Pathfinding
Distances
River maps
Isometric
Voronoi
Transforms

If there are M topics and N languages, I could write M*N tutorials. However, that's a lot of work. Instead, I write a tutorial about interpolation, someone else writes a tutorial about C#, and then the reader combines knowledge of C# with knowledge about interpolation to write the C# version of interpolation.
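As a concrete instance of one cell in that grid, here is linear interpolation — sketched in Python rather than C#, since the formula itself carries across languages unchanged:

```python
def lerp(a, b, t):
    """Linearly interpolate between a and b: t=0 gives a, t=1 gives b."""
    return a + (b - a) * t

# lerp(0, 10, 0.5) -> 5.0
```

A C# reader only needs to know their language's function syntax to translate this — which is the point: one topic tutorial plus one language tutorial covers the combination.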

Like transpose, factoring only helps sometimes, but when it applies, it can be quite useful.

15 Mar 2020

IBM PCjr. Upgrades Part 2

When I first received my IBM PCjr. back in 2013, I discussed most of the readily available upgrades for the system that existed at that time: https://nerdlypleasures.blogspot.com/2014/03/ibm-pcjr-upgrades.html  Now, almost six years later, we have some new upgrades available. Let's see what modern conveniences can do for a 35-year-old computer system.


Read more »

5 Mar 2020

Storium Theory: Inverting The Trope

We've seen it before.

A young hero has an older mentor, who taught the hero everything the hero knows. The mentor takes on a mission, and is captured, or killed, or goes missing, or what-have-you. Now the hero has to step up and save the day.

It's a trope.

It's a trope for a reason. It's a pretty powerful story. There's a personal connection between the hero and the mission - a need to carry on after a person the hero respects, perhaps, or redeem the person's reputation, or even rescue the person. It ties the hero more deeply to the tale than if the hero had simply taken the mission himself in the first place.

There's nothing particularly wrong with tropes, even with tropes that are used extremely often. Frequently, tropes are tropes because they are powerful and beneficial to stories. They give additional emotional impact. They create interesting character types. They give us connections to stories.

But for all those reasons, they can also be extremely powerful when inverted.

Consider the above trope. And consider these others:
  • The combatant has to save the non-combatant.
  • The parent has to rescue their young child.
  • The lawyer has to figure out the conspiracy entrapping their client.
  • The detective has to discover the secrets of the corrupt corporation.
You've seen all these stories. And oftentimes, they're good stories. There's nothing inherently wrong with using these tropes - they can lead to gripping, emotionally affecting tales.

But let's look at taking each of the tropes I've mentioned and turning them around:
  • The older mentor's successor takes a mission and is captured/killed or goes missing, and the mentor must now take the mission in his place.
  • The non-combatant has to somehow rescue the combatant.
  • The young child must figure out how to rescue their parent.
  • The client must figure out a conspiracy that has even enveloped their lawyer.
  • The corporation is being menaced by a corrupt detective, and an employee must figure out how to clear its name.
These sound interesting, don't they? In some cases, they give us natural questions that are inherently intriguing. Take the "non-combatant has to rescue the combatant" one...if the combatant, i.e. someone trained in battle, is in trouble...it's going to be extremely dangerous for a non-combatant, i.e. someone not trained in battle, to come to the rescue. We'll wonder how this person is possibly going to accomplish their goal against such odds.

And sometimes, they're interesting just because they play with our usual sympathies. In a battle between a corporation and a detective, we're pretty hardwired to sympathize with the detective - large organizations are generally things we mistrust instinctively. If one's being investigated, there's always a background thought of "well, there's probably something going on there, right?" So if a story plays with that, and has the corporation innocent and the detective corrupt, it twists our sympathies around.

Sometimes, these inverted tropes can become so popular that they then become tropes themselves (I'm sure that you've seen at least some examples of each of the "inverted" stories I mentioned, too). But the point stands: When you find yourself thinking about using a trope, consider for a moment how you might invert it. Sometimes, an inversion of a trope can be just as powerful as, or more powerful than, the trope itself.

When you're creating a story concept, or a character concept, tropes are going to come into play. You'll find yourself slotting characters into recognized boxes, consciously or unconsciously. And that's fine. But take a little time to think about what you might be able to do if you turn the trope on its head instead. Maybe it won't fit your story, or maybe it won't give you the ideas you need...if so, that's fine. Write your story the way you write your story. But maybe, just maybe, an inverted trope will give you some inspiring story or character ideas, something that excites you and will excite your fellow players and readers.

So take some time. Look at the tropes you find yourself using, and think about how to invert them. When you walk a well-trodden path, look for the points where you can step off or make it lead to a different destination. You can get some excellent stories from tropes...but you can get some excellent stories by twisting them around, too.

The Seahorse Trainer, Short Film, Review And Interview


The Seahorse Trainer is a relatable tale for everyone who has pushed, or is pushing, to accomplish their masterpiece. It offers wonderful visuals of a charming, fantastical setting, and it carries a deeper meaning: many of us are ourselves the reason we hold back from making that great accomplishment.

The Seahorse Trainer was screened at the 2019 FilmQuest film festival (website). It was nominated for Best Fantasy Short. It won Best Visual Effects for a short film.

I recommend The Seahorse Trainer as a family-friendly film everyone can enjoy.

Synopsis: Seamour is a lonely old man with a passion for training seahorses. Desperate to have his most prized seahorse perform an ambitious trick, the final day of training has come and he must see the stunt come to life. But when the hourglass turns, Seamour realizes he needs to overcome something from his troubled past to achieve his magnum opus.

Babak Bina and Ricardo Bonisoli, co-writers and co-directors of The Seahorse Trainer, were kind enough to talk about their film and other work they are doing. They also talk some about themselves and what inspires them.

What was the inspiration for The Seahorse Trainer?

Babak: The sparks of the idea came from Ricardo's fascination with sea life and our love for the eccentric. Ricardo had the idea of a mockumentary where we would go and interview a lonely man who is training seahorses in his decrepit apartment. That's where it all started, but through many brainstorming sessions that idea changed into a narrative short. We wanted to make a film we would have liked to watch if someone else created it. In a world oversaturated with superheroes, there needed to be something off-beat, odd, and symbolic — in other words, a lonely old man with a bizarre obsession!

What project(s) do you have coming up you're excited about?

We are cooking up a couple of different ideas. One is an experimental music video for an emerging artist. Another idea Ricardo is cooking up is for a short film called The Ostrich. Babak is also in the writing stages of a story revolving around a tree and a man gone missing — a story exploring the psychological process of individuation.
 

What was your early inspiration for pursuing a career in film?

We both started working in the film industry as visual effects artists. Film has always been a passion for us. As fun as it is to be a part of making a high budget blockbuster film, which helped us polish our craft, we find much satisfaction in telling our own stories.

What would be your dream project?

Our hope is to continue creating worlds that create opportunities for exploration. New stories, new ideas. Fresh and unique are keywords for us. As visual effects artists working day jobs, we are helping to create blockbuster superhero films, and let's put it this way: that is the last kind of film we are interested in making as our personal project. There must be enough people in this world interested in seeing something a little different.
 

What are some of your favorite pastimes when not working on a movie?

Babak: Sculpting, reading, drawing, playing drums and going for long walks.

Ricardo: Illustration, jamming, traveling.

What is one of your favorite movies and why?

Ricardo: I was moved when I first saw The Wall. I realized a film can be very experimental and still express a powerful feeling. I also love Pink Floyd's music.

Babak: David Lynch's Lost Highway was a film that got me into watching challenging films that require effort and multiple watches to decode. Lynch is an absolute master at creating such worlds and I love getting lost in them.

You can find out more about The Seahorse Trainer on IMDb (link).

I'm working to keep my material free of subscription charges by supplementing costs through being an Amazon Associate and having advertising appear. I earn a fee when people make purchases of qualified products from Amazon after entering the site from a link on Guild Master Gaming, and when people click on an ad. If you do either, thank you.

If you have a comment, suggestion, or critique please leave a comment here or send an email to guildmastergaming@gmail.com.

I have articles being published by others, and you can find most of them on Guild Master Gaming on Facebook and Twitter (@GuildMstrGmng).

 

Tech Book Face Off: Breaking Windows Vs. Showstopper!

For this Tech Book Face Off, I felt like expanding my horizons a bit. Instead of reading about programming languages or software development or computer science and engineering, I thought I would take a look at some computer history from the business perspective. There are plenty of reading options out there in this space, but I settled on a couple of books about Microsoft. The first, Breaking Windows: How Bill Gates Fumbled the Future of Microsoft by David Bank, is about Bill Gates's hardball business tactics that won him a monopoly in the PC desktop market, but then nearly destroyed the company in that fateful confrontation with the US Justice Department and caused him to miss the Internet and, later, the mobile revolution. The second, Showstopper! The Breakneck Race to Create Windows NT and the Next Generation at Microsoft by G. Pascal Zachary, has an even longer subtitle that neatly describes the book on its own. Both of these books were written quite a while ago, so let's see how their stories hold up today.

Breaking Windows front coverVS.Showstopper! front cover

Breaking Windows


The narrative starts out with the backstory of how Gates came into his PC desktop monopoly by realizing that software—specifically the computer's operating system—would be an important and valuable part of the PC ecosystem. As PC hardware got cheaper and more prevalent, software volumes would grow with the spread of the hardware, and at essentially zero marginal cost to Microsoft. All they needed to do was become the de facto standard OS. That's what Gates set out to do, and he succeeded with Windows 3.1 and then Windows 95. The bulk of the story takes place after Microsoft had achieved its monopoly and was deciding on strategies to defend it.

One of the main strategies was to identify competitors that were creating software that was somewhat tangential to Windows or could be added as a compelling feature, and whose software was becoming popular enough to potentially pose a threat to Windows by becoming a new platform. Microsoft would then create their own version of that software and integrate it into Windows or otherwise absorb the other company's software, nullifying the threat to their monopoly.

The most prominent example of this absorption strategy came with Internet Explorer and the browser wars between Microsoft and Netscape. Netscape Navigator started out with nearly the entire market of the World Wide Web before Microsoft got into the browser business. By the time Microsoft had revved up to IE 3.0, they had claimed a significant amount of market share from Netscape, and because of bundling IE with Windows and offering it for free to older versions of Windows, Netscape was doomed to lose in the long (or not-so-long) run.

Everything was not all peaches and cream within Microsoft, though. There were two warring camps fighting for the soul of Microsoft. On one side was the Windows team led by Jim Allchin that was developing the next big thing: Windows NT. On the other side was the Internet Platform and Tools Division led by Brad Silverberg that wanted to leave Windows behind and try to capture as much of this new Internet frontier as possible, using IE as the platform. Gates would end up siding with Allchin and IE became a part of the Windows platform instead of growing into one of its own.

It's almost comical seeing some of these disagreements today. One of the most important features of the IE platform that was integrated into Windows as an option was Active Desktop, but this feature seems so inconsequential now. Making the desktop background a web page was fraught with problems, and all that has survived is a way to enable single-click icons instead of the usual double-click to run a program. I doubt many people used it, especially after dealing with repeated desktop crashes. I remember it being a novelty for a while, but I soon stopped enabling it because it was so annoying and a double-click is so ingrained in my desktop usage.

Of course, the disagreement with the Justice Department over Microsoft's monopoly was not so insignificant. Part of the reason their tactics got them into trouble was because IE was offered as a free upgrade for older versions of Windows that didn't have it or had older versions of IE. If Microsoft had truly made IE an integrated part of Windows and only released new versions of it with new versions of Windows, Microsoft's competitors wouldn't have had as strong of a case. Microsoft wouldn't have had as strong of a monopoly, either, because IE was getting new versions much faster than Windows was and people that didn't upgrade Windows were still getting free upgrades of IE.

Even so, the government's eventual breakup proposal was preposterous. They wanted to force Microsoft to set prices for Windows versions with and without IE based on how many bytes each version was, like it was produce or meat or something. The government obviously had no understanding of what software really was, no idea how ridiculous that sounded, or what a good solution to the real problems of Microsoft's monopoly would actually look like. In the end that proposal was dropped, and the entire court case seemed to have done nothing more than give Microsoft a ton of bad press.

In the meantime, Gates had done plenty of other damage to Microsoft and Windows by deciding to pursue these retrenchment strategies with the browser and other things related to the Internet. Bank makes the case that Gates should have pursued the Internet platform strategy in order to disrupt his own business and grab the larger market that was just coming to light, but I'm not so sure that would have worked, either. If he had done that, would he have been able to beat Google before they rose to the top, or would he have been able to foresee the coming of mobile and the smartphone before Apple took over with the iPhone? It's hard to imagine Microsoft getting all of that right and still being top dog today. (Although they're now doing quite well under Satya Nadella.)

There was so much more in this book, like the section on how XML came to be. (Of course bloated, complicated XML was created at Microsoft. In the book it was portrayed as a genius innovation by Adam Bosworth that would help Microsoft take over Internet data flows in spite of Gates's decisions. I'm so glad JSON has stopped that nonsense.) I could keep going, but it's all in the book. It was a wonderful trip down memory lane, covering plenty of things I had forgotten about that were a big deal at the time (remember the AOL shortcut bundled on the Windows desktop?). The book is decently written, if a bit confusing at times. Bank jumps around a lot, and there's no overarching timeline to the narrative. Regardless, it gives great insights into what was happening at Microsoft through all of the turmoil in its history and is well worth the quick read.

Showstopper!


As the subtitle describes, Showstopper! is the story of how the first version of Windows NT was conceived and built. It makes for quite an engaging story, as the NT team was arranged within Microsoft in a unique way for the company. Instead of being a department that reported to and was directly overseen by Bill Gates, the team was more of a startup company within Microsoft that operated fairly independently and was left more or less to its own devices. Gates did check in and imposed some of his own requirements from time to time, but not anything like other departments within Microsoft.

One of the main reasons for this independence was the force of nature that was Dave Cutler, the chief architect and director of Windows NT. Cutler was aggressive and expected incredible things from his team, and he did not get along well with Gates, either. Gates had hired him after Cutler left Digital Equipment Corp., and he respected and trusted Cutler enough to let him run things as he saw fit, so Gates pretty much left him alone.

Cutler had brought along a number of programmers from his team at Digital to be the core of the NT team, and as he took on more Microsoft employees to build out the team, a rivalry emerged between the two groups:
The Digital defectors also were more methodical about their jobs, hewing to textbook engineering practices in contrast to the Microsofties, who often approached a problem helter-skelter. Cutler's people took work seriously, while Microsofties sometimes tossed nerf balls in the hallways or strummed guitars in their offices. The differences in style were apparent to Cutler's people, who derisively referred to Microsoft as "Microslop." By the same token, Microsofties were put off by the clannishness of Cutler's gang.
Regardless of these divisions, work got done and NT progressed through big scope changes and constant feature creep. Throughout the project Cutler never really trusted or approved of the graphics team. He had always been a terminal kind of guy and didn't see the need for a GUI, and he did not agree with the graphics team's much more laid back approach to software development. The graphics team was dealing with their own internal issues as well, having chosen a new, immature programming language to write the GUI: C++. While it was a new language at the time and the supporting tools were rough and unstable, G. Pascal Zachary's assessment of the language seems a little off:
While it was portable, however, C was difficult to master and gave a programmer a great deal of latitude, which increased the likelihood of coding errors. A more inspired choice—a gambler's choice—was C++, a newer language that was all the rage among software theorists. By preventing code writers from making mistakes, C++ promised faster results and greater consistency, which would benefit programs that were the work of many people.
C++ is hardly easier to master than C! With C++ being a superset of C, C is most certainly the simpler language. While it may be true that C++ can support larger projects, it is also quite easy to make C++ programs much more complicated than C. These kinds of off-the-cuff assessments were fairly common in the book, and they made it seem like Zachary was either over-simplifying things or he didn't fully appreciate the technical aspects of these topics. This tendency to over-simplify was especially apparent whenever he was discussing features of NT. The discussions nearly always dealt in generalities, and it was difficult to figure out which features, exactly, he was talking about. He would mention that features were missing from NT or that programmers were adding features on their own whims without specifying what those features actually were. Not knowing what he was referring to became quite frustrating at times.

Even with the occasional vagueness, other discussions were satisfyingly to the point, like whenever the client-server architecture of NT came up:
Time and again, Cutler had hoped to dispel doubts about client-server. In his design, the kernel code treated the entire graphical portion of the operating system, including the Windows personality, as an application. It was a classic design choice. Client-server ensured reliability but degraded performance. It was probably Cutler's most momentous decision.
The performance hit incurred with the client-server model was a constant issue during the development of NT, and it wasn't until near the end of the project, and after a year delay, that the performance was brought under control and near parity with Windows 3.1. The story of how Cutler's team achieved the necessary performance while fixing the innumerable bugs as NT came closer and closer to release was one of the best threads of the book.

The book is also riddled with pieces of advice on software development, most often in the form of little narratives about different aspects of the project and a vast array of the programmers and testers that worked on it. Things like adding programmers to a late project makes it later, working longer hours is counterproductive, first make it right then make it fast, the number of bugs in a system is unknowable, and automated testing and stress tests improve code quality all appeared at various points in the story. It was enjoyable to see all of these hard-won nuggets of wisdom come up and be acknowledged during the course of such a high-profile project.

Sometimes the words of wisdom were quite humorous, too. At one point Cutler had written an email that included this gem: "If you don't put [bugs] in, you don't have to find them and take them out!" Well, yes, that's great. If only it were that easy! Of course he was trying to encourage his programmers to be more diligent and rigorous, but what a way to say it.

Throughout the book, new people were continuously introduced, each with their own mini-narratives told within the larger context of the NT project. It was nice to learn about so many different people that had a hand in the project, and there were dozens of stories related of the approximately 250 people that helped NT over the finish line, but it became exhausting to keep track of everyone as the names kept piling on. The number of people became pretty overwhelming even though only a small fraction of them made it into the book.

The scope and accomplishment that is Windows NT is quite astounding. Nothing like it had ever been done before, and the scale of the project was beyond anything achieved in software development up to that point. The scale of development wouldn't be surpassed until Windows 2000, seven years later. Even with the rough edges and occasional frustrations, the story of how NT was built was a fascinating and entertaining read. I would definitely recommend giving it a read if you're at all interested in how Microsoft managed to revolutionize its Windows operating system.
