Chapter 3: Managing Your Story On-the-Fly
Encouraging Actions That Lead to Conflict

In Chapter 1, I briefly outlined Edwin Guthrie's psychological theory of motivation. Guthrie's theory breaks the process of object stimulation and subsequent resolution into the following steps:

  • A stimulus event upsets or excites the object, which is then driven to restore stasis.

  • The proposed response is compared with those that the object remembers using in similar circumstances.

  • Repetition of like responses leads to habit.

  • The initial event creates both direct excitement and secondary stimuli in the object.

  • All responses are motivated by the desire to remove the initial stimulus and return to stasis.

  • Sooner or later, something succeeds in removing the stimulus and the object returns to stasis.


It would be awfully narrow-minded to stop any investigation of the motivation of human action at behaviorism. We aren't really trying to make intelligent objects for our games; we are just trying to create reasonable simulations. For this, Guthrie's theory is a more than adequate launch pad. Although this list is based on Guthrie's, I have freely deviated from the original.
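The cycle above can be sketched in a few lines of code. The game itself is written in Lingo, but here is the idea in Python for clarity; the function and parameter names are my own, purely illustrative:

```python
# A toy agent cycling through remembered responses until one removes
# the stimulus and stasis is restored (after Guthrie's steps above).
def resolve_stimulus(remembered_responses, removes_stimulus):
    """Try responses in remembered order; return the one that works."""
    for response in remembered_responses:
        if removes_stimulus(response):
            return response  # stasis restored; this response is reinforced
    return None  # nothing worked; the object stays agitated

# Example: only "flee" happens to remove this particular stimulus.
chosen = resolve_stimulus(["ignore", "attack", "flee"],
                          lambda r: r == "flee")
```

The point is simply the loop structure: propose, test, and fall back until the irritant is gone.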

Inciting the Intelligent Agents

You could consider Guthrie's model an accidentally prepared roadmap for designing intelligent agents. That wasn't his objective, but it amuses me. It certainly falls outside the original intention of the theory of motivation, but because our work with plot is entirely dependent on simulating the actions of characters, this theory can help us to design agents that make choices that are logically motivated.

The object in Guthrie's model begins in a state of relaxation and contentment. The object is balanced. It is in stasis. Stasis is a trick of perception—the time interval before an audience is aware of actions and reactions. Stasis is the idling state of the object. It is a rest state where all motivations have been adequately addressed. Stasis is the habit phase, where behaviors are executed in order to prevent the stimulus from occurring.

The first event for the model can be compared to a theatrical and literary concept called the inciting incident. During an inciting incident, some event—either the product of internal motivations or the result of external stimuli—causes sufficient interruption to evoke a reaction from an intelligent agent. In order to irritate the intelligence, the stimuli must conflict with its motivational drives. The object will not regard all stimuli as negative influences. Some will be found desirable by the object.

In the next phase of our model, motivation combines with memory in the selection of action/reaction. Our virtual intelligence evaluates the event and searches for similar past experiences to plan a reaction. Once a similar or matching event-response is located, the intelligence reacts in the manner that is most likely to produce the desired outcome. If the attempt removes the irritant, the intelligence logs the success in memory and returns to stasis. If the attempt fails to remove the stimulus, the intelligence logs the failure and makes another selection. The object makes its choice of reaction based on prediction via pattern matching.


We won't be implementing memory in our game. I've simplified the paradigm in order to focus on the most substantial elements.

Actions that yield expected results are more likely to be selected in future cycles and may even be applied as a form of prevention or protection by adding them to the stasis routines. You could describe stasis routines as "habits" that the intelligence executes while idling.
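Promotion into the stasis routines might look something like this sketch (again Python rather than Lingo; the threshold and names are illustrative assumptions, not from the book):

```python
# Successful responses get promoted into the idling "stasis routines"
# (habits) once they have worked often enough. The threshold is arbitrary.
PROMOTION_THRESHOLD = 3

def promote_habits(success_counts, stasis_routines):
    """Add any sufficiently proven action to the habit list."""
    for action, wins in success_counts.items():
        if wins >= PROMOTION_THRESHOLD and action not in stasis_routines:
            stasis_routines.append(action)
    return stasis_routines

habits = promote_habits({"patrol": 4, "taunt": 1}, [])
```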

Motivations drive all actions and reactions. Without motive there is no action. Motivations can even generate actions and events within an individual intelligence, leading to conflict within that single entity.

The actions of one intelligence appear as events to another. Actions from any intelligence lead to reactions from the other intelligences that come into contact with it. This occurs both when the motivations of the two intelligences are in conflict with one another and when interactions occur between two objects whose motivations are cooperative.

In a sense, we might say that the behaviors of conflicting intelligences engage in negotiation with one another. For each intelligent agent the ultimate goal is to return to stasis.

The Art of Negotiation

Negotiation is the heart and soul of what we call conflict. There are infinite methods of negotiation. The most logical form of negotiation is almost never the most dramatic. Likewise, it is almost never the method chosen by individual intelligences as their preferred strategy.

In other words, the logical approach would be for the entities to exchange lists of needs and desires and to negotiate in a manner that leads to consensus approval of a solution. Most often, however, the entities abstract their needs and desires and negotiate by various methods, many of them nonproductive, in an effort to "win."

It is this desire to win the negotiation that prevents simple logic from serving its purpose. The desire to win must be factored into the virtual intelligence.

The intelligence logs the release of the inciting stimulus, remembering negatively those actions that failed to resolve the conflict and positively those that succeeded. This is, of course, a generalization. The log is relative. An entity might log a degree of positivity or, even better, a positive-to-negative ratio for each strategy. The more complex and comprehensive the memory, the more accurate the predictions will be when patterns are matched during the comparative phase of new events.
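A positive-to-negative ratio makes strategy selection a one-liner. A sketch in Python (the log entries and names here are invented for illustration):

```python
# Score each remembered strategy by its positive-to-negative ratio,
# then pick the one with the best track record.
strategy_log = {"bargain":  {"pos": 3, "neg": 1},
                "threaten": {"pos": 1, "neg": 4}}

def score(strategy):
    """Fraction of attempts that resolved the conflict."""
    entry = strategy_log[strategy]
    total = entry["pos"] + entry["neg"]
    return entry["pos"] / total if total else 0.0

best = max(strategy_log, key=score)
```

A richer memory would key the log by event type as well, so that a strategy's record in similar circumstances drives the prediction.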

Conflict Resolution Strategies

Strategies are reusable behaviors with dynamic arguments. In other words, an entity may use the same strategy in many different situations while substituting any of the properties of the behavior, the objects that are influenced, and the concepts under negotiation.
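"Reusable behavior with dynamic arguments" is just parameterization. A minimal Python sketch (the character names come from the Battle Ball example later in the chapter; the function itself is my own illustration):

```python
# One strategy, many situations: the same "negotiate" behavior is
# reapplied with different targets, subjects, and concessions.
def negotiate(actor, target, subject, concession):
    return f"{actor} offers {target} {concession} in exchange for {subject}"

first = negotiate("bb", "lags", "freedom", "magic")
second = negotiate("bb", "spike", "safe passage", "loyalty")
```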

In accordance with Abraham Maslow's hierarchy of human needs, the motivations essentially fall under the following classifications: food and shelter, physical safety, love and belonging, esteem and acceptance, and self-actualization. Intelligences tend to abstract these motivations, which seems to deter awareness of the underlying drives.

Maslow suggests that intelligences are limited by a foundational relationship between these objectives. The objectives on the base layer of this hierarchy must be satisfied before the intelligence will seek any of the higher levels. In a related way, the intelligence will forfeit the upper levels if any lower level is threatened or removed. Our virtual intelligences, then, should be guided by these motivations in this order (although if asked about the motivations behind an action, they would probably give a deceptive response or abstract the motivation in some way).
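This gating rule reduces to "pursue the lowest unsatisfied level." A sketch in Python; the level labels are illustrative and simplified:

```python
# Maslow-style gating: an agent pursues the most basic level of the
# hierarchy that is not currently satisfied.
HIERARCHY = ["food and shelter", "physical safety",
             "love and belonging", "esteem", "self-actualization"]

def current_goal(satisfied):
    """Return the first (most basic) level missing from `satisfied`."""
    for level in HIERARCHY:
        if level not in satisfied:
            return level
    return None  # everything satisfied: the agent idles in stasis
```

Threatening a lower level automatically re-focuses the agent, because the missing level reappears at the front of the scan.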

Example: Battle Ball

I think that the best way to really wrap your head around these ideas is to start messing around with a game example that puts the ideas to work.

Battle Ball is a weird little proof-of-concept game I created to demonstrate the concepts of dynamic objects within a game. I've included the media for the game on the companion CD-ROM, along with the finished game. Over the next several pages, I'll introduce you to Battle Ball and describe the code I used to create the characters and effects.

Before we begin, open the file labeled BattleBall.dir found on the CD-ROM. Rewind and press Play.

After the splash screen, the primary game screen, shown in Figure 3.2, appears.

Figure 3.2 The primary game screen for Battle Ball.

The Game Screen

The game screen is divided into six elements:

  • The large central area is occupied by a 3D sprite.

  • Directly below this is a green power bar that will indicate the player's life force.

  • Below the life force indicator are two rich-text sprites. The top one, named "display," will display the messages that the player's character hears and says during the game.

  • The bottom sprite, named "battle," will display text that describes attacks and other nonverbal events that occur in the space.

  • These text areas serve to supplement the information that the user might otherwise glean from the 3D environment alone. In some cases, this has been done to optimize the movie; in other cases, it has simply been done to simplify the game.

  • To the left of the text areas is another text sprite that displays the facial reaction of the character that is speaking.

  • On the right side, the player's face is displayed. This provides visual cues for the player during game play.

The player's objective is simple: Free the trapped battle ball. Unfortunately, the deck seems stacked against him. Not only do the red battle balls pose an escalating threat, but the blue battle balls are also reluctant to assist. Enough coaxing will eventually secure some help, in the form of magic, from several of the player's teammates.

Now stop the game and open the cast window using the thumbnail view. If you make your cast window 10 cells wide, you will get a peek at the overall structure of the media and code for the game (see Figure 3.3).

The Cast

The cast is easily divided into six categories: 3D media, scripts, text members, shapes, fonts, and sounds. Let's start with those at the end of the list, because you are probably already familiar with members of these types.

The sound files in the bottom row are used to create the background music and an occasional mouse response. If you played the game already, you may be wondering about this. The background music clearly changes tempo as the game advances levels. This is done with only one sound file, thanks to the new #rateshift property of Director's sound command. In the message window, type the following:

 sound(8).play([#member: member("bkLoop")])

Press the Return key when you are finished typing. This starts the music. There is nothing out of the ordinary here. Now type the following command into the message window and press the Return key:

 sound(8).play([#member: member("bkLoop"), #rateshift:12])

This may be amusing, even exciting at first, but you'd be ready to slug me if I didn't mention here that you can stop this or any such sound simply by clicking the Stop button on the control pad in the toolbar. Stop, even when the movie isn't playing? Yes.

Figure 3.3 The cast for Battle Ball.

Moving on up the cast, the sounds in the next row are named to match the emotional states—such as anger, crying, worried, hopeful, and happy—of the characters in the game. The characters share these sounds; you might like to modify these ideas and enhance your characters by giving them individual voices.

Above the sounds you will find the fonts used in the movie. I embed the fonts in order to ensure the player sees the same thing I do, regardless of the fonts available on her system.

The next row contains the shapes used to make the loading bar and life-status bar. Using shapes is much less expensive, in terms of download bandwidth, than using bitmaps. Here, I've taken advantage of one of the built-in textures to give my progress bar a more finished look.

The next two rows contain field and rich-text cast members. The fields store the data that the characters use to speak. The rich text is used to display this information to the player.

Moving up the cast, the next four rows contain scripts. Most of these scripts just handle routine business, such as looping on the frame or moving to a new frame when a mouseUp event is received.

We will focus most of our attention on the parent scripts (second row from the top). These scripts generate the characters' intelligence in the game as well as the narrative management object. The character object scripts handle the bulk of the spatial sphere of influence and psychological sphere of influence, whereas the narrative object script handles the narrative sphere of influence.

The last item (the top row) in the cast is a 3D cast member.

The 3D Cast Member

Inside the 3D world are 21 models: five planes that comprise the floor and walls of the arena, seven spheres and seven small planes that compose the battle balls, one spheroid cage hanging above the center of the battlefield, and the trapped battle ball inside the cage. During game play, we will add and remove a pair of particle fountains that represent the level of experience or magic that the player has attained.

The 3D cast member takes advantage of the userData property of models in order to assign properties to models at design time. This little-known feature allows developers to enter property and value pairs in the user-defined properties field from within a 3D modeling package and then access those properties from Lingo.

Think of it as a way to attach information about a character to the character without having to devise some complex scheme in Director. From within 3D Studio Max, simply right-click a model and choose Properties. The Properties dialog appears. Click the User Defined tab (see Figure 3.4).

Figure 3.4 The User Defined tab of the Properties dialog in 3D Studio Max.

In the dialog, type the property-value pairs you want to define. Once the file is exported from Max and then imported into Director, you may access the properties via the userData property of the appropriate 3D model. Try this in the message window:

put member(1).model[2].userData
-- [#pPsych: "[5,5,5]", #pSpin: "1", #pWeapon: " #flamethrower", #pAllegience: " 1"]

This feature alone would be great, but the userData property supports both get and set access. Type the following in the message window:

member(1).model[2].userData.pWeapon = #hamsterCage
-- [#pPsych: "[5,5,5]", #pSpin: "1", #pWeapon: #hamsterCage, #pAllegience: " 1"]

While you are thinking about it, notice that all the values have been converted to strings within Lingo. Later you'll see me restore integers and lists to their correct data types by using the value() command. It will save you some frustration if you lock this trivia into your brain now. I've included the 3D Studio Max file on the CD-ROM as well, so if you have Max, feel free to open it up and putter around (bbArena.max). Even if you don't have 3D Studio Max or another authoring program, you may still use the model's userData property.
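The book restores those stringified values with Lingo's value() command. For readers following along in another language, here is a rough Python analogue of that restoration step (the dictionary contents mirror the userData output above; the `ast.literal_eval` approach is my own, not the book's):

```python
import ast

# userData values arrive as strings; restore lists and integers to
# native types, much as Lingo's value() command does.
raw = {"pPsych": "[5,5,5]", "pSpin": "1", "pWeapon": " #flamethrower"}

def restore(s):
    try:
        return ast.literal_eval(s.strip())
    except (ValueError, SyntaxError):
        return s  # leave non-literal strings (e.g. symbol names) alone

typed = {key: restore(value) for key, value in raw.items()}
```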

The model[2] referenced in the previous command is bb, the player's 3D persona. Each of the character models has four properties defined within the userData property: pPsych, pSpin, pWeapon, and pAllegience. These properties store information about the individual that helps the game engine to make decisions at runtime. The most important and least immediately apparent of these is the character's initial psychological state. I created a mathematical model designed to represent the emotional states of the characters in the game. This is most easily explained with the assistance of Figure 3.5.

Figure 3.5 This diagram illustrates the conversion of discrete psychological states into a numerical index.

The pPsych States

It would be silly to try to develop too extensive a model for such a simple game, so I kept things very basic. The numeric model of the psychological state of any given character is essentially broken into three integers used to represent the emotional, intellectual, and moral states of the character at any given moment.

It was essential that these elements be reduced to numbers so that we could evaluate the effects of one character on another.

The emotional state is the most important of these, and the only one that I was worried about expressing, although the other elements certainly contribute to the model.

Each state is expressed as a number between –50 and 50. The farther an intelligent object's composite state moves away from zero, the more agitated that object becomes. Negative numbers express negative emotional, intellectual, and moral states, with escalating degrees of irritation; positive numbers express positive states, with escalating degrees of stimulation. These values are then averaged to obtain an overall psychological index that may be expressed by the object.
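The averaging step is trivial, but seeing it written out makes the pPsych values from the userData examples concrete. A Python sketch (the game does this in Lingo):

```python
# Three indicators in [-50, 50] -- emotional, intellectual, moral --
# averaged into one composite psychological index.
def psych_index(emotional, intellectual, moral):
    for value in (emotional, intellectual, moral):
        assert -50 <= value <= 50, "indicators live in [-50, 50]"
    return (emotional + intellectual + moral) / 3

composite = psych_index(-30, 0, -15)  # a fairly agitated negative state
```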

The manifestation of these emotional outbursts includes changes to the facial expression, shader coloring, opacity, and size of the character models within the space. Strong negative emotions can cause a character to attack, regardless of its affiliation. Strong positive emotions can lead to love, obsession, and an absurd need to follow the player's onscreen representative wherever it goes.

How the Game Lingo Handles Emotions

The Lingo that handles these emotional shifts is broken into two parts. First, a series of mathematical expressions averages the values of each model's three psychological indicators and then pulls each model's indicator values out to the more stimulated zone or back toward the less stimulated zone, depending on whether the model was a friend or an enemy. The easiest way to accomplish this mathematically is to convert the list of three numbers into a Lingo vector. This way, I can perform vector math functions on items from more than one list.
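To make the "pull toward or away from the stimulated zone" idea concrete, here is a Python sketch of that nudging step. This is my own simplified reconstruction, not the book's Lingo; in the game, the three indicators are converted to a Lingo vector so the arithmetic happens in one operation:

```python
# Nudge all three psychological indicators toward agitation (enemy
# contact) or back toward stasis at zero (friendly contact).
def nudge(psych, amount, toward_agitation):
    """Move each component of `psych` away from or toward zero, clamped to +/-50."""
    out = []
    for v in psych:
        direction = 1 if v >= 0 else -1
        if toward_agitation:
            nv = v + direction * amount
        else:
            nv = v - direction * amount
            if nv * direction < 0:  # don't overshoot past stasis
                nv = 0
        out.append(max(-50, min(50, nv)))
    return out

agitated = nudge([10, -20, 5], 5, toward_agitation=True)
calmed = nudge([10, -20, 5], 5, toward_agitation=False)
```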

The second part of the handler is an enormous case statement. The case statement converts the character's overall psychological index (the averaged value of all three psychological components) into action. In this design, psychological states are converted into action without regard for history, prediction, or motivation. It is not difficult to see that these elements could be added to a structure like this in order to compound the dynamism of the character and increase the sophistication of its choices. A portion of this statement is found in Listing 3.1. It's the section that handles the reaction to friendly characters (the blue battle balls).

Listing 3.1 A Section of the evaluateStimulus Handler

 1:   ------------------------------------------------FRIEND REACTION LOGIC
 2:      1:
 3:       -- this is a friend
 4:       case TRUE of
 5:         ----------------------------------------NEGATIVE STIMULI
 6:        (tMyAgitation > -51 AND tMyAgitation < -41):
 7:         --- the character is furious, ruthless and scrutinizing
 8:         me.mEmote(#fury) -- emote fury
 9:         me.mAttack(whichModel, #nonLethal, \
          ((tMyAgitation) * -1), tMyWeapon)
10:         --- attack the other character without causing serious harm
11:         --- set the intensity of the attack \
          based on the level of agitation
12:         me.mChangeColor(rgb(255,0,0))-- change color of the shader
13:         me.mHide(100)--set the opacity to full
14:         me.mspeak(whichModel, tMyAgitation)--talk to the model you hit
15:         me.mAttract(whichModel, 10)--move toward the model that you hit
16:        (tMyAgitation > -42 AND tMyAgitation < -33):
17:         --- the character is malevolent, angry and tampering
18:         me.mAttack(whichModel, #gestural, \
          ((tMyAgitation) * -1), tMyWeapon)
19:         me.mEmote(#anger)
20:         me.mChangeColor(rgb(205,90,90))
21:         me.mHide(100)
22:         me.mAttract(whichModel, 10)
23:         me.mspeak(whichModel, tMyAgitation)
24:        (tMyAgitation > -34 AND tMyAgitation < -25):
25:         --- the character is cruel, frightened, and meddling
26:         me.mAttack(whichModel, #gestural, \
          ((tMyAgitation) * -1), tMyWeapon)
27:         me.mEmote(#frightened)
28:         me.mHide(40)--set the opacity to 40%
29:         me.mAvoid(whichModel, 15)--run away \
          from the model that you hit
30:         me.mChangeColor(rgb(50,125,50))
31:         me.mspeak(whichModel, tMyAgitation)
32:        (tMyAgitation > -26 AND tMyAgitation < -17):
33:         --- the character is miserly, depressed and prying
34:         me.mEmote(#crying)
35:         me.mHide(100)
36:         me.mChangeColor(rgb(0,0,255))
37:         me.mAvoid(whichModel, 5)
38:         me.mspeak(whichModel, tMyAgitation)
39:        (tMyAgitation > -18 AND tMyAgitation < -9):
40:         --- the character is impatient, concerned and indifferent
41:         me.mEmote(#worried)
42:         me.mChangeColor(rgb(125,125,125))
43:         me.mHide(100)
44:         me.mAttract(whichModel, 10)
45:         me.mspeak(whichModel, tMyAgitation)
46:         -------------------------------------------------STASIS
47:        (tMyAgitation > -10 AND tMyAgitation < 10):
48:         --- the character is balanced, neutral and content
49:         me.mHide(100)
50:         me.mChangeColor(rgb(55,225,225))
51:         me.mspeak(whichModel, tMyAgitation)
52:         me.mAttract(whichModel, 10)
53:         ------------------------------------------POSITIVE STIMULI
54:        (tMyAgitation > 9 AND tMyAgitation < 18):
55:         --- the character is tolerant, satisfied and alert
56:         me.mEmote(#content)
57:         me.mHide(100)
58:         me.mChangeColor(rgb(55,225,225))
59:         me.mspeak(whichModel, tMyAgitation)
60:         me.mAttract(whichModel, 10)
61:        (tMyAgitation > 17 AND tMyAgitation < 26):
62:         --- the character is generous, hopeful and searching
63:         me.mEmote(#hopeful)
64:         me.mHide(100)
65:         me.mChangeColor(rgb(155,0,225))
66:         me.mspeak(whichModel, tMyAgitation)
67:         me.mAttract(whichModel, 10)
68:        (tMyAgitation > 25 AND tMyAgitation < 34):
69:         --- the character is kind, happy and inquisitive
70:         me.mEmote(#happy)
71:         me.mHide(100)
72:         me.mChangeColor(rgb(225,225,0))
73:         me.mspeak(whichModel, tMyAgitation)
74:         me.mAttract(whichModel, 10)
75:        (tMyAgitation > 33 AND tMyAgitation < 42):
76:         --- the character is benevolent, loving and curious
77:         me.mEmote(#love)
78:         me.mHide(100)
79:         me.mChangeColor(rgb(200,0,200))
80:         me.mspeak(whichModel, tMyAgitation)
81:         me.mAttract(whichModel, 30)
82:        (tMyAgitation > 41 AND tMyAgitation < 51):
83:         --- the character is selfless, obsessive and vigilant
84:         me.mEmote(#obsessed)
85:         me.mHide(100)
86:         me.mChangeColor(rgb(100,0,100))
87:         me.mAttract(whichModel, 100)
88:         me.mspeak(whichModel, tMyAgitation)
89:       end case
90:       ------------------------------END FRIEND REACTION

There is a similar case statement in the parent script of the characters that handles enemies. It is a bit more violent, featuring #lethal attacks and more aggressive attraction.

Each character also has an mRelax() method designed to return it to its natural state. This ensures that characters don't get more stimulated than they should. It also prevents a world full of satisfied and content little critters.

Characters are able to receive stimuli, because they see, hear, and feel. Well, not really, but they do have methods that support these concepts logically. If a character speaks to another character, the receiving character is sent an mHear() command. If someone attacks another character, the victim is sent an mFeel() command, and if a character spots another character, an mSee() command is sent.
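The sense methods amount to a small event-routing scheme. A Python stand-in for the mHear()/mFeel()/mSee() idea (class and method names here are my own translation, not the game's Lingo):

```python
# Stimuli are delivered to a character through "sense" methods, after
# the mHear()/mFeel()/mSee() scheme described above.
class Character:
    def __init__(self, name):
        self.name = name
        self.events = []          # a crude log of received stimuli

    def m_hear(self, speaker, words):
        self.events.append(("heard", speaker, words))

    def m_feel(self, attacker, damage):
        self.events.append(("felt", attacker, damage))

    def m_see(self, other):
        self.events.append(("saw", other))

bb = Character("bb")
bb.m_hear("lags", "help!")   # lags speaks to bb
bb.m_feel("spike", 12)       # spike attacks bb
```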

There are no methods for smelling or tasting, because there is nothing in this game to eat or smell. However, I don't think such methods would be out of the question, especially if you wanted to simulate a fairly complex world.

The Score

Now that you have a sense of the pieces that make up the cast of this game, let's take a look at the score. Open the score (see Figure 3.6) of the movie in Director and rewind.

Before we go any further, I want you to notice the sprite in channel 10. It's a copy of the 3D world that has been moved offstage. Now before you assume that I've lost my faculties, let me explain. The behavior of the 3D member can be less than 100-percent reliable if the member hasn't fully downloaded and settled into memory before the movie starts to play. It is always a good idea to use the 3D preloader provided via the Publish dialog to prevent this sort of unreliability, but this alternative is a good safety net in the event that you are less certain about the manner in which your movie will be viewed.

Figure 3.6 The score of Battle Ball.

Director loads cast members in order of use (unless you specifically change the cast preload property), which encourages the Shockwave player to become aware of our 3D world and lets me rest a bit easier.

There are four markers in the movie. The first marker is used to identify the staging area, where the movie verifies the state of the w3D member and initializes the objects. The second marker defines the game space. The game loops on a single frame until the player either wins or loses. If the player wins, the playback head advances to the third (win) marker; otherwise, it advances to the fourth (lose) marker.

The first frame script, called conditionalLoop, simply waits for the w3D member to reach its fully loaded state and then moves the playback head into the staging frame. The movie is only there for an instant, just long enough to run the initObjects() handler, which initializes the character and narrative objects. Afterward, the playback head moves on to the third frame, where the system waits for the w3D member to reach the fully loaded state and gives the user a visual signal that the program is working behind the scenes to prepare the game.

In this case, the progress bar is essentially decorative. Its only real purpose is to delay the user for a second, in case something has gone awry with the 3D model.

From here the playback head jumps to the fifth frame. This is the game screen. Its frame script contains a simple looping command, which is shared by the win and lose frames.

The only remaining behavior script that is not tied to the game play section of the movie is a simple mouseUp script that is shared by the splash, win, and lose sprites. It sets the cursor to a finger and moves the playback to the game zone if the w3D member is ready.

Most of the sprites used in the game play section of the score have simple behaviors that assist them in communicating with the objects in the movie and the models in the w3D cast member.

Several of the text members use the windowWasher behavior to clean their members after enough time has passed for the player to read the text.

The progress bar uses the lifeForce behavior to find and translate the narrative-management object's record of the player's health into a real-time display of diminishing life.

The w3D sprite hosts two behaviors. The first one is a simple camera behavior that follows a model. The second behavior checks the keyboard input and moves the player's onscreen model as directed by her keyboard commands.

The initObjects script does several important jobs that get our 3D world rolling. I've placed it here for your convenience. The code begins with the declaration of global variables that will be used to provide access to the 3D models and the narrative-management object:

 1: global w
 2: global co_blades
 3: global co_bogart
 4: global co_spike
 5: global co_mash
 6: global co_punwu
 7: global co_bb
 8: global co_lags
 9: global no_nar
10: global faceTextureList

The global w is a reference to the 3D member. The 3D member is itself an object, and as we go along you may notice that working with it is a lot like working with our other objects. Each of the globals preceded by the letters "co" will hold a character object. The no_Nar global is a reference to the narrative-management object, and faceTextureList will hold a property list describing the keyboard equivalents of each emotional state.

When the movie begins, I clear the globals for good measure and assign member(1) to the global w:

12: on startMovie()
13:  clearGlobals()
14:  w = member(1)
15: end

Once the playback head leaves the second frame, an initObjects command is issued. Because this is a Movie script, it finds a handler here and things really get moving. First, objects are created for each character. The parameter passed after the script name is the index number of the model in the 3D world. The object will use this reference to maintain a hook on its representative model. Here's the code:

17: on initObjects()
18:  co_blades = new(script "oCharacter", 7)
19:  co_bogart = new(script "oCharacter", 6)
20:  co_spike = new(script "oCharacter", 5)
21:  co_mash = new(script "oCharacter", 4)
22:  co_punwu = new(script "oCharacter", 3)
23:  co_bb = new(script "oCharacter", 2)
24:  co_lags = new(script "oCharacter", 1)
25:  no_nar = new(script "oNar")

After the objects are born, I add each one to the actorList. This is because objects on the actorList receive a stepFrame event, telling them that the playback head has moved a frame. Here's the code:

26:  add(the actorlist, co_blades)
27:  add(the actorlist, co_bogart)
28:  add(the actorlist, co_spike)
29:  add(the actorlist, co_mash)
30:  add(the actorlist, co_punwu)
31:  add(the actorlist, co_bb)
32:  add(the actorlist, co_lags)
33:  add(the actorlist, no_nar)


The stepFrame event is a special event that is only sent to objects that reside in the actorList. Every time the playback head advances a frame or an updateStage() command is issued, the objects in the list receive an event. This means you can disable the update messages for an object simply by removing the object from the actorlist.
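The actorList pattern translates directly to other languages: keep a list of objects and call a per-frame method on each. A Python sketch of the dispatch (class and method names are illustrative, not Director's API):

```python
# The actorList pattern: every object on the list receives a
# stepFrame call each frame; removing an object silences its updates.
class Narrator:
    def __init__(self):
        self.frames = 0

    def step_frame(self):
        self.frames += 1   # react to the playback head advancing

actor_list = [Narrator(), Narrator()]

def advance_frame(actors):
    for actor in actors:
        actor.step_frame()

advance_frame(actor_list)   # both objects hear about the frame
actor_list.pop()            # this object no longer receives updates
advance_frame(actor_list)   # only the remaining object is stepped
```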

Now I set the faceTextureList and update the faces of all the models using the images I generate via imaging Lingo. All the possible emotional states are generated here as textures, and then the models may use whichever one they need without influencing the textures used by the other models. Here's the code:

34:  faceTextureList = [#obsessed:"y", #love: "s", \
   #happy: "a", #hopeful: "d", #content: "u", #worried: "g",\
   #crying: "l", #frightened: "o", #anger:"q", #fury: "r"]
35:  repeat with iterTextures = 1 to count(faceTextureList)

Adding the textures to the world is a fairly simple matter. First, I set the text of a rich-text cast member to the value of the nth item in the faceTextureList. Then I create a Lingo image object simply by assigning the image property of the text member to the variable ImageObject.

Once the image is ready, the texture is created using the newTexture command. The command requires three parameters: a string representing the name of the new texture; a symbol, either #fromCastmember or #fromImageObject, telling the command what type of resource you plan to use; and either a Lingo image object or a cast member reference, depending on which type you specified. You are commanding the 3D member (remember, it is an object) to create this new texture for you, so the dot syntax approach says that you should reference the world w and then issue a command, like so:

 w.newTexture(textureName, #fromImageObject, imageObject)
Note that there are parentheses around the parameters for the command. If you think about it, the only difference between a standard command and the object command is that you reference the object followed by a dot and then issue the command:

36:   member("smiley").text = faceTextureList[iterTextures]
37:   ImageObject = member("smiley").image
38:   w.newTexture(string(getPropAt(faceTextureList, \
iterTextures)), #fromImageObject, imageObject)
39:  end repeat

Once the handler has done this for all 10 items in the list, it resets the text member to a blank display, because we're going to use it for another purpose now that this chore is done. We created textures, but they are just little texture objects without homes at this point. They are also not visible, because textures need to be assigned to shaders in order to be seen. Wrap your head around it this way: A modelResource needs a model; a model needs a shader; a shader can have a texture. So, let's work on those shaders. First, we know that we want to change the hue of our battle ball models during the game. You can't change the hue of a texture, but you can blend the diffuse shader color with the texture if you turn on the model's useDiffuseWithTexture property.

In this next bit of code, I turn that useDiffuseWithTexture switch on and then move on to the face of each character. The faces are named after their characters, with the word Face appended: under this system, bb has a face model called bbFace. Naming conventions like this let me reference models with repeat loops rather than working through individual statements or checking all the models for the one I need.

Finally, I want to be able to see that face regardless of whether I'm in front of the model or in back, so I switch the visibility of each face model from the default, #front, to #both. Here's the code:

40:  member("smiley").text =""
41:  repeat with iterModel = 1 to 7
42:   w.model[iterModel].shader.useDiffuseWithTexture = 1
43:   myFace = w.model[iterModel].name&"face"
44:   w.model(myFace).visibility = #both

Now this may be a bit strange to imagine if you haven't got a copy of 3D Studio Max, but parts of the animation that you see in the game are prerecorded keyframe animations, and parts of the animation are controlled via Lingo. Because the local coordinate systems of the spheres vary, and their direction is dynamic due to the spin, I wanted to take advantage of the relative stability of the faces to handle the bulk of the model's motion. You might want to visualize this as models that are dragged around by the face.

In this next section, I attach the character models to their faces as children of the faces. This has some cool side effects. If I move the face via code, the body will come along for the ride. In fact, it will go wherever I take the face and match the face's rotation as well, regardless of the orientation of the child (character body) as it spins.
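As a quick sketch of that parent-child behavior, once bb has been made a child of bbFace (the translate and rotate values here are arbitrary):

```lingo
-- move and turn the face; the parented body is dragged along and
-- matches the face's rotation regardless of its own spin
w.model("bbFace").translate(0, 0, 10)
w.model("bbFace").rotate(0, 45, 0)
```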

Because I have a natural aversion to math, I liked this solution: The addChild() command is supported by model objects. This is not a slip of the tongue. I don't mean the character objects that we created moments ago. I mean the model objects that are inherent in the 3D member object. This should be starting to echo in your head. Most properties of the 3D member are really objects. Just as the 3D member has methods, these child objects masquerading as properties have methods that they support. Some of their properties are, in turn, objects as well.

Therefore, the addChild() method is a method of 3D models. Because the model is a child of the world, you need to reference first the world and then the model. The dot syntax approach looks like w.model[x].addChild() or w.model("ModelName").addChild(). This method requires only one parameter, but it will accept a second. The first parameter, which must be included, is a reference to the child model.

To reference that model, you'll need to start with the world and work down to the model. The second, optional argument is a symbol, either #preserveWorld or #preserveParent, that tells the method whether to preserve the model's world-relative position or its parent-relative coordinates. I know that's hard to visualize, so bear with me as we look at an example.

A room has a stand, a lamp, and a couch. The couch is near the south wall, the stand is near the west wall, and the lamp is on top of the stand. Each of the models is a child of the world. If I want the lamp to remain with the stand no matter where the stand is moved, it would make sense to make the lamp the child of the stand. I would be in good shape if I used #preserveWorld as my argument, because the lamp would not move when it is added as a child of the stand.

If I used the #preserveParent argument, on the other hand, the lamp would move west by the exact distance that the stand is from the center of the room. Were there any gravity in the room, the lamp would crash to the floor and break. If the lamp's original transform.position was vector(0,0,30), its new transform.position would still be vector(0,0,30). The difference is that the measurement of the vector doesn't originate at the center of the world. Now it would originate at the stand.
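In Lingo, the two choices for the lamp look like this (the model names are hypothetical, and you would use one call or the other):

```lingo
-- keep the lamp where it sits in the room; its position is simply
-- re-expressed relative to the stand
w.model("stand").addChild(w.model("lamp"), #preserveWorld)

-- or: keep the lamp's vector(0, 0, 30) offset, now measured from the
-- stand, so the lamp jumps west by the stand's own displacement
-- w.model("stand").addChild(w.model("lamp"), #preserveParent)
```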

Next, I set things off on the right foot by aiming the models at the player using the interpolateTo command. Finally, I apply the "content" texture to each model's shader:

45:   w.model(myFace).addChild(w.model[iterModel], #preserveWorld)
46:   if w.model[iterModel].name <> "bbFace" then
47:    w.model(w.model[iterModel].name&"Face").\
     transform.interpolateTo(w.model("bbFace").transform, 20)
48:   end if
50:   w.model(myFace).shader.texture = w.texture("content")
51:  end repeat

I start the background sound and tweak the volume. Then I fine-tune the shaders on the cage and on our primary character model:

52:  sound(8).play(member("bkLoop"))
53:  sound(8).volume = 125
54:  w.model("Sphere01").shader.emissive = rgb(255,255,255)
55:  w.model[2].shader.emissive = rgb(255,255,255)
56: end

The last element of the startup scripts is the stopMovie handler. I like to reset the world with the resetWorld command on the stopMovie event. This prevents thousands of errors from piling up as you work on the development of your game, and it reminds you that changes made to the W3D world via Lingo are never saved back to the cast member. Next, I reset the actorList and then clear the globals:

58: on stopMovie
59:  w.resetWorld()
60:  the actorList = []
61:  clearGlobals()
62: end

This script works to set up the space so that the object's scripts may gain full control over the activity within the 3D world. With relatively little effort, we've got characters that are able to move, change appearance, attack, avoid, chase, speak, think, relax, and express emotions. Each one can communicate with the player's character and have conflicts with one another and the player's character.

Perhaps most importantly, the conflict experienced is the source of the communication and the only channel for narrative exposition. The characters negotiate with one another through emotional exchange and through attacks of varying intensities. You should be able to see the framework here for more extensive implementations of these concepts.

The other parent script in Battle Ball is the parent for the narrative-management object. Unlike the character objects, the narrative-management object has no physical representation. Only one object is created from this script.

The narrative-management object handles the special events and properties that control the complexity of the game. In this challenge/reward plot model, we want the game to become more difficult as the player experiences more elements.

If the player succeeds in coaxing magic out of one of the other characters, the narrative-management object alters the level (#pLevel) of the game. The background music accelerates, and the characters are allowed to move faster. The attacks grow more punishing and the degree of attraction and repulsion characters "feel" for one another grows more severe.
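A hypothetical sketch of that level-up step follows; the handler name and the speed and attack properties are my own invention for illustration, not the book's code (only pLevel appears in the original):

```lingo
-- narrative-management parent script (sketch)
property pLevel, pMaxSpeed, pAttackStrength

on mLevelUp me
  me.pLevel = me.pLevel + 1                    -- player coaxed out magic: raise the stakes
  me.pMaxSpeed = me.pMaxSpeed * 1.2            -- characters are allowed to move faster
  me.pAttackStrength = me.pAttackStrength + 5  -- attacks grow more punishing
end
```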

Think about these factors in terms of our staircase of plot. There is an event, born of conflict, in which the player negotiates emotionally with a character in order to get the magic. The instant that the challenge is met, the stakes are raised. The player can see the cage lower toward the floor, indicating that he has moved closer to freeing his trapped comrade, and clues from his journey have probably prompted him to realize that this is his overarching goal.

Each time the player obtains more magic, the cage moves lower and the magic particle fountains change color. The pitch of the background music rises, and the player is left with a sense of both reward and an urgent renewal of the challenge.

Characters are based on people. Even characters that aren't human are generally anthropomorphic (assigned human characteristics). People want things—safety, shelter, love, group membership, self-confidence—and rarely anything more purely noble.

In life, these wants surface as the pursuit of abstract goals rooted in fundamental needs. For characters to really interest a player, they should demand something from the player.

Conflict and negotiation are more interesting than passive exposition and finger twitching. We don't know what the outcome of a negotiation will be. The more strategy options the player controls, the more engaged she will become in the narrative of the game. Even during cooperation, characters need to negotiate. Aggressively opposite characters will make this journey toward conflict an easier one to take.

In the next chapter, we begin to look at the many ways we can integrate dynamic control into the various 3D elements within a game. We'll explore cameras, lights, sounds, and even shaders that are meant to dynamically self-modify in order to enhance game play and the overall entertainment experience.
