Data Manipulation Basics

Overview

The HL engine must handle the complexities of every game system, which makes it a very sophisticated tool. However, we've invested significant time and effort to keep things as simple and streamlined as possible. The net result is that the Kit makes extensive use of a few powerful mechanisms that give you complete control over both how data is manipulated and when it is manipulated.

Given the volume of information involved in many game systems, data manipulation entails the proper sequencing of tasks. The first key mechanism is the evaluation cycle that governs when the data is manipulated. The second key mechanism is a fast and flexible classification system, called tag expressions, to identify which objects are to be manipulated and which manipulations should be applied to them. The final key mechanism is the scripting language that allows the actual manipulation of the data.

Each of these mechanisms is introduced in the sections below. An in-depth discussion of these mechanisms is provided in separate sections of the documentation.

Evaluation Cycle

In order to ensure that all data manipulation operations are applied in a correct and consistent order, the HL engine enforces a strict sequence of evaluation that you, as the data file author, completely control. This sequence is triggered repeatedly as the user makes changes to characters, and it is referred to as the "evaluation cycle".

Whenever the user takes any action, HL automatically re-evaluates all facets of the portfolio that are impacted by the change. This updates all dependencies on the user's changes. For example, if the user modifies an ability score in the d20 System, all linked skills and dependent weapons are updated to reflect the impact of the change. To safeguard against lag in the user interface when the user takes multiple actions in rapid succession (e.g. clicking the '+' button to increment a skill rating a dozen times), HL waits until the user pauses for a moment before it initiates a new evaluation cycle on the portfolio.

Everything that is performed by the Hero Lab engine is done in a specific sequence, and the majority of actions occur during the evaluation cycle. Each of these individual actions is referred to as a "task", and the complete set of tasks comprising the evaluation cycle is referred to as the "task list". For virtually every task, the data file author controls the evaluation sequence by designating when each task should be processed. There are two criteria used to determine the scheduling of a task: phase and priority.

For each game system, the data file author defines a set of phases that dictate the general sequence in which evaluation is performed. Each phase typically corresponds to a logical step in the overall evaluation cycle, such as "initialization", "before level-based calculations", "after attribute modifiers", etc. All phases are ordered, thereby dictating the sequence in which the phases are processed during the evaluation cycle.

Every task is assigned a phase during which it will be evaluated. Every task is also assigned a priority, which controls the order in which tasks within the same phase are processed. If two or more tasks have the exact same phase and priority, the engine uses a number of rules to order them. If tasks are still scheduled for the same time after those rules are applied, the engine is free to process them in whatever sequence it finds convenient, and this order may change from one evaluation pass to the next. Consequently, assigning the correct phase and priority is often critical, since it ensures that modifications are applied before any tests that rely on those modifications are performed.
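As a rough illustration of how phase and priority control scheduling, the sketch below shows two eval scripts (introduced later in this section) that must run in a specific order: the first applies a bonus early in the cycle, and the second computes a final value that depends on that bonus. The phase names, field ids, and priorities shown here are purely hypothetical and are not taken from any particular game system.

  <eval index="1" phase="Setup" priority="1000"><![CDATA[
    ~ applies a bonus early in the evaluation cycle
    field[trtBonus].value += 2
    ]]></eval>

  <eval index="2" phase="Final" priority="5000"><![CDATA[
    ~ scheduled later, so the bonus above is guaranteed to already be in place
    field[trtFinal].value = field[trtUser].value + field[trtBonus].value
    ]]></eval>

If the two scripts were accidentally given the same phase and priority, the engine would be free to run them in either order, and the final value might be computed before the bonus is applied.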

NOTE! When the evaluation cycle begins, it continues until completion. This is usually transparent to the user, but it can become noticeable on older (i.e. slow) computers when the data files are highly complex. Therefore, it's best to utilize tag expressions whenever possible to limit the number of objects that must be processed during evaluation. Similarly, it's typically better to use tags instead of scripts when possible, because tags are significantly faster to evaluate.

Tag Expressions

Tags form a fundamental building block upon which much of HL is constructed, and tag expressions are where they become of critical importance. Since the vast majority of objects you'll be managing are things and picks, there must be a way to identify the proper subset of these objects that apply to a particular situation. For example, attributes, skills, and weapons are used in completely different ways, so you want to keep them separate from each other – yet they are all things (or picks). The solution is to assign tags to each object and then use a tag expression (or tagexpr for short) to identify the subset of objects that apply in a given situation. A major (separate) section of the documentation is dedicated to the subject of tag expressions, but a brief overview is valuable at this point.

A tag expression is essentially a filter that gets applied to all objects of a particular type (e.g. things or picks) and selects only the ones that meet the specified criteria. Tag expressions are Boolean expressions, which means they evaluate to a simple "true" or "false" result. They examine all of the assigned tags and determine whether those tags satisfy the expression or not. Separate criteria can be combined, allowing you to require that multiple criteria all be met, that one of a set of criteria be met, that certain criteria be excluded, or some combination thereof. For example, a tag expression could test whether a thing has the tag "Elven" from the "Language" group. Or a more complex tag expression could test whether a thing has the "Language.Elven" tag and also has either the "Race.Elf" or "Race.HalfElf" tag.
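The two examples just described look like the lines below when written as actual tag expressions; the "&" and "|" symbols are the "and" and "or" operators, and the group and tag names simply mirror the prose above:

  Language.Elven
  Language.Elven & (Race.Elf | Race.HalfElf)

A "!" in front of a term excludes objects that possess that tag, so a hypothetical expression like "Language.Elven & !Race.HalfElf" would select things that have the Elven language but are not half-elves.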

Since tag expressions can utilize full Boolean logic (i.e. "and", "or", "xor", "not") and can even extract and test numeric values from tags, tag expressions can model extremely complex conditions without difficulty. The bottom line with tag expressions is that they provide a powerful and flexible method for quickly determining whether to include or exclude an object, and they are based exclusively on the set of tags assigned to that object. As such, they are used extensively throughout HL.

Scripts

HL makes extensive use of scripts to allow the data file author substantial freedom and flexibility. In fact, scripting is such a fundamental and diverse topic that huge sections of the documentation are dedicated to various facets of writing scripts. This section merely provides a brief overview.

The scripting language syntax within the Kit is relatively simple. You can declare variables, assign values, perform simple conditional tests, and utilize a number of built-in intrinsic functions for various purposes. There is also a syntax that allows access to all of the objects within a given actor, such as the various picks and containers, plus the field values and tags that may be assigned to them. The language syntax itself is somewhat similar to the age-old Basic language. Using scripts, an author can pretty much do whatever is necessary to properly model the requirements of a given game system.
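As a minimal sketch of what a script looks like, the fragment below declares a variable, tests a tag on the hero, and adjusts a field value. The specific tag and field names are purely illustrative and are not meant to match any particular game system:

  ~ declare and initialize a local variable (a "~" begins a comment)
  var bonus as number
  bonus = 0

  ~ grant a bonus if the hero possesses the illustrative Race.Elf tag
  if (hero.tagis[Race.Elf] <> 0) then
    bonus = 2
    endif

  ~ apply the result to a field on the current pick
  field[trtBonus].value += bonus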

To make the writing of scripts easier, the Kit supports re-usable procedures. A procedure is nothing more than a mini-script that can be called from multiple places. The data files for each game system make extensive use of procedures so that many scripts can be reduced to simply calling one or two procedures to do all the work.
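As a rough sketch (using hypothetical ids), a procedure is defined once in the data files and then invoked from a script via the "call" statement; the chapters dedicated to scripts cover the precise declaration syntax:

  <procedure id="SampleCalc" scripttype="eval"><![CDATA[
    ~ shared logic lives here, written just like a normal script body
    field[trtFinal].value = field[trtUser].value + field[trtBonus].value
    ]]></procedure>

  ~ elsewhere, inside an eval script on a thing or component
  call SampleCalc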

In practice, you will find yourself re-using pre-defined procedures all over the place as you add your own custom material. Complete details on procedures are covered in the chapters dedicated to scripts.


Eval Scripts

The type of script that you will find yourself writing most often is the eval script, so named because these scripts are evaluated as tasks during evaluation processing. As such, each eval script must be assigned an appropriate phase and priority that dictate when the script will be processed.

Every eval script is associated with a thing, but a given eval script can derive from two different sources. It is common to define a small number of eval scripts as part of every component. Each such eval script is automatically inherited by every thing that derives from that component. It is also possible to define additional eval scripts for individual things.
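For example, an eval script defined on an individual thing might look like the sketch below; the same <eval> element could instead appear within a component definition, in which case every thing deriving from that component would inherit it. The ids, compset, phase, and priority shown here are illustrative only:

  <thing id="skExample" name="Example Skill" compset="Skill">
    <eval index="1" phase="Traits" priority="5000"><![CDATA[
      ~ this script applies only to this one thing
      field[trtBonus].value += 1
      ]]></eval>
    </thing>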

Since an eval script is performed during evaluation processing, a separate task is always created for each eval script, and that task can then be scheduled by the engine. However, tasks are not associated with things directly. If a thing is added to a container multiple times, its eval scripts must be processed separately for each resulting pick. Consequently, separate eval script tasks are created and scheduled for every pick.

Eval scripts are the most commonly used type of script precisely because they can be scheduled. Many facets of a complex role-playing game system are closely inter-dependent, so dependent calculations must be performed in a carefully ordered sequence to ensure that all of the game mechanics are accurately implemented within the data files. Eval scripts provide the means for this scheduling.


Eval Rules

As a companion to eval scripts, Hero Lab also supports eval rules, and you will likely find yourself writing a fair number of these as well. Just like eval scripts, eval rules are scheduled as tasks and performed with specific timing during the evaluation process. Standard validation rules are performed after all evaluation processing is completed, which can often be quite restrictive, thereby making eval rules a valuable resource for data file authors.

Eval rules are a hybrid of eval scripts and validation rules. Since eval rules are tied to things, they are always scheduled as tasks for picks, so every eval rule effectively possesses a scope of the specific pick with which it is associated. This might seem limiting at first, but it's actually exactly what you'll want for at least 95% of the rules you'll write for a game system.

Another key facet of eval rules is that each must be assigned a message, and may optionally be assigned a summary, which are reported to the user if the rule is not satisfied. Like a normal validation rule, the message is the text shown in the validation report within Hero Lab that tells the user what's wrong with the hero. The summary is the text shown in the validation bar at the bottom of the main window, and it defaults to the message text if left blank.
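Putting the pieces together, an eval rule might look something like the sketch below. The phase name, priority, field id, and text are all hypothetical; the validif test reports whether the rule is satisfied, and the message and summary attributes supply the two pieces of text described above:

  <evalrule index="1" phase="Validate" priority="5000"
      message="Example Skill may not exceed a rating of 5"
      summary="Skill rating too high"><![CDATA[
    ~ the rule is satisfied when the test below is true
    validif (field[trtUser].value <= 5)
    ]]></evalrule>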