5. Discussion
Composite learning tasks consist of a three term contingency between the organism's behavior (B), the perceived stimuli (CS) and the reinforcer (US). In this study, the unique advantages of the Drosophila flight simulator allowed for a comprehensive investigation into the contributions of the single associations to a composite learning task. The central question in this study is: which associations are formed in Drosophila at the torque meter? In the replay experiment, the effect of operant behavior (B-US and B-CS associations) on operant pattern learning (a composite task) was assessed. In the transfer experiments, the possible formation of genuinely classical (i.e. behavior independent CS-US) associations during composite operant conditioning was examined. This was accomplished by developing a new composite learning paradigm, sw-mode. Using this new paradigm, the impact of the classical (CS-US) associations on learning performance in a composite learning situation was investigated. Finally, the question of which associations are formed when more than one CS-US association is allowed (i.e. the properties of visual memory acquisition in Drosophila) was explored for the first time in an explicitly composite learning task.

5.1 Contributions of single associations to Drosophila visual learning

5.1.1 The operant component: B-CS and B-US associations
It is conspicuous that operant pattern learning (B, CS and US) leads to a significant learning score after only 8 minutes of training, whereas it takes 16 minutes of classical replay training (CS-US) for the flies to reach a significant performance (Fig. 4a, b). On the other hand, classical pattern learning (Fig. 4c; Brembs, 1996; Wolf et al., 1998) yields learning scores as high as operant pattern learning after only 8 minutes. Estimating the amount of reinforcement by multiplying the temperature of the IR-beam by the time the flies spent in the heat, it appears that classical learning roughly parallels energy uptake during training (Table 2). Most importantly, for similar learning scores this energy uptake is considerably larger during classical than during fs-mode operant learning. Compared to the total amount of heat, the distribution and duration of hot and cold periods as well as the dynamics of pattern motion seem to be of minor importance for the learning success. Thus, to reach the same learning scores in the same pattern recognition task, composite operant conditioning requires less reinforcement than classical conditioning.
                               Time in the heat   Est. energy   PI     N
composite operant fs-mode      0.7 min            41.1          0.45   30
operant ext. fs-mode           1.5 min            85.7          0.47   30
operant sw-mode                2.0 min            118.0         0.42   70
pure operant torque learning   2.4 min            140.5         0.29   30
classical yoked control        0.7 min            41.1          0.15   30
classical ext. yoked control   1.5 min            85.7          0.25   30
rotating classical             4.0 min            168.0         0.43   36
Table 2: Reinforcement times and estimated energy uptake in operant and classical conditioning at the flight simulator. Data have been pooled from both my diploma and my doctoral thesis. In the operant training, the time each fly spent in the heat was calculated from the individual avoidance scores. The amount of energy taken up by each fly (in relative units) was estimated using the temperature measured at the position of the fly and multiplying it by the time the fly spent in the heat. PI - performance index in the learning test after the last training. N - number of flies.
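The energy estimate underlying Table 2 is a simple product. The following Python sketch illustrates the calculation with made-up numbers; the temperature and avoidance values are placeholders, and the conversion from avoidance score to time in the heat assumes the score is defined as the normalized difference between cold and hot periods, (t_cold - t_hot)/(t_cold + t_hot):

```python
def time_in_heat(total_time_min, avoidance_score):
    """Time spent in the heat, derived from the avoidance score.

    Assumes the avoidance score is (t_cold - t_hot) / (t_cold + t_hot),
    so a score of 0 means half the training time was spent in the heat.
    """
    return total_time_min * (1.0 - avoidance_score) / 2.0

def estimated_energy(temperature, heat_time_min):
    """Relative energy uptake: temperature at the fly times time in heat."""
    return temperature * heat_time_min

# Illustrative numbers only (not the measured values behind Table 2):
heat_time = time_in_heat(8.0, 0.8)         # 8 min training, avoidance 0.8
energy = estimated_energy(60.0, heat_time) # relative units
```

Because the estimate is a plain product, it ignores the distribution of hot periods over the training session, which is exactly the point made above: for the learning success, the total amount of heat seems to matter more than its temporal fine structure.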
What makes the operant training more effective than the classical one? In principle, the operant behavior may either act during association acquisition, support memory recall, or exert its effect in both phases. In the first case, only pattern memory (i.e. the CS-US association) is formed and no behavioral (i.e. B-CS or B-US) associations are detectable during retrieval. This genuinely classical association formation must then be facilitated by operant behavior, as it takes more training to form during classical training. In the second case, additional operant (B-US and/or B-CS) associations are formed that act additively on memory recall. A first investigation (Guo et al., 1996) found experience in the flight simulator prior to training (i.e. B-CS associations) to positively affect learning scores after training. In the present setup, however, some of these results could not be reproduced (data not shown), most likely due to the smaller coupling coefficient (see MATERIALS AND METHODS) between yaw torque and angular arena speed used here. In any case, if there were any B-CS or B-US associations contributing additively to the CS-US association to improve pattern learning in fs-mode, a comparative analysis of the micro-behavior prior to training and after both operant and classical training should reveal such effects. The evidence from Brembs (1996) provides no support for this alternative. Admittedly, these negative results do not entirely preclude that such differences still exist, hidden in the temporal fine structure of yaw torque modulations. However, as the advantage of the operant training is large (Fig. 4), one would expect the behavioral strategy providing it to show saliently in the torque traces.
5.1.2 The classical component: CS-US associations
For the other option - facilitation of CS-US association acquisition - however, there is positive evidence from the transfer experiments (Fig. 6). The finding that the fly establishes pattern and color preferences while being engaged in one behavior (sw-mode) and later displays them by a different behavior (fs-mode) supports the notion that conditioned preferences are behavioral dispositions (central states) rather than modified motor patterns (for a general discussion of behavioral dispositions see Heisenberg, 1994). The necessity of a familiarization training slightly weakens this conclusion. In principle, the 60s familiarization training in the particular situation after the switch mode could be sufficient to generate the preferences anew, despite the fact that without the preceding sw-mode training it is not. This interpretation is considered unlikely and, instead, the view is favored that recall of the memorized 'classical' association is dependent not only on the sensory but also the behavioral context. In other words, an association might be easier to recall in the behavioral state in which it was acquired than in a different behavioral situation. The asymmetry in the transfer experiments between fs-mode and sw-mode is one of three conspicuous asymmetries that receive in-depth treatment below. 
5.1.3 Is the operant equivalent to the classical component?
Rescorla (1994) suggested that the behavior of the animal might compete with the sensory signals in the animal's search for a predictor of the reinforcer. Unsuccessfully searching for temporal contingencies between motor output and the reinforcer could reduce the efficiency of the CS-US association formation in classical conditioning. Conversely, successful behavioral control of the CS and the reinforcer may enhance the acquisition process. Could this be a symmetrical effect? Maybe the efficiency of B-US association formation is also reduced if the animal searches unsuccessfully for a temporal contingency between a sensory stimulus and the reinforcer? In other words, does a composite operant experiment such as sw-mode yield better learning than a 'purely' operant one such as yaw torque learning? Although there is a clear and repeated tendency toward lower learning and lower avoidance scores if color changes are not related to the fly's behavior (Fig. 7b) than when they are (Fig. 7c, d), large variation in these comparatively artificial and difficult experiments prevents this tendency from being statistically reliable. This is the second of the conspicuous asymmetries that deserve special treatment (see below).
Thus, a facilitating effect of adding components to form a three term contingency has been shown only for the fs-mode (Fig. 4). A replay experiment for the switch mode is still in progress. Research in this direction has been hampered by the condition of our present fly stocks. For unknown reasons, the flies in our department currently show weakened classical learning while operant conditioning seems unaffected. It is probable, however, that operant behavior also has a facilitating effect in sw-mode training: in sw-mode training flies are exposed to the heat only half as long as during classical conditioning and take up roughly 70% of the estimated energy, yet they reach about the same learning scores (PI=0.4).
In fs-mode the fly cannot modify a motor program (i.e. form a simple B-US association) according to experience - the motor programs used for choosing certain orientations are all the same whether the orientations are associated with the heat or not. In sw-mode, however, it explicitly has to do so in order to solve the learning task. Moreover, it can learn to modify its yaw torque even without the aid of external stimuli, as in yaw torque learning. Nevertheless, flies learn to discriminate the visual cues, as is demonstrated by the transfer to fs-mode (Fig. 6). Even more surprisingly, this learning seems to block the B-US association that would be formed if the visual cues were not related to the fly's behavior (Fig. 7b). This is another finding corroborating the proposition of Wolf and Heisenberg (1991) that it is important to distinguish between operant activity and operant conditioning. While operant activity controls biologically important stimuli, operant conditioning is an after-effect of operant activity that need not always follow operant behavior. The fact that the same operant behavior (controlling heat with yaw torque) in one case (Fig. 7b) leads to a lasting modulation of that behavior, whereas in the other (Fig. 7f) it does not, is exemplary for this distinction. The effect of stimuli preventing B-US associations from forming is also reminiscent of the 'overshadowing/blocking of a response-reinforcer association' found in vertebrates (Williams, 1975; Pearce and Hall, 1978; Williams, 1978; St. Claire-Smith, 1979; Williams et al., 1990) including humans (Hammerl, 1993). Those experiments trained rats in an experimental chamber to press a lever several times (B) to obtain a reward (US) after a certain delay. Barpressing (i.e. the B-US association) was found to be reduced when each reinforcement was signaled by a stimulus (CS).
In these cases, however, the decrement in operant performance is not surprising, as the stimulus always has better predictive value for the reinforcer than the behavior: while not every barpress leads to reinforcement, every stimulus presentation is followed by reinforcement. To my knowledge there is no vertebrate study in which behavior and stimulus have been equated for their predictive value. In sw-mode training, however, the contingencies are perfect: every yaw torque sign inversion leads to a change in arena coloration and in temperature. Thus, this profound difference between single and multiple association learning tasks was discovered here for the first time. Why does yaw torque seem to lose its associative strength when it can be used as a predictor for reinforcement equally as well as the colors? Possibly the B-US association is not really absent, but weaker and/or incorporated into a new association. One could imagine a sequential B-CS-US association or a hierarchical (B-CS)-US association. After reversing the contingencies between yaw torque domain and colors (i.e. inverting the B-CS relation), the behavior is not modified to avoid the punished color. Instead, no yaw torque domain (or color) seems particularly preferred. If anything, a tendency to show the correct yaw torque modulation and to disregard the colors can be observed (Fig. 7i). This invites the speculation that the flies may use the instant at which the arena illumination changes color as a 'landmark' signaling yaw torque sign inversion (and hence reinforcement), without associating any particular color with the heat [i.e. a (B-CS)-US association with the color switch as CS]. This consideration also sheds more light on the need for a familiarization training when the CS-US component is tested in fs-mode, as the flies might use the familiarization training as a signal for which 'side' of the color switch is reinforced in the new situation.
Thus it seems that the B-CS association also contributes to the learning process. Guo et al. (1996) have shown that such processes can indeed occur at the Drosophila flight simulator and that they increase the performance indices. There is evidence that such B-CS learning also occurs in the absence of overt reinforcement (Guo et al., 1996; Brembs, 1996). Such reinforcer-independent motor learning could be understood as a basic behavioral tuning mechanism that probably occurs continuously without being much noticed. As it does not seem to be specific to the learning tasks examined in this study, it will not become a major focus of this work. In summary, it is conspicuous that behaviors and stimuli are apparently not treated as equivalent predictors of reinforcement (Fig. 7). This consideration will be discussed at length together with the other two asymmetries:
5.2 Three conspicuous asymmetries
It was mentioned in the INTRODUCTION that a formal analysis of the three term contingency suggests a symmetrical relation between the components of a composite learning task. Therefore, the three asymmetries in the association analysis after sw-mode and fs-mode training deserve special attention. (i) Why can a color or pattern memory be transferred from sw-mode to fs-mode but not vice versa (Fig. 6)? (ii) If a familiarization training can reveal a single CS-US association out of a seemingly combined association after sw-mode training, why can the same familiarization training not do the same with the B-US association (Fig. 7)? (iii) Why is a composite operant procedure more effective than a simple classical one (Fig. 4), but not more effective than a simple operant task (Fig. 7)?
(i) Obviously, although both sw-mode and fs-mode take place at the torque meter in the same arena and involve operant behavior, they are entirely different. While in fs-mode the choice of flight direction and between the two temperatures depends on the ability to fly straight and, beyond that, upon a sequence of discrete, well-timed orienting maneuvers, in sw-mode it is the actual value of the fly's yaw torque that controls this choice. Moreover, while in fs-mode the fly receives instantaneous feedback on the effect its behavior has on its stimulus situation, in sw-mode it receives this feedback only at the point where the experimenter decides to invert the sign of the torque trace. Evidently, fs-mode is less artificial than sw-mode. It is thus easily appreciated that the CS-US association formed in classical pattern learning can be expressed in the fs-mode test without familiarization training (Brembs, 1996; Wolf et al., 1998). Judging from the transfer experiments, one would predict this to be more difficult if the test after classical conditioning were in sw-mode. If that were so, it would corroborate the conclusion from the transfer experiments that in principle operant pattern or color learning facilitates a behavior independent CS-US association and the familiarization training is necessary to overcome contextual effects. One might expect more familiarization training to bring out this association also in the fs-mode to sw-mode transfer (Fig. 6, column III).
(ii) Similarly, one might predict that more familiarization training might bring out the B-US association upon removal of the color filters after sw-mode training (Fig. 7i). The difficulty, on the other hand, of modifying yaw torque without reinforcement (or without exactly the same three term contingency as during reinforcement, see Fig. 6, columns II, III, Fig. 7f, h, i) may also indicate that behaviors and stimuli cannot be regarded as equivalent (i.e. equally salient) predictors of reinforcement, but that there may be a preference to add stimuli to a predictor rather than a behavioral modification. Wolf and Heisenberg (1991) have shown operant behavior to flexibly and very quickly adjust the fly's stimulus situation according to its desired state. Reducing its behavioral options more permanently in anticipation of reinforcement may be an animal's last resort. In other words: the experiment depicted in Fig. 7 can be perceived as an overshadowing experiment (Fig. 8), where one of the elements in the compound is a behavior (yaw torque modulation) and the other is a stimulus (colors). In this case, stimulus learning (CS-US) overshadows behavioral learning (B-US). Overshadowing is usually described as a difference in the associabilities α of the two components in equation (2), the delta rule. Such a difference may be caused by different stimulus intensities. Alternatively, the animal may be predisposed by phylogenetic or individual experience to regard one component as a better predictor of the US than the other. One is inclined to generalize asymmetry (ii) as a difference in associability between behavioral (B) and sensory (CS) predictors of reinforcement (US) if both are available.
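This overshadowing interpretation in terms of the delta rule can be made concrete with a small numerical sketch. The following Python simulation is purely illustrative (the associability and learning-rate values are assumptions, not fitted to fly data): two elements of a compound share one prediction error, and the element with the higher associability ends up with correspondingly more associative strength:

```python
def rescorla_wagner_compound(alphas, lam=1.0, beta=0.5, trials=50):
    """Train all elements of a compound together; return final strengths.

    alphas: associability of each element (the delta rule's alpha).
    lam:    asymptote of conditioning set by the reinforcer (lambda).
    """
    V = [0.0] * len(alphas)
    for _ in range(trials):
        error = lam - sum(V)            # shared prediction error
        for i, alpha in enumerate(alphas):
            V[i] += alpha * beta * error
    return V

# Assumed associabilities: colors (CS) 0.5, yaw torque modulation (B) 0.1
V_cs, V_b = rescorla_wagner_compound([0.5, 0.1])
# The ratio of final strengths equals the ratio of associabilities (5:1),
# so the stimulus overshadows the behavior.
```

Because both elements update from the same error term, their final strengths are exactly proportional to their associabilities, which is the formal sense in which a more associable sensory predictor 'overshadows' a less associable behavioral one.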
(iii) It is surprising that a 'pure' operant conditioning task such as yaw torque learning should be just as efficient as a composite task such as sw-mode learning (Fig. 7), while classical pattern learning is less efficient than operant pattern learning (Fig. 4). Why should one single association task be less efficient than a composite task while the other is not? For one, from observing the animals one would strongly expect single (i.e. either only CS-US or only B-US) association tasks generally to be less efficient than composite experiments. Second, comparing the amount of heat uptake during training in the various procedures used in this study (Table 2), one can see that torque learning requires roughly the same amount of reinforcement as classical conditioning, while sw-mode training is slightly less efficient than fs-mode learning but still leads to less energy uptake than classical training. Thus, the relatively small difference between yaw torque learning and sw-mode learning can be attributed to the difficulty and artificiality of the sw-mode learning task.
5.3 A hierarchy of predictors
Apparently, once both stimuli and behaviors with similar predictive value are available during training, they are combined into a three term predictor (operant and classical associations). Once one of the three relations is altered, it takes special treatment (familiarization training) to reveal the remaining associations. In contrast to the considerations above (see INTRODUCTION), the individual associations are not equivalent: the amount of familiarization training required seems to vary with the component of the three term contingency. Components with high associability (i.e. easily learned) need less familiarization training than those with low associability. Compiling the data so far, one can postulate a hierarchy of predictors. Operant behavior occurring during composite operant conditioning should hardly be conditioned at all (Brembs, 1996; Fig. 7). Classical stimuli that bear no relation to the behavior of the animal should be of intermediate associability, as is operant behavior alone (Figs. 3, 5, Table 2). The relative associability of behaviors or stimuli alone most probably depends on the choice of stimuli/behaviors. Stimuli that are controlled by operant behavior should accrue associative strength most easily, whether the direct B-US association can be formed or not (Figs. 3, 5). It would be most interesting to test these predictions in other animals, including humans.
5.4 Properties of visual memory acquisition in Drosophila
Having established the paramount significance of the CS-US association in composite operant conditioning, the flight simulator is used for the first time to methodically examine the properties of this single association within an explicitly composite learning situation. Operant visual learning of Drosophila at the flight simulator (i.e. a composite task including B = choice of flight direction via yaw torque, CS = colors and patterns, and US = heat) is explored using compound stimuli as CSs. Thus, two CS-US associations are possible, and again the question arises which associations are formed and whether the relation between them is symmetrical. The overshadowing experiment shows that flies acquire, store and retrieve the two CSs 'colors' and 'pattern orientations' separately. They do not store them only as a compound. Whether they can distinguish the compound from the sum of the components ('configural learning') has not yet been investigated. In contrast to the similar experiment depicted in Fig. 7, this experiment did not reveal any differences in associability between the elements of the compound (Fig. 8c, d), and no familiarization training was necessary. Note that in discrimination learning each of the component CSs consists of a CS+ and a CS- (blue and green; upright and inverted T). Dwelling time analysis seems to indicate that for colors and patterns both the CS+ and the CS- are remembered (data not shown). This brings the number of simultaneously stored memory items up to four. To further investigate the associations formed whenever more than one CS-US association is enabled, blocking, second-order conditioning (SOC) and sensory preconditioning (SPC) experiments were carried out. As these experiments were inspired by the successful development of quantitative learning rules in vertebrates, the results obtained here are compared to vertebrate conditioning data.

5.4.1 Blocking and second-order conditioning

As the associabilities of the two stimuli CS1 and CS2 are generally equal in a blocking experiment (see INTRODUCTION), the difference in associative strength after conditioning has to be due to the reinforcement term of the delta rule. If CS1 was trained to predict the reinforcer to 100%, the value of the delta rule equals zero and reinforcement is no longer effective (λ-ΣV=0). However, associability need not be constant, but might change with conditioning experience as well. In the model proposed by Pearce and Hall (1980) the associability of a stimulus is proportional to λ-ΣV, while the reinforcement remains constant. In a blocking experiment λ-ΣV=0 and, therefore, CS2 is not associated with the US. Both explanations have in common that the amount of blocking is crucially dependent on the degree to which the pretrained stimulus is recognized in the compound as a predictor of reinforcement. As there is evidence both for associability changes (Holland, 1997) and for changes in reinforcement processing (Schultz, 1995; Kim et al., 1998), one might suspect that the two kinds of explanation are not mutually exclusive. Indeed, Holland (1997) suggests that processes modifying both associability and reinforcement are at work. The overshadowing experiment ensured that the two stimuli do not differ in associability (α) without prior conditioning (Fig. 8). Nevertheless, blocking could not be detected.
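The blocking prediction of the delta rule can be sketched in a few lines of Python. The simulation below is illustrative only (parameter values are assumptions): CS1 is pretrained to asymptote, then the CS1+CS2 compound is reinforced; because λ-ΣV is already zero at the start of the compound phase, CS2 acquires essentially no associative strength:

```python
def rw_trial(V, present, lam, alpha=0.3):
    """One delta-rule trial; only stimuli in 'present' are updated."""
    error = lam - sum(V[s] for s in present)  # lambda - sum(V)
    for s in present:
        V[s] += alpha * error

V = {"CS1": 0.0, "CS2": 0.0}
for _ in range(100):               # phase 1: CS1 alone, reinforced
    rw_trial(V, ["CS1"], lam=1.0)
for _ in range(100):               # phase 2: CS1+CS2 compound, reinforced
    rw_trial(V, ["CS1", "CS2"], lam=1.0)
# CS1 already predicts the US fully, so the residual error is ~0 and CS2
# stays near zero: this is the blocking effect the experiments tested for.
```

The Pearce and Hall (1980) account would produce the same outcome here by driving the associability of the compound elements toward zero instead of the reinforcement term; the simulation only demonstrates the shared prediction that CS2 remains unconditioned.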
In the present experiments the key conditions allowing blocking to occur have been met. Control and experimental groups differed in the predictive value of the compound (Fig. 9). The first training phase caused neither overshadowing nor a large SOC (possibly masking a blocking effect), as the experiments in Figs. 8 and 10 show. Nevertheless, despite varying compound training and control procedures (see MATERIALS AND METHODS) no blocking effect could be detected. While this is one more piece of evidence that blocking might be absent in invertebrates, let us first consider potential other explanations why blocking was not found in this study. 
There are two basic reasons why blocking might not show up in the flight simulator design: (1) either some components of the setup or the choice of stimuli principally interfere with an otherwise detectable blocking effect, and/or (2) blocking cannot be obtained with the experimental time course used here.
(1) It is argued above that visual learning at the flight simulator is a case of classical learning in which the operant behavior facilitates CS-US acquisition. Although it is considered unlikely, it cannot be excluded that the operant aspect or some other property of the flight simulator paradigm interferes with blocking (see INTRODUCTION). It could be that the high degree of operant control over the stimuli prevents blocking of redundant stimuli. The extreme behavioral restriction of the tethered animal or the particular choice of stimuli and feedback conditions could be prohibitive as well. Bitterman (1996) argues that blocking can only be shown within and not between modalities (Couvillon et al., 1997). Colors and patterns might be similar to two modalities. It cannot be ruled out, but is also considered rather unlikely, that any existing, small blocking effect could be masked by the equally small SOC effect.
(2) More importantly, though, the failure to obtain blocking could be due to a significant generalization decrement of the learning upon introduction of the second CS in the compound phase (Fig. 9a). The same rapid extinction of the generalized learning is observed in the SOC experiments (Fig. 10). This quick decay of the memory effect may continue in the presence of the US in the blocking experiment, attenuating the predictive value of CS1 enough to render the flies nearly naive even in the shorter (not shown) blocking experiment. In this case the compound stimulus (CS1+CS2) might be sufficiently 'surprising' (i.e. the value of the delta rule might be sufficiently large) for the new stimulus (CS2) to acquire associative strength. A more extensive (maybe spaced) pretraining (CS1+US) together with other technical measures should decrease the generalization decrement as well as minimize extinction. While it is reassuring that the SOC effect in this study is too small to mask any significant blocking, this fact may indicate that the associative strength of CS1 after the standard training procedure is too weak to serve as a sufficiently 'safe' predictor in the compound. On the other hand, the larger learning score in the intermittent compound test in the blocking vs. the control group, and the (albeit small) final learning score in the SOC experiment, are difficult to reconcile with these arguments. One would at least expect partial blocking, since the compound is, indeed, better predicted in the blocking than in the control groups. As a minimal conclusion, blocking in Drosophila is a less reliable and robust phenomenon than it appears to be in vertebrates.
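The generalization-decrement argument can likewise be illustrated with the delta rule. In the hedged Python sketch below, the decrement is modeled by the simple assumption that only a fraction g of CS1's pretrained strength carries over into the compound (g and all other parameters are made-up values, not measurements); the residual error λ - g·V1 is then shared between both elements, so CS2 can acquire associative strength despite the pretraining:

```python
def compound_after_decrement(V1_pretrained, g=0.5, alpha=0.3, lam=1.0,
                             trials=100):
    """Compound training after a generalization decrement of CS1.

    g: assumed fraction of CS1's pretrained strength that generalizes
       to the compound situation (g=1 would reproduce full blocking).
    """
    V = {"CS1": g * V1_pretrained, "CS2": 0.0}
    for _ in range(trials):
        error = lam - V["CS1"] - V["CS2"]   # residual prediction error
        V["CS1"] += alpha * error
        V["CS2"] += alpha * error           # CS2 absorbs half the residual
    return V

V = compound_after_decrement(V1_pretrained=1.0)
# With g=0.5, CS2 ends up with about half of the residual error (~0.25)
# instead of the ~0 predicted without a decrement: no blocking is observed.
```

In this reading, the absence of blocking would not reflect a different learning rule but merely an incompletely transferred CS1 memory, which is why more extensive pretraining is suggested above as a remedy.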
Even if there are a number of reasons why blocking might be implemented in Drosophila but not detected in this study, the possibility remains that invertebrates do not exhibit blocking. Even though control and blocking groups differed in the predictive value of the compound (Fig. 9), this difference might have been insufficient to reveal blocking not because of stimulus generalization, but on principled grounds. Maybe in invertebrates the difference 'naive vs. conditioned' at the beginning of compound training is not sufficient to induce a difference between experimental and control groups after the CS1+CS2+US training. Evidence that this might be the case comes from a recent study in freely flying honeybees, which is currently the only undisputed case in which blocking appears to have been detected (Couvillon et al., 1997). Couvillon and coworkers (1997) pretrained CS1 as a conditioned inhibitor during a discrimination training in the control group, whereas it became a conditioned excitor in the blocking group. Transferred to the flight simulator paradigm, this would mean that compounding CS2 with CS1 would have to initiate reversal training (e.g. punishment on the upright T in pretraining and on the inverted T in the compound phase). In other words, the difference between blocking and control groups would be maximized by the control animals not being naive (i.e. PI=0 as in the present study) but showing negative learning scores. This, however, would rather indicate an enhancement of associative strength to CS2 in the control groups (i.e. a particularly large value of the delta rule) than a reduction to CS2 in the blocking group (i.e. a particularly small value of the delta rule) and would thus still not demonstrate blocking. The necessary naive control group is not shown in the Couvillon et al. (1997) study.
If the still scarce data were to be interpreted as a divergence in vertebrate vs. invertebrate learning mechanisms, the question immediately arises: what makes the elementary property of behavioral plasticity underlying blocking different in vertebrates and invertebrates? It has been argued before that blocking might involve attention-like processes (Mackintosh, 1975b) or some concept of expectation and prediction (Rescorla and Wagner, 1972; Sutton and Barto, 1990). In humans, blocking has been implicated in causal judgment (Miller and Matute, 1996). However complex the explanatory concept may be, the proposed neural mechanism (Holland, 1997; Fanselow, 1998; Thompson et al., 1998) seems simple enough to be implemented also in the less complex invertebrate brains. However, vertebrate brains (especially those of the intensively studied mammals) are considerably larger than those of invertebrates. Probably their ability to quickly discern essential from redundant or otherwise unimportant events is also much better than in invertebrates. While rats in an experimental chamber might learn that in this situation the delivery of the reinforcer depends solely on one stimulus and nothing else (especially if trained in this chamber for weeks), it appears that for an invertebrate it is more difficult to reach this level of predictive value. One may even speculate that vertebrates may reach such a high level of confidence in the predictive value of a stimulus that they can afford to ignore redundant stimuli despite their relation to the reinforcer. In contrast, invertebrates may rely on redundancy to compensate for a larger error-proneness of their central nervous system. It would be very important for our understanding of general brain function if indeed different acquisition mechanisms had evolved due to different error rates in vertebrates and invertebrates. Until a satisfying concept of error rate and reliability of sensory input is developed, however, this idea remains speculative.
On the other hand, one need not assume basically different acquisition processes at work in vertebrates and invertebrates. Indeed, the added CS2 is correlated with the reinforcer, and it is a matter of cost/benefit balancing whether it is taken into the association or not. This consideration, and in particular the fact that in real life there is no such thing as a 100% predictor of an event, makes it easy to appreciate that the different outcome of the blocking experiment in vertebrates and invertebrates (if the few existing data can be generalized in this manner) may not necessarily reflect a difference in basic mechanisms of learning but rather a difference in the variables and thresholds determining whether a stimulus with a rather small predictive value is added to the predictor or not. The ambiguity in the invertebrate blocking literature supports this view.
5.4.2 Sensory preconditioning
With no blocking and no overshadowing being observed in the present experiments, the only interaction of the two components in the compound stimulus is revealed by the fact that they form a reciprocal association if presented together without reinforcer (SOC, SPC). This is obvious in SOC, where CS1 assumes the role of the US, but also in SPC, where the preference and avoidance of CS2+ and CS2- (respectively) in the final test reveals that CS1+ and CS2+ as well as CS1- and CS2- have formed specific associations during the preconditioning phase. There are some earlier reports of SPC in invertebrates (Couvillon and Bitterman, 1982; Suzuki et al., 1994; Kojima et al., 1998; Müller et al., submitted). SPC can most readily be perceived as a form of 'incidental learning' in which two equally salient stimuli are associated in a symmetrical manner (as opposed to the asymmetric relation between CS or B and the US in regular associative learning). There is ample evidence for the symmetry in this association: simultaneous pairings show stronger effects than sequential ones in honeybees (Müller et al., submitted) as well as in rats (Rescorla, 1980; Lyn and Capaldi, 1994). Also in zebrafish, Hall and Suboski (1995) successfully used simultaneous light-odorant pairings. In mammals, even backward pairing leads to excitatory, rather than inhibitory, associations (Hall, 1996; Ward-Robinson and Hall, 1996; Ward-Robinson and Hall, 1998). In the flight simulator, the color of the arena illumination changes exactly between two patterns, providing neither a forward nor a backward relationship between colors and patterns. This difference between incidental learning (for a review see Hall, 1996) and regular conditioning is no surprise, as the asymmetric dependence on the temporal arrangement of CS and US in regular conditioning is reflected by the difference in biological significance between CS and US (for a review on this timing dependence see Sutton and Barto, 1990).
Dill and Heisenberg (1995) have reported one case of incidental learning at the flight simulator called 'novelty choice': flies remember patterns without heat reinforcement and compare them to other patterns later. Novelty choice learning appears to be considerably faster than the preconditioning effect observed in this study. In the novelty choice paradigm a one-minute exposure already biases the subsequent pattern preference (Dill and Heisenberg, 1995), whereas in the present experiments a ten-minute preconditioning phase is not sufficient for a significant association to form. Hence, establishing a memory template for a visual pattern is a fast process, whereas associating different types of sensory stimuli takes more time. The fly probably links pattern orientations and colors during preconditioning because the sudden changes in the color of the illumination are firmly coupled to certain changes in pattern orientation. To detect such coincidences, the fly has to compare the temporal structure of the various sensory channels. The same mechanism has recently also been postulated for regular associative conditioning, because there, too, the animal needs to separate the CS from the context (Liu et al., 1999). In both instances, regular conditioning and sensory preconditioning, transient storage of the incoming sensory data, as in the case of novelty choice learning, is probably a prerequisite. A comparable dependence on the amount of preconditioning is observed in rats (Prewitt, 1967; Tait et al., 1972), but apparently neither in zebrafish (Hall and Suboski, 1995) nor in honeybees (Müller et al., submitted). In those reports, however, even the smallest amount of preconditioning used was sufficient to produce SPC. It may be that still smaller amounts of preconditioning would uncover a gradual increase of SPC with the amount of preconditioning in these animals as well.
Alternatively, decreasing the associability of the stimuli until SPC is lost, and then increasing the amount of preconditioning under these altered conditions, might reveal the dependence in question.
In summary, one can propose that incoming sensory data are briefly stored to allow for a search of temporal and spatial coincidences. Memory templates with similar temporal structure are bound together and kept in storage for an additional period of time.
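The buffering-and-coincidence idea sketched above can be made concrete with a toy model (purely illustrative and not part of the study; all names, the window size, and the event times are assumptions). Two hypothetical sensory channels are represented as lists of event times, e.g. color switches and pattern-orientation changes, and events are "bound" when they co-occur within a short buffer window, mimicking the transient storage of incoming sensory data:

```python
def coincident_events(channel_a, channel_b, window=0.5):
    """Return pairs of events (one from each channel) closer than `window` seconds."""
    pairs = []
    for t_a in channel_a:
        for t_b in channel_b:
            if abs(t_a - t_b) <= window:
                pairs.append((t_a, t_b))
    return pairs

def binding_strength(channel_a, channel_b, window=0.5):
    """Fraction of channel-a events that coincide with some channel-b event."""
    if not channel_a:
        return 0.0
    hits = {t_a for t_a, _ in coincident_events(channel_a, channel_b, window)}
    return len(hits) / len(channel_a)

# Color switches firmly coupled to orientation changes -> strong binding
colors = [10.0, 20.0, 30.0, 40.0]
patterns = [10.1, 19.9, 30.2, 40.0]
print(binding_strength(colors, patterns))  # 1.0

# Temporally uncoupled channels -> no binding
jittered = [12.3, 17.1, 33.8, 45.6]
print(binding_strength(colors, jittered))  # 0.0
```

In this caricature, a high binding strength corresponds to the firm coupling between color changes and pattern-orientation changes during preconditioning; channels whose temporal structures do not match yield no association.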
5.5 What is learned in a composite learning situation?
Natural learning situations most often comprise a wealth of stimuli that are at least partly under the control of operant behavior. The number of possible associations that can be formed during reinforcement is proportional to the number of stimuli. However, the number of useful predictors is always smaller than the total number of stimuli present on the occasion. The difficult task lies in finding the behaviors and the stimuli that will lead to proper anticipation of the reinforcing events. In the frog-bee example from the INTRODUCTION, it would be fatal for the frog if it stopped flicking its tongue at all insects after the encounter with the bee. Likewise, it would not be very adaptive if it ceased using its tongue altogether and tried to catch the next bee with its mouth. Apparently, it is entirely sufficient to memorize the coloration of the prey as punishing (negative reinforcement) to keep the frog from trying to catch it - the CS has acquired the avoidance-eliciting properties of the sting (US). Although an operant (B-US) association might have formed, it is not necessary. In most cases operant behavior will be flexible and fast enough to ensure proper preparatory behavior without, or with only little, aid of motor learning. Indeed, the results presented here suggest that B-US associations are at least weaker than CS-US associations in three-term contingencies and may (if present) be better characterized as sequential B-CS-US or hierarchical (B-CS)-US associations (Fig. 7). Moreover, this study has substantiated the prevalence of stimulus learning by showing that it comes to dominate any other association in a complex learning task even when equally valid behavioral predictors are present (Fig. 7). In contrast, once two stimuli share the same predictive value for the reinforcer, both can accrue the same associative strength (Fig. 8), ruling out the possibility that in all learning situations one predictor comes to dominate all others. This seems to be true even if one of the two stimuli bears a weaker relation to the reinforcer (Fig. 9). This is either a difference between invertebrates and vertebrates or a particular property of the experimental design used here; more experiments are required to find out whether invertebrates rely on more predictors than vertebrates do. Furthermore, the facilitating effect of operant behavior on the CS-US acquisition process has been shown here for the first time (Fig. 4). As expected, the more natural, complex learning tasks are easier to solve than the more artificial, single-association tasks (Figs. 3, 5; Table 2). At the same time, a new form of incidental learning was established for Drosophila (Fig. 11), showing that higher-order forms of learning first described in vertebrates can also be demonstrated in invertebrates. Evidently, Drosophila at the torque meter is a good case study showing that it is not a simple, symmetric notion of temporal proximity, but rather a more sophisticated, asymmetric set of rules that guides the selection of which predictors present in a composite learning situation are stored in memory for later use.