Drawing of students in ACT Lab, by Christian Santander, '20

Professor Forney heads the LMU Applied Cognitive Technologies (ACT) Lab, which provides undergraduates with research experience using state-of-the-art techniques in the domain of artificial intelligence. Although his own specialty is causal inference, he sponsors a wide array of student-led projects, whose details can be found in the sections below.

The ACT Lab is located in Doolan Hall 217 and is equipped with modern computing hardware. Any student interested in pursuing research related to Professor Forney's focuses, or with their own projects in mind, should read the "Joining" section below and/or contact him through the means in the Contact tab above.


The following expresses the ACT Lab's mission and priorities as drawn from Dr. Forney's interests, though the priorities and interests of student projects (listed below) may vary.

My research endeavors are strongly aligned with the college's encouragement of interdisciplinary exploration, as the core of my scholarly pursuits is found at the intersection of computing, cognitive psychology, and experimental design. To be specific, I primarily study the field of causal inference, which in the context of artificial intelligence and the data sciences attempts to make rich inferences about a system based on assumptions about the cause-effect relationships that govern its variables. The reasoning tools enabled by causality map to many of the higher cognitive skills that humans accomplish with ease, but whose formalization has evaded much of traditional machine learning and AI.

I find the causal framework's prospects most exciting in the study of counterfactual reasoning: the processes by which inferences are made about realities contrary to those observed. Proceduralizing the computation of counterfactuals provides the underpinnings of important human capacities like innovation, and learning mechanisms like regret. These analogies to the human condition require interdisciplinary efforts from the psychological sciences, which can both inform and be informed by the automation of these processes.

Although the above broadly motivates my primary line of research, I am also deeply passionate about involving undergraduates in scholarly inquiry, and investigating not only the effective tools in STEM pedagogy, but also in STEM advising, inclusivity, and diversity. My research program at LMU is thus structured into three primary Objectives:

  1. Counterfactual Reasoning in the Furtherance of AI

  2. Enhancement of Effective STEM Education, Advising, and Inclusiveness

  3. Interdisciplinary Applications with Student-Driven Teams


Interested in joining the ACT Lab or have a project you'd like to entertain? See the details below:

Judging Interest

Not sure if the lab's type of work is right for you, or simply interested in expanding your horizons? Take a look at the following books and see if their ideas excite you; these topics are the chief focuses of the lab, and involve some ideas grounded in disciplines other than computer science! Consider the following an ACT Lab "Starter Kit":

  1. Thinking, Fast and Slow (by Daniel Kahneman)

  2. The Book of Why (by Judea Pearl, my graduate advisor)

  3. Reinforcement Learning: An Introduction (by Sutton and Barto; read only Chapters 14 - 17 if beginning your journey)

  4. Counterfactual Randomization: Rescuing Experimental Studies from Obscured Confounding (by Forney and Bareinboim)

Project Types

If you are a current student interested in the ACT Lab, the following types of projects may be relevant to your current level of experience; see brief descriptions of each project type below:

Investigative / Interdisciplinary

Investigative projects develop a focused area of study, including a nontrivial literature review, the formation of arguments or research questions, or the examination of potential interdisciplinary endeavors (with primary interest in interdisciplinary partnerships with Psychology). These projects typically lack dense technical or mathematical rigor, but serve as good starting points for more mature products later.

Example: Examining the literature's treatment of counterfactual reasoning from psychology, philosophy, and computing, and writing a meta-review on their juxtaposition.

  • Desire to perform extracurricular research and possession of a solid work ethic

  • Basic technical skills, including organization of literature using a citation manager like Mendeley

Explorative / Mentored

Explorative projects further the agenda of an Investigative one, adding an applied element by way of simulations, experiments, or other human-subject data and analysis. Although it is not required to have first completed an Investigative lead-in to the Explorative level, the two generally follow in sequence. This is also the level at which students are free to propose their own projects and work in the lab under Dr. Forney's mentorship, even if the project is not strictly in the domains of the Mission above (assuming the proposed project is within Dr. Forney's realm of expertise).

Example: Creating an Amazon Mechanical Turk experiment to determine how humans reason counterfactually in confounded decision-making scenarios.

  • All requirements at previous level.

  • Intermediate programming skills, depending on the project -- generally Sophomore level and higher.

  • Intermediate mathematical maturity, capable of analyzing data, performing statistical analysis, and graphing / reporting findings.


Integrative

Integrative projects... well... integrate all of the above: they are technically challenging, demand interdisciplinary insights, and require advanced skillsets that demonstrate mastery of foundational topics in AI / ML as well as the adjacent possible beyond them. Preference at this level is given to projects that explore interdisciplinary endeavors, and in particular, to those that involve some measure of Causal or Counterfactual Inference.

Example: designing a reinforcement learning agent that advises humans or participates within a confounded decision-making scenario.

  • All requirements at previous level.

  • Mathematical maturity and familiarity with concepts of CMSI 3300 (Artificial Intelligence) and 4320 (Cognitive Systems Design).

  • Ability to commit 5 - 8 hours per week to projects and to work independently.


If you're interested in joining the lab, you should perform the following steps:

  1. Use the "Judging Interest" resources to see if the lab's work is right for you and, if so, what topics excite you; OR, put together a compelling project proposal for something ad hoc you'd like to explore under mentorship.

  2. Examine the "Project Types" in the section above, and determine which project type best matches your background, goals, and interests.

  3. Send an email with all of the above to Dr. Forney.

Disclaimer: depending on your interests, skillset, and current lab size, you may or may not be able to join the ACT Lab upon application.

That said, as existing students graduate or complete projects, new spots and projects may open up, to which you can reapply.

Student Projects

A number of LMU's best and brightest have participated in ACT Lab projects, whose descriptions can be found below by clicking on each yellow project box.

Axel Browne

Exploiting Causal Structure for Transportability in Online, Multi-Agent Environments. [Published at AAMAS-2022]

Autonomous agents may encounter the transportability problem when they suffer performance deficits from training in an environment that differs in key respects from that in which they are deployed. Although a causal treatment of transportability has been studied in the data sciences, the present work expands its utility into online, multi-agent, reinforcement learning systems in which agents are capable of both experimenting within their own environments and observing the choices of agents in separate, potentially different ones. In order to accelerate learning, agents in these Multi-agent Transport (MAT) problems face the unique challenge of determining which agents are acting in similar environments, and if so, how to incorporate these observations into their policy. We propose and compare several agent policies that exploit local similarities between environments using causal selection diagrams, demonstrating that optimal policies are learned more quickly than in baseline agents that do not. Simulation results support the efficacy of these new agents in a novel variant of the Multi-Armed Bandit problem with MAT environments.

Andrew Seaman, Joey Ortiz, Nick Morgan, & Tigerlily Zietz

Schedulion: NET Prediction and Scheduling for LMU Athletics

Scheduling application built for the LMU athletics department that uses machine learning from KenPom data to predict team NET rankings, matchups with other teams, and a variety of scheduling mechanics to achieve a high strength of schedule.

Ameya Mellacheruvu, Moriah Scott, Sophia Mackin, Keziah Rezaey, & Thomas Bennett

Beaker: Interdisciplinary Research Connections Made Easier

Web application used to make interdisciplinary research at the university level simpler across department lines.

Elena Martinez, Veronica Pecker-Peral, & Andrew Seaman

Source-specific Biases and Predictors in Online Political News

Examines the propensities of online news readers from different political affiliations to both seek and rate articles as biased as a function of the reported source.

Madelyn Louis, Li Ying Tan, Haley Mech, & Kaitlyn Behrens

Simulated Game Balancing in Settlers of K'tah

Balancing strategic video games can be challenging, especially when there are multiple interworking mechanics. A common way that developers ensure that games are well-balanced is through playtesting, where the game is released to small groups of players who provide feedback on the gameplay. However, playtesting can be resource-intensive, since it often requires many test players and the results are hard to judge objectively. This research sought to determine the value of programmatic simulations, as opposed to playtesting, for determining optimal game balance. The examined game, Settlers of K'tah, is a strategy game wherein players compete to collect resources, build armies, and fight off a zombie horde. To ensure that all victory paths in the game are equally difficult to achieve, Python simulations were written to tweak various in-game parameters such as building costs and battle outcome probabilities. Then, simulated agents competed against each other and the results of each game were collected, including how many points each player earned, the victory path taken, and the duration of the game. The simulation results were evaluated and different parameter settings were assessed based on how evenly-matched the games were. This led to a set of parameterized values that provided the optimal game balance for Settlers of K'tah. The results showed that simulations are a working method for determining game balance, but playtesting is still necessary for determining overall user satisfaction. In the future, agents that employ different gameplay strategies could be incorporated, as opposed to just the greedy approach.

Saad Salman

Recommender Systems in Confounded Decision-Making Scenarios

The day-to-day decisions of people are often affected by Unobserved Confounders (UCs) which result in uncontrolled variations to human decision making and its outcomes. Although seemingly innocuous, there exists ample evidence that important decisions are subject to the influence of these confounding factors, like two physicians who would choose to treat the same patient in different ways. This may result in poor decision making that escapes detection from any dataset. The recommender system literature lacks research into the formalization of such confounded decision-making scenarios which the present study attempts to provide and assess. In this study, we suggest that the use of artificial recommendation systems will make progress in mitigating confounded decision making. Using word-association quizzes, we construct a reinforcement learning task in which the intuitive answer is always incorrect, and then observe the effect of a recommender system in correcting these intuitions. Performance across a number of recommender policies is discussed alongside implications for decision-assistance software.

Booker Martin

Measuring Human Perceptiveness in Identifying Deep-Faked Images

Generative Adversarial Networks, or GANs, build upon the foundations of deep learning by introducing an "adversarial" / "discriminator" network that contrasts sample data with generated data, pushing the generative network to yield realistic results. With NVIDIA's addition of open source "style transfer" models, programs that generate realistic images are accessible to anyone. This technology creates real-world consequences, such as the abuse of generated facial images online through fake social media accounts. Yet, there is little research that focuses on the human ability to determine whether an image is real or generated and which types of fake images are more convincing. In this work, we document this ability through a survey that identifies how accurately participants can distinguish between generated and real images and which categories of images they are more likely to identify. In addition, the survey asks "cognitive reflection" questions to determine whether each participant relies more heavily on instinct or thoughtful reflection. Finally, participants are prompted to provide relevant demographic information. Our data should provide new insight on human accuracy to spot fakes within specific image categories and any correlations between one's reliance on instinct versus reflection and one's ability to identify fakes.

Kira Toal & Veda Ashok

Challenging Human Instinct: Dynamic Difficulty Adjustment in Video Games

Dynamic Difficulty Adjustment (DDA) is the process of automatically adjusting the parameters of a video game based on the performance of a player to adapt the game's difficulty level to the player's changing abilities, and with the intention of keeping players engaged. Past studies have focused on understanding the design requirements for an effective DDA system in the context of keeping game players engaged and motivated. This study observes if changing the elements of a game's environment after a game player has already acclimated to the game creates a more challenging experience and potentially leads to cycles of adaptation throughout the playthrough. This research examines data collected from players of a 2-dimensional side-scroller game using responses to a post-play survey. The objective of the game is to jump onto moving platforms to (1) stay alive and (2) collect the coins placed on each platform. The game system consists of a metapolicy which chooses between various subpolicies. A subpolicy defines the pattern of the moving platforms presented in the game; more specifically, what the lengths and heights of the platforms will be. The game agent modifies the environment through off-policy reinforcement learning and causal inference in order to challenge game players' natural instincts with unpredictability. The agent evaluates the player's acclimation to a subpolicy based on the player's scores before switching to another subpolicy, and then measuring how the player performed after the subpolicy switch. The results and implications for interactive environments are discussed.

Lucille Njoo, Manny Barreto, Cooper LaRhette, Jenna Berlinberg, Tyler Ilunga, & Masao Kitamura

Briefcase: Intelligent Case Management

Briefcase is an interdisciplinary effort with the LMU Law School's Project for the Innocent. The Project provides free legal counsel to those who believe they have been wrongfully convicted, at which point the team reviews their case to attempt exoneration. The problem: the case briefs are anything but brief, sometimes 2000+ pages in length, and place heavy demands on the small team sorting through them. The Project has over 1000 cases in its queue and is realistically able to process 30 a year; some convicts may therefore die before their case has been reviewed, and so the team needs tools to help process these massive trial transcripts. Our elite team of Lucille Njoo and Manny Barreto is developing an app (Briefcase) that will provide Assisted Intelligence solutions to parse, label, annotate, collaborate on, and otherwise help streamline the Project's consumption of input trial transcripts. The result may help increase throughput and lead to more exonerations of those wrongfully imprisoned.

Patrick Utz & Mohammad Hayat

Washington Abstract

This capstone-turned-startup app, Washington Abstract, is aimed at providing legislative transparency and advocacy on recent bills and active politicians at the federal level. My particular research interest is in the application of what I call the "Legislative Genome," a feature of Washington Abstract aimed at forming a graphical causal model linking lobbies, politicians, and the bills they support (plus the language they add to those authored). The endeavor is an ambitious one that may take some time to reach fruition, but seems particularly important in the current political climate, with growing distrust of representatives and the money behind them. I am working with students / co-founders Patrick Utz and Mohammad Hayat on a growing team that is also supported by Dr. David Choi in the School of Business.

Kira Toal & Veda Ashok

Dynamic Difficulty Adjustment in Adversarial Games

Designing challenging but replayable video games gives players added value in their purchase and experience, but can be a difficult balance to strike as players become acclimated to the in-game obstacles and adversaries they encounter. Adversarial environments in typical video games tend to either be fixed or utilize randomized policies that can be exploited by experienced players. Even machine learning techniques that are overfit to challenge the average player do not necessarily account for players of a particular type (such as those who have quick reaction times), or players who employ particularly exploitative strategies. To combat these limitations, this research aims to build an adversary that understands players' instinctive reactions to game states and uses that understanding to dynamically challenge the player as the game is played. This project attempts to characterize different player types and discover adversarial strategies that directly challenge and adapt to their play style. This is accomplished through a 2D bullet-hell, platformer style video game that employs techniques from reinforcement learning (a variant of SARSA) and causal inference (empirical counterfactuals). This project is being developed by stellar students Veda Ashok and Kira Toal and represents another interdisciplinary pursuit combining animation and computer science.

Lucille Njoo

Harmony: A Self-Harm Prevention App

Harmony represents an interdisciplinary effort between Computer Science and Psychology to provide users with relief and distraction from habits of self-harm. It provides a number of gamified, but research-supported, cognitive-behavioral approaches that let users consult resources to help the urge pass.

Michael West

Cognitive Agents in the Iterated Prisoner's Dilemma

The Prisoner's Dilemma (PD) is a simple game that serves as the basis for research on social dilemmas in a variety of fields. It is a two-player, general-sum game often parameterized by a payoff tuple (R, P, S, T). Each player is given the option of one of two actions: cooperation (C) or defection (D). Mutual cooperation pays each player R, mutual defection pays each P, and unilateral defection pays the defector T and the cooperator S. The game is modeled such that T > R > P > S and 2R > T + S. From a game-theoretic perspective, the best option for each player is always to defect, which results in a socially deficient Nash Equilibrium. A traditional PD is a one-round game, but an Iterated Prisoner's Dilemma (IPD) is a sequence of PDs between the same two players, often studied to understand the effects of previous actions and the emergence of mutual cooperation. This project examines the deployment of new causal, counterfactually regret-minimizing agents in the IPD to examine their efficacy in an ecology against repeated opponents.
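
The payoff structure described above can be sketched in a few lines of Python. This is only an illustrative toy, not the project's agents: the canonical values (T=5, R=3, P=1, S=0) and the tit-for-tat and always-defect strategies are standard textbook assumptions.

```python
# Minimal one-shot and iterated Prisoner's Dilemma sketch.
# Canonical payoffs satisfying T > R > P > S and 2R > T + S.
C, D = "C", "D"
T_, R_, P_, S_ = 5, 3, 1, 0

# Payoff matrix: (row player's payoff, column player's payoff).
PAYOFFS = {
    (C, C): (R_, R_),  # mutual cooperation
    (C, D): (S_, T_),  # sucker vs. temptation
    (D, C): (T_, S_),
    (D, D): (P_, P_),  # mutual defection: the deficient equilibrium
}

def play(a1, a2):
    return PAYOFFS[(a1, a2)]

def iterated_pd(strat1, strat2, rounds=10):
    """Run an IPD; each strategy sees only the opponent's last move."""
    h1 = h2 = None
    s1 = s2 = 0
    for _ in range(rounds):
        a1, a2 = strat1(h2), strat2(h1)
        p1, p2 = play(a1, a2)
        s1, s2 = s1 + p1, s2 + p2
        h1, h2 = a1, a2
    return s1, s2

tit_for_tat = lambda opp_last: C if opp_last in (None, C) else D
always_defect = lambda opp_last: D
```

Defection dominates any single round, yet two tit-for-tat players earn 2R per round between them, which is why the iterated setting rewards the mutual cooperation the abstract describes.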

Benjamin Kern

Applications of Superposition in POMDPs

Markov Decision Processes (MDPs) are discrete, mathematical formulations used to simulate generalized sequential decision making. An agent acting within an MDP must optimize expected future reward by deciding which action to take given the current state. Partially Observable MDPs (POMDPs) are MDPs in which the state is not known by the agent; instead, the agent must act upon observations given by the environment at each time step. This work presents a novel solution that is more sample-efficient than traditional methods for fully-specified POMDPs, viz. when transition probabilities between states as well as observations are given to the agent. Currently, traditional approaches for solving POMDPs require iterative learning processes that converge slowly when not exploiting opportunities for linear parallelism. Consequently, we present a closed-form solution that is derived algebraically from traditional iterative update rules. Solving this closed-form solution yields an accelerated learning rate that enables a jump start unavailable to traditional iterative methods. Simulation results support the efficacy of this method on traditional POMDPs. Additionally, applied and theoretical implications of this method are discussed.
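
As background for the abstract above, the standard POMDP belief update (the Bayes-filter step by which an agent tracks the hidden state from observations) can be sketched as follows. The two-state toy model and all probabilities here are illustrative assumptions, not taken from this project.

```python
import numpy as np

# Belief update in a fully-specified POMDP:
#   b'(s') ∝ O(o | s') * sum_s T(s' | s) * b(s)
# (action index suppressed for brevity in this single-action toy).

T = np.array([[0.9, 0.1],   # T[s, s'] = P(next state s' | state s)
              [0.2, 0.8]])
O = np.array([[0.7, 0.3],   # O[s', o] = P(observation o | state s')
              [0.4, 0.6]])

def belief_update(b, o):
    """One Bayes-filter step: predict through T, weight by O, normalize."""
    predicted = b @ T                 # sum_s T(s'|s) b(s)
    unnormalized = O[:, o] * predicted
    return unnormalized / unnormalized.sum()

b0 = np.array([0.5, 0.5])             # uniform prior over the two states
b1 = belief_update(b0, o=0)           # observation 0 favors state 0
```

Traditional point-based or iterative POMDP solvers repeat updates like this many times; the abstract's contribution is replacing such iteration with a closed form when T and O are fully known.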

Lauren Alvarez & Sofia Ruiz

Bias Clustering for Online Political Articles

This project examines trends of biased language in articles on similar topics across a variety of traditionally polarized news outlets, providing a metric of valence.

Alejandro Zapata

DunGen: Causal Inference in Procedural Dungeon Generation

DunGen provides a Dungeons and Dragons room-layout generator based on a number of preset room types and features, which can later be tweaked by users with tools that span associational, causal, and counterfactual relations between features.

Juan Neri

Fusions of Deep and Causal Reasoning for Autonomous Poker Players

This project examines the fusion of deep learning networks for facial-cue recognition as components of larger structural causal models, for the purpose of reading players' tells to update belief states in an autonomous Texas Hold'em Poker player.
