The use of algorithms holds promise for overcoming human biases in decision making. Companies and governments are using algorithms to improve decisions about hiring, medical treatment, and parole. Unfortunately, as with humans, some of these algorithms make persistently biased decisions, functionally discriminating against people based on their race and gender. Media coverage suggests that people are morally outraged by algorithmic discrimination, but here we examine whether people are less outraged by algorithmic discrimination than by human discrimination. Six studies test this algorithmic outrage asymmetry hypothesis across diverse forms of hiring discrimination (sexism, ageism, racism) and across diverse participant groups (online samples, a quasi-representative sample, and a sample of tech workers). As predicted, people are less morally outraged by algorithmic discrimination than by human discrimination. The studies further reveal that this algorithmic outrage asymmetry is driven by the reduced attribution of prejudicial motivation to machines. We also reveal a downstream consequence of algorithmic outrage asymmetry: people are more likely to endorse racial stereotypes after algorithmic discrimination than after human discrimination. We discuss the theoretical and practical implications of these results, including the potential weakening of collective action to address systemic discrimination.