The basic idea of a transducer is that it turns one sort of quantity, its inputs, into another, its outputs. The general case to consider is one where both inputs and outputs are time series, and the current output is a function not just of the current input but of the whole history of previous inputs; that is, the output is a ‘functional’ of the input series. One way of representing this is to say that the transducer itself has internal or hidden states: the output is a function of the transducer’s current state, and that hidden state is in turn a function of the input history. We don’t have to do things this way, but it can give us a better handle on the structure of the transduction process, and especially on the way the transducer stores and processes information, i.e., what (to anthropomorphize a little) it cares about in the input and remembers about the past.
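The hidden-state picture can be sketched in a few lines of code: the state is updated from each input, and the output is read off the state alone. The class and function names here are illustrative, not anything from the text.

```python
# Minimal sketch of a state-based transducer: the hidden state summarizes
# the input history, and the output is a function of the state alone.
class Transducer:
    def __init__(self, initial_state, update, output):
        self.state = initial_state
        self.update = update    # next state = update(state, input)
        self.output = output    # current output = output(state)

    def step(self, x):
        """Consume one input symbol, return one output symbol."""
        self.state = self.update(self.state, x)
        return self.output(self.state)

# Example: the hidden state is the running sum of the inputs, and the
# output is the parity of that sum -- a function of the whole history,
# but computable from a single stored number.
t = Transducer(0, lambda s, x: s + x, lambda s: s % 2)
print([t.step(x) for x in [1, 1, 0, 1]])  # [1, 0, 0, 1]
```

The point of the example is that the state need not record the history itself, only as much of it as the output ever depends on.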
Finite-state transducers are a computer science idea; they are also called ‘sequential machines’, though I don’t see why that name wouldn’t apply equally well to many other things computer scientists study. In this case both the input and the output series consist of sequences of symbols from (possibly distinct) finite alphabets, and there are only a finite number of internal states. The two main formalisms, Moore machines and Mealy machines, can be shown to be equivalent; they differ only on whether the next output is a function of the current state alone (Moore) or of the current state and the next input (Mealy). While you can describe a huge chunk of electronic technology using this formalism, CS textbooks are curiously reluctant to give it much attention.
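The two formalisms can be sketched side by side; the particular machines and names below are illustrative, not from any standard reference. In one, the output is attached to the state; in the other, to the transition. Both machines here emit 1 exactly when the last two inputs were both 1.

```python
def run_moore(transitions, outputs, state, inputs):
    """Moore machine: after each transition, emit the output attached to the state."""
    result = []
    for x in inputs:
        state = transitions[(state, x)]
        result.append(outputs[state])
    return result

def run_mealy(transitions, state, inputs):
    """Mealy machine: each transition yields both a next state and an output."""
    result = []
    for x in inputs:
        state, y = transitions[(state, x)]
        result.append(y)
    return result

# Moore version: three states counting consecutive 1s (capped at two).
moore_trans = {("q0", 0): "q0", ("q0", 1): "q1",
               ("q1", 0): "q0", ("q1", 1): "q2",
               ("q2", 0): "q0", ("q2", 1): "q2"}
moore_out = {"q0": 0, "q1": 0, "q2": 1}

# Equivalent Mealy version: two states suffice, because the output may
# also consult the incoming symbol.
mealy_trans = {("s0", 0): ("s0", 0), ("s0", 1): ("s1", 0),
               ("s1", 0): ("s0", 0), ("s1", 1): ("s1", 1)}

xs = [1, 1, 0, 1, 1, 1]
print(run_moore(moore_trans, moore_out, "q0", xs))  # [0, 1, 0, 0, 1, 1]
print(run_mealy(mealy_trans, "s0", xs))             # [0, 1, 0, 0, 1, 1]
```

The two runs agree on every input sequence, which is the sense in which the formalisms are equivalent; note that the Mealy version gets by with fewer states, since part of the bookkeeping is moved onto the transitions.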