Although you can trace many of the general principles behind them back hundreds of years, synthesisers as we usually understand them – instruments that create sound electronically – have only been around since the mid-20th century. In that time, somewhere around 60 or 70 years depending on what you count as the first true synthesiser, the technology has come on in almost unimaginable leaps.
On one level, the evolution of synthesiser technology goes hand-in-hand with advances in the wider world of tech. Bob Moog's first commercial synthesisers were launched into a world where computers were only just moving from vacuum tubes to integrated circuits, five years before the first moon landing. The decades that followed saw rapid advances in computing power, miniaturisation, and speaker, screen and interface design, all of which have shaped hardware and, latterly, software synthesisers. In the 2020s, the cutting edge of synthesiser design is occupied by many of the same trends as wider consumer technology: wireless and portable design, cloud connectivity, machine learning and the potential of artificial intelligence.
There is something unique about the realm of synthesisers, though, in that users and designers alike maintain a misty-eyed attachment to the designs and technologies used in those earliest commercial synths. This is analogous to cinema or photography, where certain practitioners stringently persist with working practices rooted in film, despite the obvious convenience of digital photography, or music listening, where the resurgence of vinyl flies in the face