The role of the prompt engineer is one that didn’t exist until the emergence of generative AI. Their job is to engineer the best ways of querying a generative system so that it produces a desired output. For instance, they might query a text-to-text system such as ChatGPT with a prompt like “write me a rap song using references to SpongeBob SquarePants”, so that the system outputs the desired lyrics; or they might query a text-to-image system like Midjourney with a prompt like “a labrador dog in a space suit on the moon, Cubism”, to obtain an image like the one shown in the figure below.

The prompt engineer earned their engineer status because of the techniques they need to develop in order to obtain meaningful results from systems of the complexity and (to some extent) unpredictable behaviour of today’s generative systems. The situation is such that you might come across blog posts, cheat sheets (like the one shown in the figure below), and even online courses devoted to training the aspiring prompt engineer in how to produce the best prompts for this or that system.
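To make the idea of prompt refinement a little more concrete, here is a minimal sketch of how a text-to-image prompt might be composed from reusable parts (subject, style, extra modifiers) and iterated on. The function and its parameters are purely illustrative assumptions, not the API of any real tool:

```python
def build_prompt(subject, style="", extras=None):
    """Assemble a prompt string from a subject, an optional art style,
    and optional extra modifiers (a hypothetical helper for illustration)."""
    parts = [subject]
    if style:
        parts.append(style)
    if extras:
        parts.extend(extras)
    return ", ".join(parts)

# First attempt: just the subject.
base = build_prompt("a labrador dog in a space suit on the moon")

# Refined attempt: add a style and a modifier after inspecting the output.
refined = build_prompt(
    "a labrador dog in a space suit on the moon",
    style="Cubism",
    extras=["high detail"],
)

print(base)     # a labrador dog in a space suit on the moon
print(refined)  # a labrador dog in a space suit on the moon, Cubism, high detail
```

In practice the “refinement” loop is driven by eyeballing the generated image and adjusting the parts, which is precisely the trial-and-error craft the cheat sheets try to systematize.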

Proportions (and degree of assertiveness) kept, prompt engineering feels a bit like querying an oracle in ancient times. Back in the old days of Ancient Greece, an oracle would respond to a supplicant’s query by, so to speak, consulting the gods, interpreting their answer and passing it on to the supplicant. Oracles acted as mediators between the divine and the mundane, trying to make sense of the complexity of natural events and human affairs. Prompt engineers are trained to curate and refine queries, and even to choose the right wording when querying a generative-AI system, so that the system’s output falls within the limits of what was expected. Their work is yet another example of man adapting to the machine. True artificial intelligence, in my opinion, should work the other way around, namely, the machine adapting to man, as I argue in this blog post.
The work of the prompt engineer might seem a bit arcane (worthy of cheat sheets and online courses endorsed by Andrew Ng), but that is only because generative-AI systems are not good at explainability, that is, at giving the reasons why their output looks the way it does. If we gave the prompts described above to a human being instead, say the first one to a writer and the second to an artist, then even if we didn’t get the desired results on the first try, we would be able to give concise directions, make use of analogies or physical descriptions, or bring existing work to our aid in order to converge on the desired result. Most importantly, the human on the other side of the table would be able to explain the choices they made, and would also be able to understand our motivations. These features, explainability and alignment (or empathy), are still lacking in modern AI systems.
Most probably, prompt engineering is one of the jobs that will disappear as AI gets better, as our interaction with AI systems comes to resemble more closely the interaction with another human being, and as the algorithms and components that make up these systems become more akin to natural intelligent systems. Maybe one day prompt engineers will be a thing of the past, just like oracles.