The widespread use of large language models (LLMs) such as ChatGPT, LLaMA, and LaMDA has the tech world wondering whether data science and software engineering jobs will at some point be replaced by prompt engineering roles, rendering existing teams obsolete. While the complete obsolescence of data science and software engineering seems unlikely anytime soon, there’s no denying that prompt engineering is becoming an important role in its own right. Prompt engineering blends the skills of data science, such as knowledge of LLMs and their unique quirks, with the creativity of artistic positions. Prompt engineers are tasked with devising prompts for LLMs that elicit a desired response. In doing so, they rely on some techniques used by data scientists, such as A/B testing and data cleaning, yet must also have a finely developed aesthetic sense for what constitutes a “good” LLM response. Furthermore, they need the ability to make iterative tweaks to a prompt in order to nudge a model in the right direction. Integrating prompt engineers into an existing data science and engineering org therefore requires some distinct shifts in culture and mindset. Read on to find out how the prompt engineering role can be integrated into existing teams and how organizations can make the shift toward a prompt engineering mindset.
While testing is a crucial part of every well-designed engineering process, it is especially important for prompt engineers, whose role essentially revolves around iterative refinement. The workflow of a prompt engineer often looks something like:
- Create a prompt
- Test the model’s response to the prompt
- Refine the prompt
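The loop above can be sketched in a few lines of code. This is a minimal illustration, not a real implementation: `call_llm` is a hypothetical stand-in for any LLM API client, and the check and refinement steps are placeholders for the judgment a prompt engineer actually applies.

```python
# A minimal sketch of the create/test/refine loop.
# `call_llm`, `passes_checks`, and `refine` are all illustrative stubs.

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM call (e.g., an API request).
    # Here it just echoes the prompt so the loop is runnable.
    return f"RESPONSE to: {prompt}"

def passes_checks(response: str) -> bool:
    # A "test" can be as simple as string checks, or as involved as
    # human review or automated scoring of the response.
    return "RESPONSE" in response

def refine(prompt: str) -> str:
    # Refinement is where the prompt engineer's judgment comes in;
    # appending an instruction is just a placeholder for that work.
    return prompt + " Be concise."

def iterate(prompt: str, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        response = call_llm(prompt)   # test the model's response
        if passes_checks(response):   # accept if it meets the bar
            return prompt
        prompt = refine(prompt)       # otherwise refine and retry
    return prompt

final_prompt = iterate("Summarize the quarterly report.")
```

In practice the check step is the hard part, which is why the testing culture described here matters so much.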
This differs from data science, where the focus is much more on exploration and on building data-grounded support for a hypothesis, and from software engineering, where the focus is on designing and building large-scale systems. Organizations that want to make room for prompt engineering should establish a culture of testing that supports the work prompt engineers do. By getting the entire organization on board with testing, barriers to effective work are minimized and communication is eased.
Promote a process-centered mindset
When working with prompt engineering, it’s important to monitor and understand the different business processes within which ML models operate. Because an LLM is often just one link in a chain of models, the effects of different model outputs, as controlled by prompts, can propagate downstream and alter the outputs of other models as well as broader business processes. Many engineering cultures operate with a product focus – hence the integration of product managers, designers, etc. within a team. Data science, despite being a fundamentally research-oriented discipline, still conducts research with an eye toward improving some aspect of the business and its products, so the product-centric mindset remains central there too. When making the shift toward prompt engineering, however, the process moves front and center. Since LLMs are often not user-facing, but geared toward automating some internal business process with a multitude of downstream implications, cultivating a process mindset becomes essential. Some concrete ways to establish a process-oriented culture include:
- Emphasizing the importance of model monitoring
- Focusing on testing (see above), even for tools that aren’t user-facing
- Advocating for prompt engineers to adopt a wide view and understand the entire workflow in which an LLM operates
Build a culture of creativity
Perhaps the most distinctive way in which prompt engineering differs from data science is in its creative spirit. While data science and software engineering often incorporate elements of creativity – for example, in coming up with new model architectures or considering different system designs – they are still constrained by the limitations of hardware and statistics. Prompt engineers, on the other hand, can get as creative as they want in how they design their prompts, and often need to! One common strategy in prompt engineering involves asking the LLM to adopt a persona in order to complete a task. For example, the prompt engineer might tell the LLM to act as if it’s a Shakespearean pirate penning a letter to his long-lost love or, more mundanely, an accountant giving tax advice to a customer. Sometimes changing the persona the LLM is asked to embody can improve results. Similarly, a prompt engineer might need to engage in a longer-form dialogue with the LLM in order to nudge it toward a solution. Even then, the responses from an LLM might not be exactly what the user is looking for. In this case, prompt engineers will often draw up more structured templates and allow the LLM to, quite literally, fill in the blanks. In other instances, prompt engineers engage the model in few-shot learning: they provide the LLM with a small collection of training examples up front, within the prompt itself, and ask the model to generalize from those examples to new instances. A classic example can be found in OpenAI’s GPT API documentation, which includes a prompt asking a model to represent movie titles using only emojis.
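The few-shot and fill-in-the-blank patterns can be combined in a single prompt template. The sketch below mirrors the movie-titles-to-emoji task mentioned above; the example pairs and the template wording are illustrative, not taken verbatim from any documentation.

```python
# A sketch of few-shot prompting: example input/output pairs are placed
# directly in the prompt, and the model is asked to generalize from them.
# The example pairs below are illustrative, not authoritative translations.

EXAMPLES = [
    ("Back to the Future", "👨👴🚗🕒"),
    ("Batman", "🤵🦇"),
    ("Transformers", "🚗🤖"),
]

def build_few_shot_prompt(new_title: str) -> str:
    lines = ["Convert movie titles into emoji."]
    for title, emoji in EXAMPLES:
        lines.append(f"{title}: {emoji}")
    # The trailing blank after the colon is what the model "fills in".
    lines.append(f"{new_title}:")
    return "\n".join(lines)

prompt = build_few_shot_prompt("Spider-Man")
```

The resulting string would then be sent to the model, which completes the final line in the style established by the examples.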
Creativity is difficult to define and even tougher to cultivate. However, some general guidelines serve well in building a culture of creativity. Creative people in general, and prompt engineers in particular, thrive in an environment with less direct management and greater freedom. Also key is setting open-ended objectives and broad guidelines while giving prompt engineers the space to exercise their creativity with regard to the details. Finally, encouraging creative expression in other areas, such as personality, dress, and work environment, can allow creative individuals to open up and work in the way that best suits them.
Seek out similarities and enable experimentation
While it might feel like data scientist, software engineer, and prompt engineer are highly disparate and siloed roles, they actually have a lot in common, and creating a culture that integrates and supports all three need not require a tremendous overhaul of existing structures. First of all, each of these roles is somewhat interdisciplinary. The data scientist should know something of the software engineer’s work in order to help integrate their models into existing software infrastructure. Similarly, they should understand the goals the prompt engineer has in using their LLMs, even if they don’t have the exact roadmap for getting there. Software engineers sit directly at the interface of the other two roles, as they develop ways to serve models in software and deliver the prompt engineers’ prompts to the models. Finally, while the prompt engineer will likely not be as technical as the data scientist or the software engineer, they should know something of LLM architecture in order to better design prompts, and something of software engineering in order to understand how their prompts fit into the larger software process and structure.
All three disciplines also involve varying degrees of testing and experimentation. Data scientists are probably the most familiar with running rigorous A/B tests and designing structured experiments to validate (or discredit) their hypotheses. Software engineers often need to experiment with different architectures before settling on one that works. And prompt engineers often engage in less rigid, yet still valuable, experimentation when tweaking their prompts to get the desired result. Enabling a culture of experimentation, and encouraging mistakes on the way to success, is an easy step any organization can take, and one that will yield immediate benefits for all three roles.
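An A/B test of two prompt variants can borrow directly from the data scientist's toolkit. The sketch below is a simplified illustration: the `score` function is a hypothetical stand-in for whatever quality measure the team uses (averaged human ratings, an automated metric, etc.), and here it just produces random values so the code runs end to end.

```python
# A minimal sketch of A/B testing two prompt variants.
# `score` is an illustrative stub; in practice it would be human
# ratings or an automated quality metric, not random numbers.
import random

random.seed(0)  # fixed seed so the illustration is reproducible

PROMPT_A = "Summarize this article in one sentence."
PROMPT_B = "You are an editor. Summarize this article in one sentence."

def score(prompt: str, example: str) -> float:
    # Stand-in for a real per-example quality score in [0, 1].
    return random.random()

# A held-out set of inputs to evaluate both variants on.
examples = [f"article {i}" for i in range(20)]

mean_a = sum(score(PROMPT_A, ex) for ex in examples) / len(examples)
mean_b = sum(score(PROMPT_B, ex) for ex in examples) / len(examples)

winner = PROMPT_A if mean_a >= mean_b else PROMPT_B
```

A real test would also check whether the difference in means is statistically meaningful before declaring a winner, which is exactly where the data scientist's experience comes in.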
Ultimately, the goal of prompt engineering is to elicit the best possible results from LLMs, and the role introduces unique cultural needs and strategies. Organizations looking to capitalize on prompt engineering should invest in key value-adds such as testing, monitoring, and a process-centered business mindset, thereby enabling prompt engineers to do their best work and extract optimal value from LLMs.
We'd love to hear your thoughts! Feel free to contact us if you want to discuss how your organization can prepare for the cultural shift from data science to prompt engineering.