Can Bias Be Built into AI?

We put the question to AI itself:

“Can bias be engineered into you?”

No, bias cannot be engineered into me as an AI language model, such as GPT-3.5. Bias in AI typically arises from the data used to train the model and the way it is trained, rather than being explicitly engineered into the model.

Bias in AI can occur in several ways:

  1. Data Bias: If the training data used to build the model contains biased or unrepresentative examples, the model can learn and perpetuate those biases. For example, if the training data contains gender or racial bias, the model may produce biased outputs related to those topics.
  2. Training Process: The way a model is trained, including the choice of loss functions, data preprocessing, and fine-tuning methods, can also introduce bias unintentionally. Developers need to carefully design and monitor the training process to minimize bias.
  3. User Inputs: Biased or inappropriate inputs from users can result in biased responses from the model. Developers can implement filtering mechanisms to prevent certain types of biased content from being generated.

It’s important to note that efforts are made to reduce bias in AI models during development, and many organizations are working to improve the fairness and transparency of AI systems. However, complete elimination of bias is a complex challenge, and it requires ongoing research, development, and ethical considerations.

Engineers and developers should be aware of potential biases and work to mitigate them through data selection, model design, and continuous evaluation. It’s a shared responsibility to ensure that AI systems are as unbiased and fair as possible.
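To make the “Data Bias” point in that answer concrete, here is a minimal sketch, in Python with an entirely hypothetical toy corpus, of how an engineer might audit training text for skewed representation before a model ever learns from it:

```python
from collections import Counter

# Hypothetical toy "training corpus" -- real corpora contain billions of
# documents, but the auditing idea is the same.
corpus = [
    "The engineer fixed the server. He restarted it afterwards.",
    "The nurse checked the chart. She updated the dosage.",
    "The engineer reviewed the design. He approved the change.",
    "The engineer wrote the patch. He deployed it at noon.",
]

# Count how often each occupation co-occurs with a gendered pronoun.
# A heavy skew here is exactly the kind of "data bias" the answer above
# describes: the model would learn the association, not invent it.
pairs = Counter()
for doc in corpus:
    words = doc.lower().replace(".", "").split()
    for occupation in ("engineer", "nurse"):
        if occupation in words:
            for pronoun in ("he", "she"):
                if pronoun in words:
                    pairs[(occupation, pronoun)] += 1

for (occupation, pronoun), count in sorted(pairs.items()):
    print(f"{occupation!r} co-occurs with {pronoun!r}: {count}")
# In this toy corpus 'engineer' pairs with 'he' three times and never
# with 'she' -- a model trained on it would inherit that skew.
```

Real auditing tools are far more sophisticated, but the principle is the same: the bias is measurable in the data before it ever becomes behavior in the model.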

With this information in hand, we can see that AI language models can have “filters” built into them that prevent them from giving certain answers in the first place, such as stock market predictions.
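Production systems implement such guardrails with trained classifiers and human-feedback tuning rather than keyword lists, but a toy filter, sketched below in Python with all names hypothetical, shows the basic shape: certain answers are simply never generated.

```python
import re

# Hypothetical refusal filter. A real deployment would use a trained
# classifier plus policy tuning, not regular expressions, but the effect
# the user experiences is the same.
BLOCKED_PATTERNS = [
    re.compile(r"\b(stock|share price|ticker)\b.*\b(predict|forecast)\b", re.I),
    re.compile(r"\b(predict|forecast)\b.*\b(stock|share price|ticker)\b", re.I),
]

REFUSAL = "I can't provide financial predictions."

def filtered_answer(prompt: str, generate) -> str:
    """Return the model's answer, unless the prompt trips a filter."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            return REFUSAL          # the model is never even consulted
    return generate(prompt)

# Usage with a stand-in "model":
echo_model = lambda p: f"[model answer to: {p}]"
print(filtered_answer("Predict tomorrow's stock price for ACME", echo_model))
print(filtered_answer("Summarize Luke 17:11-19", echo_model))
```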

Data Bias is of particular importance to us because the entire body of human language in the world is, unsurprisingly, humanistic. That is, taken collectively, the human language of the world is biased toward the “betterment of humanity” rather than the expansion of the Kingdom. Kingdom language, though it has spread worldwide, occupies only a tiny fraction of human language in the 21st century.

A good illustration of this bias can be seen in the story of Jesus healing ten lepers in Luke 17:11-19. Of the ten who benefited from the power of Christ’s Kingdom, only one returned to give thanks. Taken as a model, this would suggest that 90% of the time human language is not thankful to God but humanistic, selfish, or even narcissistic. An AI language model trained on that body of language will “direct” people down just such a path if they use it to find their way. Yet for logic, order, and mathematics, AI language models may be the most powerful tool the Church has yet had for unfolding the Logos, a word from a root meaning to put in order, to arrange.
