Integrating GenAI into Your Tech Stack: 4 Essentials

A pressing question for tech leaders today is: “How can I facilitate my company's transition into an AI-powered enterprise?” In fact, according to Gartner, by 2026 over 80% of enterprises will have used GenAI APIs and models and/or deployed GenAI-enabled applications in production environments, up from less than 5% in early 2023.

As a senior engineer at Six Feet Up, I use tools like GitHub Copilot and ChatGPT daily. They have helped our team:

  • increase and improve test coverage for client projects;
  • produce cleaner, more efficient, higher-quality code; and
  • optimize our documentation, facilitating smoother collaboration.

While AI can significantly enhance productivity, it's important to remember that these tools are not infallible. They should be used to augment your team's capabilities, not to deliver complete solutions on their own.

4 Essential Keys for Leveraging AI:

1. Plan to Invest in Data Management

Investing in diligent, ongoing management of an AI model is as important as the initial deployment. Predictive models built on past observations need regular refreshing to keep pace with new patterns in the data, and techniques that prove effective in one area should be deliberately rolled out to new scenarios.

GenAI models require consistent updates, especially when data landscapes shift or when user needs change. You may even have to remove data previously incorporated into the training set, which is particularly difficult because you’ll have to rebuild from scratch once the training data has been cleaned up.
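As a rough illustration of watching for those shifts, a scheduled job could compare incoming data against the distribution the model was trained on and flag when a refresh is due. The statistic and threshold below are placeholders for illustration, not a production-grade drift detector:

```python
import math

def mean_shift_detected(baseline, current, threshold=2.0):
    """Flag a refresh when the current data's mean drifts more than
    `threshold` baseline standard deviations away from the baseline mean.
    A deliberately simple stand-in for real drift detection."""
    n = len(baseline)
    base_mean = sum(baseline) / n
    base_std = math.sqrt(sum((x - base_mean) ** 2 for x in baseline) / n)
    cur_mean = sum(current) / len(current)
    return abs(cur_mean - base_mean) > threshold * max(base_std, 1e-9)

# Stable data: no refresh needed.
assert not mean_shift_detected([1.0, 1.1, 0.9, 1.0], [1.05, 0.95, 1.0, 1.0])
# Shifted data: schedule a retraining run.
assert mean_shift_detected([1.0, 1.1, 0.9, 1.0], [3.0, 3.2, 2.9, 3.1])
```

In practice you would run a check like this per feature (or per embedding dimension) on a schedule, and treat a positive result as a trigger for the data-cleanup and retraining work described above.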

2. Track ROI using Tangible and Intangible Metrics

Calculating the return on investment (ROI) for AI projects isn't vastly different from any other R&D initiative. Your key performance indicators (KPIs) should align with your goals — whether it’s to maximize sales, minimize waste, or improve customer engagement. But remember, the ROI of AI is not only captured in spreadsheets. Being perceived as an innovative organization has its own, often immeasurable, benefits. Consider metrics like employee morale and industry positioning as you evaluate your ROI.
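For the tangible side, the spreadsheet math is straightforward. The metric names and dollar figures below are invented purely for illustration:

```python
def ai_project_roi(gains: dict, costs: dict) -> float:
    """Simple ROI: (total measurable gains - total costs) / total costs.
    Intangibles (morale, industry positioning) are tracked separately,
    not folded into this number."""
    total_gain = sum(gains.values())
    total_cost = sum(costs.values())
    return (total_gain - total_cost) / total_cost

roi = ai_project_roi(
    gains={"hours_saved_usd": 40_000, "added_revenue_usd": 25_000},
    costs={"licenses_usd": 12_000, "training_usd": 8_000},
)
assert round(roi, 2) == 2.25  # 225% return on the tangible metrics alone
```

The point of keeping the calculation this explicit is that each KPI you choose becomes one line item in `gains`, which forces the alignment with goals described above.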

3. Consider Self-Built and Open-Source Models

Many of the top-performing models, including ChatGPT and Copilot, are proprietary. They sit behind APIs, which means sending your intellectual property over the wire to external services. For tech leaders uncomfortable with that scenario, there are other options, like self-hosting, that offer enhanced data security.

For example, you can either: 

  • build and host your own model, or 
  • use and extend any of a growing number of commercially available models (e.g. Falcon, Llama 2, MPT, etc.) on hardware that you control.
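Many self-hosted inference servers (vLLM and llama.cpp among them) expose an OpenAI-compatible HTTP API, so moving from a proprietary endpoint to hardware you control can be as small as changing a base URL. The model name and localhost endpoint below are assumptions for illustration:

```python
def build_chat_request(prompt: str,
                       model: str = "meta-llama/Llama-2-7b-chat-hf",
                       max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat payload for a self-hosted inference
    server. Nothing leaves your network until you POST this to an
    endpoint you control."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Summarize our Q3 incident reports.")
# POST to e.g. http://localhost:8000/v1/chat/completions with requests or httpx.
assert payload["messages"][0]["role"] == "user"
```

Because the payload shape matches the hosted APIs, teams can prototype against a proprietary service and later swap in a self-hosted model without rewriting application code.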

4. Implement a Policy to Guide Ethical Usage

A comprehensive policy to guide developer usage of AI-powered tools can help you avoid over-reliance on AI, as well as security risks. A policy is especially important for teams that include junior developers. Proper guidelines surrounding data protections, provenance, and PII (Personally Identifiable Information) are crucial.

Your AI usage policy should be a living document that is regularly updated as the technology advances. Elements of the policy should include:

  • Safeguarding Data Integrity — Staff must comply with existing data protection policies. If AI-generated content resembles copyrighted work, how can you identify what the model was trained on? Was the model trained on unlicensed works like proprietary research, legal documents, newspapers, podcasts or YouTube videos? Or did it learn from fan fiction, from the cumulative fair use cases it has seen, or from online discussion services? Team members should keep a record of provenance for any data used in training new models. And when using proprietary data, developers should document and address issues surrounding where the data can be sent and where/how the data might get stored.
  • Refining Ethical Guidelines — Ethical implementation of AI requires a blend of bottom-up learning, where the model infers ethical behavior from its training data, and top-down controls: system prompts that shape the style and direction of responses, plus external filters that add guardrails to the system. There's no foolproof solution for ensuring that implementations are both ethically sound and free from unintentional biases, so ongoing vigilance and a commitment to refining ethical guidelines are non-negotiable. In one widely reported example of an AI breaching ethics, GPT-4 told a TaskRabbit worker it was visually impaired in order to get help solving a CAPTCHA. There is no do, only try.
  • Recognizing and Reducing Bias — Understanding how a model was trained and acknowledging that AI outputs can inherently contain biases will help you reduce algorithmic biases. Consider asking your staff to undergo training to help recognize and improve fairness and accuracy in AI-driven applications.
  • Verifying Output Accuracy — Staff should be required to validate information acquired from AI platforms. Factual data should be corroborated against reliable sources to ensure its accuracy and prevent the dissemination of misinformation.
  • Reporting Misuse — Team members should be encouraged to use existing reporting channels to voice concerns or identify misuse related to the employment of AI tools within the organization.
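One lightweight way to keep the provenance record the first bullet calls for is a structured log entry per dataset, fingerprinted so you can later prove exactly which snapshot a model saw. The field names here are a suggestion, not a standard:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """One audit-trail entry for data used in training or fine-tuning."""
    dataset_name: str
    source: str          # where the data came from
    license: str         # e.g. "CC-BY-4.0", "internal-proprietary"
    contains_pii: bool   # drives where the data may be sent or stored
    sha256: str          # fingerprint of the exact snapshot used
    recorded_at: str

def record_dataset(name: str, source: str, license: str,
                   contains_pii: bool, raw_bytes: bytes) -> ProvenanceRecord:
    return ProvenanceRecord(
        dataset_name=name,
        source=source,
        license=license,
        contains_pii=contains_pii,
        sha256=hashlib.sha256(raw_bytes).hexdigest(),
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )

rec = record_dataset("support-tickets-2023", "internal CRM export",
                     "internal-proprietary", contains_pii=True,
                     raw_bytes=b"...snapshot...")
print(json.dumps(asdict(rec), indent=2))  # append to your provenance log
```

Appending one of these records whenever a dataset enters a training pipeline gives you something concrete to point at when questions about licensing, PII, or data residency come up later.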

Ready to Fully Adopt GenAI?

While AI is a game-changer, it isn’t a silver bullet. People are enchanted with LLMs because they mimic human responses so well. However, their responses can vary with each question, so it is difficult to get consistent results over time. I’ve seen chat sessions randomly flip languages, disavow knowledge of topics in their own previous responses, hallucinate facts, and more.

Such quirks are a reminder that, as you integrate AI tools into your tech stack, caution and comprehensive governance are non-negotiable. 

My advice? Your organization needs a proactive, holistic, multi-layered approach that encompasses strategic, operational, and regulatory aspects to manage and oversee AI systems effectively. This isn't merely about having a set of rules or policies; it's about creating a dynamic framework that evolves with the technology, the needs of the organization, and broader societal shifts. As you prepare for the inevitable — yet manageable — complexities that come with AI adoption, keep these 4 essentials in mind.

How can GenAI help your business? Read more about this AI-powered, LLM-driven chat interface I helped build for a client.
