
Gillian McCann | Getting Started with AI in the Digital Workplace

14 minutes read time
Published on Sep 20th, 2023
Written by Gillian McCann
Gillian McCann is the co-founder & CTO of Workgrid Software. Gillian is a leading expert in AI and Machine Learning as well as an AWS Hero.

In this episode of The Workgrid, we welcome Gillian McCann, co-founder & CTO of Workgrid Software. Gillian is a leading expert in AI and machine learning, as well as an AWS Hero. We'll explore what AI truly is, discuss the necessity of regulation, and provide practical insights for digital workplace and application leaders looking to introduce AI into their organizations.

Whether you're a tech-savvy professional or just curious about the future of technology, this episode offers a comprehensive look into the transformative power of AI. Don't miss this engaging conversation with one of the industry's most influential voices.

How about you tell us a little bit about yourself and some of the work you’ve been doing over the last few years?

I'm the co-founder and CTO of Workgrid Software. I've spent the last few years working on SaaS cloud architecture and conversational AI in the digital workplace. I am also an AWS Machine Learning Hero, which is a title awarded by AWS to a small global community of technologists. I have 20 years of experience working in technology and leadership roles, but I must confess I do love cloud computing and I do love chatbots.

Chat, AI, LLMs: these terms are everywhere. All the SaaS companies seem to be peppering them into their products, and you've been working on them for ages now. Could you briefly explain them for those who may not know?

AI is not a new technology. It is really the ability for machines or computers to show human-like intelligence in certain tasks, but it has evolved rapidly in the last few years, which really comes down to a couple of things:

  • The proliferation and range of data that's available out there

  • Highly scalable computing capacity

With these two things together, we're able to build more complex AI models. Like all AI, large language models (LLMs) are machine learning models, but they have been pre-trained on extremely large natural language datasets. Think billions, if not trillions, of words. The key difference is the size of the training data, which comes from scraping the internet, websites, social media platforms, and academic sources, so LLMs understand a wide range of language. At their core, they're generative AI models, which means they're able to create new content and new ideas across a wide range of media.

A key thing data scientists have found is that by increasing the size of language models, AI becomes more capable of general understanding than has ever been available before. It allows these models to perform well, out of the box, across a wide range of tasks. It has been compared to a Swiss Army knife of AI: able to do a wide range of tasks, but within limits, because out of the box they only have the knowledge they were trained on. You might hear things like "ChatGPT only knows up to the end of 2021 or 2022," so just be aware of that. Again, the goal of these models is to predict and generate text based on the datasets they have been trained on.

Depending on the type of data an LLM has been trained on, could this come with pros and cons?

I'd say they require a large range of knowledge, right? So, if you wanted to bring your own data so the AI can understand things specific to your company, you'd have to think about how you add that in. What we're seeing are emerging architectural and product design patterns. How do you bring your own content to the model? How do you leverage this massive model that someone else built by just refining it slightly? We'll probably get into this more, but you might have heard about prompt engineering. Techniques are evolving so that lots of people can take advantage of these models.
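The "bring your own content" pattern described here is often implemented by retrieving relevant company documents and placing them in the prompt itself, a pattern commonly called retrieval-augmented generation. Here is a minimal, hypothetical sketch in Python: the documents, the naive keyword-match retrieval, and the prompt wording are all invented for illustration, and a real system would send the resulting prompt to an LLM.

```python
# Hypothetical sketch of the "bring your own content" prompt pattern.
# Documents, retrieval logic, and prompt wording are illustrative
# stand-ins, not a real product's API.

def retrieve(question: str, documents: dict[str, str]) -> list[str]:
    """Toy retrieval: return documents sharing any keyword with the question."""
    words = set(question.lower().split())
    return [
        f"[{title}] {text}"
        for title, text in documents.items()
        if words & set(text.lower().split())
    ]

def build_prompt(question: str, documents: dict[str, str]) -> str:
    """Combine company-specific context with the user's question."""
    context = "\n".join(retrieve(question, documents)) or "No matching documents."
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

# Invented company documents standing in for intranet content.
docs = {
    "PTO policy": "Employees accrue vacation days monthly.",
    "IT guide": "Password resets are handled by the help desk.",
}
prompt = build_prompt("How do password resets work?", docs)
```

A real deployment would replace the keyword match with semantic search over embeddings, but the shape of the pattern (retrieve, then pack the prompt) is the same.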

How does the training process of an LLM work?

To take a step back, an LLM is a machine learning model, so there is a cleaning process that happens with the data; it's just at a much larger scale. But 100%, the datasets you use must be carefully considered. The more diverse and wide-ranging the sources you provide, the wider the understanding the model gains. And it's trained over months, which is why you and I aren't going to be able to train an LLM: we don't have the infrastructure, and we don't have that data. This is why we start hearing about foundation models. You have a core foundation on which products can be built. This is a very interesting area that is opening up and democratizing a lot of capabilities, allowing not just a small subset of data scientists but a much wider group to bring this technology to the forefront.

What is one takeaway you would like digital workplace owners and applications leads to know as they start to think about LLMs?

I think I would say how easy it can be to get started with this technology. It is available, and it can offer a lot of capabilities out of the box. AI and LLMs and bringing this technology into the workplace isn't just for super large businesses either; it's for smaller ones as well. It's not just for the elite or data scientists; it's accessible to everybody.

AI can potentially embed itself into the workplace. What are some of the ways you see AI being used in the workplace, and what's the temperature among CIOs?

I still think we are very early, but the thing I love seeing is the AI assistant technology. We're seeing a wide range of AI tasks like summarization, sentiment analysis, and translation really making it into employees' hands and everyday situations. Then on the prompt engineering side of things, with a bit of training and a bit of knowledge you can really use it as an assistant, like someone who can really help you with your work. I mean, think about it: if you had somebody sitting next to you who has read the entire contents of the internet, that knowledge could help you structure your ideas. I use it kind of like a buddy, bouncing ideas back and forth, helping structure thoughts. Summarization, to me, is a powerful feature. And coming from an engineering background, seeing code generation and the coding buddies in that space, I see a tool that can boost employee productivity.

Because I'm interested in conversational interfaces, I think about conversational search and just being able to interact and talk with the machine to get what you want. The end goal within Workgrid is to answer questions and get the end data in such a natural language experience that the end user doesn't care. It's so simple that it's magical. A lot of technology happens behind the scenes, but with natural language it's just that true assistant. I think we're starting to get there.

So, you don’t feel the sentiment that “the robots are coming to take our jobs”?

I think I'm an optimist at heart, but I am realistic as well. I've worked in engineering for 20 years, and my job has changed immensely with technology. I was writing Java code in Notepad and running a command-line compiler. So when people say, "oh, these tools are taking away knowledge," to me it's an assistant. It helps. But realistically, jobs will change. Some jobs will be made redundant while other jobs will be created. It would be wrong to say things are not going to change. I mean, I remember when someone first showed me Google. I'm showing you how old I am with these conversations, but what I'm really saying is the world evolves and we evolve with it. I do think that human ingenuity and adaptability over millennia show that we are capable of moving with the times. I like to be optimistic; it will change roles and it will create new things, but I can't predict the future, so let's think of the best.

From a product perspective, who’s going to win that race?

There are an immense number of startups, which is exciting, and on the one hand you see people who never thought they'd be able to build a product using this technology to help them innovate, and that's really exciting. But I also see a lot of startup products that are all quite similar. The winners in the short to medium term are probably those companies that already have a defined and established customer base and product-market fit: they really understand users' needs, and they can see how the AI capabilities that are now readily available can help their existing customer base. I think in the short term it's more of those companies.

The other advantage they may have is a clear policy around data, security, and privacy already in place that they can extend to cover AI use cases, where someone just starting out may not have thought that far ahead yet. I think it's a balance, but overall, those who can deliver a great user experience and are able to internalize data to improve their models will do very well.

We've seen the explosion of chatbots, and we've worked in the space for many years. When you're presented with a chat box, it's like, "what do you do?" So there are key design paradigms for conversational interfaces that we shouldn't throw away, like discoverability. It's not always just chat: it's also forms and buttons and different mediums and modalities that fit within the experience. Companies already in this space, leveraging technology that is available to everybody, may have that extra foot in the door.

What about tool builders - those putting together these experiences?

Tool builders are the people who enable others to build AI products. Look at Workgrid and our Workshop: that's really the idea there, to enable others to bring their own models to craft new experiences using AI and connect to different systems. If you can give those tools to other companies to build those experiences, I think that's definitely an area that will evolve as well.

Longer term, I think all bets are off. Who is going to win? I like to think it's those with the imagination: those who think beyond what we have available today to something that seems magical from the consumer perspective, the Jarvis of the world, that sort of wow moment. But I would also love to see real scientific breakthroughs and medical and educational advancements that open up the world of knowledge. Coming from a very small country, Ireland, tiny in comparison to the United States, being able to advance education in different areas of the world is something I'd like to see, and products being built around that.

How can employers and employees begin to look at upskilling themselves to work alongside AI and integrating this into their culture?

From an employer perspective, looking at how to integrate AI into workflows really means taking a step back and asking: where are your pain points? Where are the areas you think AI could really assist? It's not just throwing AI at your employees; it's really trying to think through those use cases. Once you have that, involve the employees, whether through focus groups or otherwise, to understand how AI can improve the employee experience itself. Come up with those use cases and pilot implementations in groups. Don't just roll it out all at once: do small, staged rollouts to build different experiences and experiment quickly, because where you think AI might be great, employees may find it helpful in a different area.

Things like summarization and content generation help you structure content like blogs. Trying to identify use cases around those is a good place to start, along with having some sort of strategy around it.

And be realistic. That's the other thing. Be realistic about what the outcomes can be, and involve employees to get that feedback. That should be standard as you build employee experiences; something to take into consideration.

If we are going to open up and enable employees to experiment with prompt engineering, we should have some structured training process involved.

We keep hearing about the need for AI regulation. What are some of your thoughts there?

When we think about regulation, we need to think about what we are regulating. What are the core concepts here? That's when you start to hear about things like responsible AI and ethical AI. Questions like:

  • What sort of data sets are being used?

  • What sort of bias may exist in these models?

  • Who's held accountable if your AI suggests something that could cause harm? The model provider? The developers who built on top of it?

  • Privacy and Data

There's a wide range of areas, but when you look at AI providers, they usually have a responsible AI policy. I think regulation, then, should be a consistent policy: everyone should be able to speak to these points about transparency. I do think there is a level of regulation, or standards, that companies should be held to. Particularly as you're buying an AI product, there should be a consistent set of questions that can be asked, or controls that can be assured, like a regulatory framework. I don't believe it needs to be statutory legislation.

I did spend time reading the UK government policy on this recently; I was just interested to see, because you hear a lot about what's being suggested. Statutory legislation is far too slow to go through the courts at the speed of the technology, so that's definitely not the approach being taken now. Instead: how can we use regulatory frameworks and things that already exist? Just being able to answer these questions from a product perspective is the starting point, I think. They were really emphasizing not putting the burden on the small company. Some of the big companies can say, "okay, we've done this, so we're going to put out regulatory guidelines we know we can already meet," where a smaller company might struggle. I think it's going to be a growing area; something is needed.

I was going to say I'm not a lawyer, but I actually do have a law degree; it's just twenty years old and not in this space. It's very interesting. I was thinking about this: we can regulate and have frameworks, but who's going to regulate bad actors? You're never really going to be able to control that. I think that would be more my worry.

The concept of AI transparency – why is that so important?

What this means is that the processes and logic within a system are explainable and understandable. AI can be applied to a wide range of things, but if you can't explain it, how can we trust it? As a society, would we want to put our trust in systems like that? It leads to distrust and a reluctance to use AI. Again, it comes back to things like which datasets are used for training and what algorithms are being used; that's the push for explainable AI.

On the other side, we have proprietary information. OpenAI, for example: their datasets are not open sourced, and we don't know all the datasets they have put the effort into gathering. Then we have the open-source models, which are more visible. So I think legislation or a regulatory framework has to get that balance right: are they going to insist that it's open and you do share? That's the kind of thing that would make it more easily understood how the AI came to the decision it made.

There's a lot of research going into this as well: how can you determine which datasets are making the most impact on the responses? Now that you've built this black box of technology, how can you get into its inner workings and really understand it? That research is really evolving as well.

We touched upon that crawl, walk, run approach to bring AI to the workplace. What’s your perspective on the future of AI and LLMs within the digital workplace over the next few years?

I think we're going to see, as I call it, the everyday AI. It's going to be readily available to people: if you need to summarize an article, you just ask AI to summarize it to bullet points, or draft a blog, or make a case for something. It's the combination of bringing your own knowledge to the model and bringing it all together. I think the assistant side of things is just going to continue to grow.

The other area, although it's related, is what we hear a lot about: AI agents. An AI agent is essentially a chatbot that uses a large language model, a conversational interface that uses the model to make decisions and perform actions. We are going to see more interactions with third-party systems. Your assistant can generate content for you, it can submit requests for you, it can look things up, and it will talk to more and more systems. To me, that's exciting, because you're getting one conversational interface that can do many things in your daily work. I love the idea of a true assistant you can talk to. How we interact with machines is definitely going to improve and change.
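The agent pattern described here (a conversational interface that decides which system to call, then performs the action) can be caricatured in a few lines of Python. In a real agent the LLM itself chooses the tool; in this hypothetical sketch a simple keyword match stands in for the model, and both tools are invented for illustration.

```python
# Toy sketch of the AI-agent pattern: a natural-language request comes in,
# a decision step picks a tool, and the tool performs the action.
# A keyword match stands in for the LLM's decision; tools are invented.

def summarize(text: str) -> str:
    """Stand-in for a summarization capability."""
    return f"Summary: {text[:30]}..."

def submit_ticket(text: str) -> str:
    """Stand-in for an action against a third-party system."""
    return f"Ticket submitted: {text}"

TOOLS = {
    "summarize": summarize,
    "ticket": submit_ticket,
}

def agent(request: str) -> str:
    """Pick a tool based on the request, then execute it."""
    for keyword, tool in TOOLS.items():
        if keyword in request.lower():
            return tool(request)
    return "Sorry, I don't have a tool for that yet."
```

The point of the pattern is the routing step: the end user sees one conversational interface, while behind it the agent decides which of many systems to invoke.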

What about the role of the employee as a builder of AI experiences?

More people should become builders, because we can make it easier and more seamless. If you can explain what you want, and we can use generative AI and code generation to actually enable those ideas and use cases, we can create experiences that may come from employees who wouldn't have been the builders in the past. Going forward, I'd definitely recommend a staged rollout: have an idea of where you want to go, but start small and incrementally add use cases so you can really see the value from them.

You've been building a lot of these experiences over the past few years at Workgrid, where the stance is to be proactive and meet the user where they happen to be, instead of being a blank bar on a page. I feel like there is a trust factor involved, where the assistant has to provide you with some sort of value.

The position at Workgrid is to have different types of interactivity. You should be able to ask the system for information, but you should also be able to receive information as well. It's a push and pull, which I think is important. A good natural language system, at a super high level, should be able to answer questions, and sometimes those answers come from different systems. It could be a search-based system, it could be Salesforce, it could be a wide range, but the end user doesn't care where the answer comes from; they just want the answer. So being able to interact seamlessly with multiple systems, and having those multiple systems interact seamlessly with the end client, is key: they should provide weekly reports and nudges for activity, so it's a push and pull of data. At Workgrid we have focused on both, whereas some chatbots focus on just a one-way integration. I think the whole assistant space will evolve rapidly.

I think the assistant market will have to collapse, because people aren’t going to want to go to Bot A for one question and Bot B for another. It’s kind of like what search is nowadays.

Exactly, you just want to simply interact. The end user doesn't care what system it is, so we just need to deliver that. User experience is very important, as is the underlying technology. That's where discoverability, understanding what's available, and knowing when to bring data to the chatbot come into play. Having a structured approach to the experience is a much easier place to start than a wide-open assistant.

So really, we start by looking across your digital workplace, understanding its vision, and aligning key use cases to introduce chat interfaces within the workplace. You could even focus on a specific line of business or domain, like the IT help desk, in combination with some of the delighters. I think summarization is pretty powerful: opening lighter-weight use cases to all employees while working on something heavier.

AI-created "hallucinations"

I love the term "hallucinations" as it pertains to AI. It's the idea that the AI comes up with things that aren't accurate. In an example I saw on the news, AI had created a company acquisition, with a specific amount and everything, but it didn't actually exist.

The thing is, in this example the AI is doing its job really well. It's not a mistake: it is a generative AI model, so its job is to generate text, right? That's what it's doing. But that's where prompt engineering comes in, providing context and guardrails to refine the responses. When we say prompt engineering, that covers zero-shot inference as well as providing examples and so on.

So at a super high level, it's doing what it was programmed to do; it's just not exactly what people want. It's important, then, when you bring data into a model that is specific to your company or digital workplace, to get citations and links back to documents so you understand where the response came from. Also, as a design principle, you need to make it clear that AI is being used. That is a core design principle for chatbots: don't pretend you are human, and make the limitations clear up front. People are more forgiving when they know it's technology.
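The prompt-engineering techniques mentioned here, zero-shot inference versus providing examples (few-shot), come down to what goes into the prompt. Here is a hypothetical sketch with invented example texts; a real system would send the resulting string to an LLM.

```python
# Sketch: zero-shot vs few-shot prompts differ only in whether worked
# examples are included before the actual input. Examples are invented.

EXAMPLES = [
    ("The rollout was smooth and everyone loved it.", "positive"),
    ("The app crashes every time I open it.", "negative"),
]

def few_shot_prompt(text: str) -> str:
    """Prepend labeled examples so the model can infer the task format."""
    shots = "\n".join(
        f"Text: {t}\nSentiment: {label}" for t, label in EXAMPLES
    )
    return (
        "Classify the sentiment of each text.\n\n"
        f"{shots}\nText: {text}\nSentiment:"
    )

def zero_shot_prompt(text: str) -> str:
    """No examples: rely entirely on the model's general understanding."""
    return f"Classify the sentiment of this text.\nText: {text}\nSentiment:"

p = few_shot_prompt("Great update!")
```

Few-shot examples act as exactly the kind of guardrail described above: they constrain the format and grounding of the response without retraining the model.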

What made you give up law and start coding?

I honestly thought it was boring. In university here, you make your decision when you're 17, and I just thought, "oh, you have good grades, do a law degree." People may think, "oh, she's a software engineer, is she trying to have a laugh?" but to me software engineering is such a creative role, and has been for me over the last 20 years. Being able to build and see things run in real time was the wow for me: I can build this and see it run on a computer, whereas law was a lot more drawn out.

What was your first wow moment when you created something?

The very first thing I programmed was from a PC magazine. I typed out Pong, and I didn't even know what I was doing, but I was like, "wow, look at that." The next one was in my master's program, and you're going to laugh: it was a website, a 1990s website with all the blinking lights and stuff. Coming forward a good few years, the chatbots and the AIs and voice interfaces, the ability to talk to a machine and get it to do what you want, that kind of blew my mind a bit. I think this is game-changing technology, and the more people use it, the more will probably agree.

This blog was adapted from The Workgrid, a podcast about the digital workplace, technology, and everything in between. For the complete episode, please visit: Getting Started with AI in the Digital Workplace
