Google and Microsoft are on a mission to remove the drudgery from computing, by bringing next-generation AI tools as add-ons to existing services.

On March 16, Microsoft announced that an AI-powered system called Copilot will soon be introduced to its Microsoft 365 suite of apps, including Word, Excel, PowerPoint, Outlook and Teams.

The news came about two days after Google published a blog post explaining its plans to embed AI into its Workspace apps, such as Docs, Sheets, Slides, Meet and Chat.

Collectively, millions of people use these apps every day. Bolstering them with AI could provide a significant productivity boost – as long as security isn’t an afterthought.

The advent of generative AI

Until recently, AI was mainly used for categorisation and identification tasks, such as recognising a number plate using a traffic camera.

Generative AI allows users to create new content by applying deep-learning algorithms to big data. ChatGPT and DALL-E, among others, have already taken the world by storm.

Now, Microsoft and Google have found a more concrete way to bring generative AI into our offices and classrooms.

Like other generative AI tools, Copilot and Workspace AI are built on large language models (LLMs) trained on massive amounts of data. Through this training, the systems have “learned” many rules and patterns that can be applied to new content and contexts.
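
To make that idea concrete, here is a minimal sketch of how an application can ask a pre-trained LLM to apply its learned patterns to content it has never seen. It uses OpenAI’s public Python client as a stand-in; neither Microsoft nor Google has published the internals of its own integration, and the meeting notes and prompt below are invented for illustration.

```python
# Minimal sketch: applying a pre-trained LLM to brand-new content via
# OpenAI's public Python client. This is a generic illustration, NOT
# Microsoft's or Google's actual integration; the notes and prompt are
# invented examples.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

meeting_notes = (
    "Alice: Q3 budget approved, pending sign-off. "
    "Bob: hiring freeze lifted for the platform team. "
    "Carol: product launch moved from August to October."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # The model was never trained on these notes; it applies patterns
        # learned during training to summarise them on the fly.
        {"role": "system", "content": "Summarise meeting notes as three short bullet points."},
        {"role": "user", "content": meeting_notes},
    ],
)

print(response.choices[0].message.content)
```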

Microsoft’s Copilot is being trialled with just 20 customers, with details about availability and pricing to be released “in the coming months”.

Copilot will be integrated across the apps to help expedite tedious or repetitive tasks. For example, it will:

  • help users write, edit and summarise Word documents
  • turn ideas or summaries into full PowerPoint presentations
  • identify data trends in Excel and quickly create visualisations
  • “synthesise and manage” your Outlook inbox
  • provide real-time summaries of Teams meetings
  • bring together data from across documents, presentations, email, calendar, notes and contacts to help write emails and summarise chats.

Assuming it executes these tasks effectively, Copilot will be an enormous upgrade from Microsoft’s original Office Assistant, Clippy.

Google’s Workspace AI will offer similar capabilities for paying subscribers.

What’s under the hood?

Microsoft described Copilot as a

sophisticated processing and orchestration engine working behind the scenes to combine the power of LLMs, including GPT-4 […].

We don’t know specifically which data GPT-4 itself was trained on, just that it was lots of data taken from the internet and licensed, according to OpenAI.

Google’s Workspace AI is built on PaLM (Pathways Language Model), which was trained on a combination of books, Wikipedia articles, news articles, source code, filtered webpages and social media conversations.

Both systems are integrated into existing cloud infrastructure. This means all the data they’re applied to will already be online and stored in company servers.


The tools will need full access to the relevant content in order to provide contextualised responses. For instance, Copilot can’t distil a 16-page Word document into one page of bullet points without first analysing the text.
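
As a rough illustration of that requirement, consider a generic summarisation pipeline: the entire document has to be extracted and handed to the model before any bullet points can come back. The sketch below uses the python-docx library and OpenAI’s client; the file name is hypothetical, and this is not Copilot’s actual implementation.

```python
# Generic summarisation pipeline, sketched under assumptions: the file
# name is hypothetical and this is NOT how Copilot is actually built.
# The point is that the full text must be extracted and sent to the
# model -- it cannot summarise content it hasn't been given.
from docx import Document  # pip install python-docx
from openai import OpenAI

doc = Document("quarterly_report.docx")
full_text = "\n".join(paragraph.text for paragraph in doc.paragraphs)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Distil this document into one page of bullet points:\n\n"
                   + full_text,
    }],
)

print(response.choices[0].message.content)
```

In practice, a 16-page document may exceed a model’s context window, so real pipelines often split the text into chunks and summarise the pieces first; either way, the tool still needs access to every page.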

This raises the question: will users’ information be used to train the underlying models?

In relation to this point, Microsoft has said:

Copilot’s large language models are not trained on customer content or on individual prompts.

Google has said:

[…] private data is kept private, and not used in the broader foundation model training corpus.

These statements suggest the 16-page document itself won’t be used to train the algorithms. Rather, Copilot and Workspace AI will process the data in real time.

Given the rush to develop such AI tools, there may be temptation to train them on “real” customer-specific data in the future. For now, however, it seems this is being explicitly excluded.



Usability concerns

As many people noted following ChatGPT’s release, text-based generative AI tools are prone to algorithmic bias. These concerns will extend to the new tools from Google and Microsoft.

The outputs of generative AI tools can be riddled with inaccuracies and prejudice. Microsoft’s own Bing chatbot, which also runs on GPT-4, came under fire earlier this year for making outrageous claims.

Bias occurs when large volumes of data are processed without appropriate selection or understanding of the training data, and without proper oversight of training processes.

For example, much of the content online is written in English – which is likely the main language spoken by the (mostly white and male) people developing AI tools. This underlying bias can influence the writing style and language constructs understood by, and subsequently replicated by, AI-driven systems.

For now, it’s hard to say exactly how problems with bias might present in Copilot or Workspace AI. As one example, the systems may simply not work as effectively for people in non-English-speaking countries, or for those who speak different varieties of English.



Security concerns

One major vulnerability in Microsoft’s and Google’s AI tools is that they may make it much easier for cybercriminals to bleed victims dry.

Whereas before a criminal might have needed to trawl through hundreds of files or emails to find specific data, they can now use AI-assisted features to quickly collate and extract what they need.

Also, since there’s so far no indication of offline versions being made available, anyone wanting to use these systems will have to upload the relevant content online. Data uploaded online are at greater risk of being breached than data stored only on your computer or phone.

Finally, from a privacy perspective, it’s not particularly inspiring to see yet more avenues through which the biggest corporations in the world can collect and synthesise our data.



This article was originally published at theconversation.com