
How to Use AI Safely and Responsibly at Work

Artificial intelligence (AI) is a very new technology, and it’s far from perfect. Every employee should know the risks of using AI in the workplace and how to protect against them.

AI has dramatically changed the world of work. Its benefits include increased productivity and efficiency, new tools to help customers, and more motivating work environments.

However, AI is still a work in progress. The companies that make AI tools are working to reduce the risks, but the versions already released and in wide use still carry them.

What are the risks of using AI?

Falsifications or hallucinations

AI tools are only as good as the data they have been trained on, and this data can be wrong. AI has also been found to make up incorrect answers to questions. These fabricated answers are called hallucinations because the AI presents them confidently, even when they are wrong.

Bias, stereotyping, and discrimination

AI is trained on human-provided data and may have access to the internet. It can pick up biased or discriminatory information and then reflect this bias in its output.

For example, when AI reviews resumés, it can do so based on historical information about workers that does not reflect today’s society and diversity. AI might exclude a candidate because of this learned bias.

Poor-quality results

Generative AI does not always understand context or small differences in what words mean. It can write, but the text it creates often does not sound human. It works by predicting what the next part of its response should be, based on patterns in its training data and what it has been asked to do. Because of these limitations, the results it produces sometimes don’t make sense in the real world.

Privacy and security risks

Generative AI tools constantly learn from the data provided to them. They may store or use private or sensitive data to produce new outputs.

Confidential and sensitive personal, customer, company, and financial information is not secured or protected by AI. This type of information should not be provided to an AI tool. Sharing information with AI may violate privacy legislation or confidentiality agreements.

If confidential, private, or sensitive data is given to AI, it can be accessed by other users, hackers, or malicious actors. These users may use AI tools for cyberattacks and to bypass security measures. They can also use AI to fool people into revealing passwords and other such security information.

Ethical problems

It’s important to remember that AI does not always reflect human characteristics, such as moral and ethical values and a basic understanding of right and wrong. AI developers build rules into their systems to guard against this, but these safeguards don’t always work.

Another ethical concern is that AI will sometimes copy or reproduce copyrighted or protected works by authors and artists.

Unintended consequences

AI can demonstrate unexpected behaviours and say or write things that could have negative effects on people, companies, or society.

Governments are working to regulate AI and to pass legislation that protects against these risks. The Canadian federal government has proposed the Artificial Intelligence and Data Act (AIDA). This act would regulate the use of AI, building on existing human rights and consumer protection laws.

How to use AI safely

There are many ways to protect yourself, your employer, and your customers against the risks of AI.

Know your employer’s AI policy

Your employer has likely decided or will decide on rules for using AI in your workplace. The first step to using AI at work is to understand these rules, which are designed to protect you against AI risks.

Never disclose sensitive information

Even if information is “de-identified,” meaning it does not include a name or other personally identifying details, it may still be possible to link it to an individual. Sharing it may therefore still violate privacy legislation or confidentiality agreements.

Do not share:

  • Personal information, including names, addresses, and ages
  • Driver’s licences, social insurance numbers, or personal financial information
  • Police checks, medical records, employee records, or salary records
  • Company accounts, customer information, or company financial information
  • Information covered by privacy legislation, confidentiality agreements, and non-disclosure agreements

Understand copyright and intellectual property issues

Be aware that generative AI tools may use copyrighted data owned by other parties, creating a liability risk. Your employer’s own information and intellectual property may also be put at risk if they are shared with an AI tool.

Be aware of hallucinations, bias, and ethical issues

Because AI can be incorrect or “hallucinate,” show bias, or produce unethical content, watch for these problems in any AI output. Use your own judgment or further research to assess accuracy and appropriateness. If you are concerned, you can ask a co-worker or manager for help.

Ask AI tools for sources

Some AI tools will automatically provide citations or sources. When you are using AI, you can also ask for sources.

So, instead of using the prompt “Name the 5 largest companies in Canada,” you can use the prompt “Name the 5 largest companies in Canada, providing sources or evidence to support the answer.”

Use AI inside your company first

Everyone is learning how to use AI tools. At first, you might try using AI “behind the scenes” in your workplace rather than to produce content that customers will see.

Take responsibility for your content

If you are using AI to complete parts of your job, you need to take responsibility for its work. Review everything thoroughly and make sure you are as satisfied with the result as if you had done the work yourself.

Keep developing your AI skills

As you learn to use AI and build your skills, you will feel more confident about recognizing and handling the potential risks of AI.

Raise any concerns immediately

If you do have concerns about AI output or behaviour, it’s wise to raise the problem with a supervisor or manager.
