Education 2.0 & 3.0
148.6K views | +3 today
All about learning and technology
Curated by Yashy Tohsaku
Rescooped by Yashy Tohsaku from Information and digital literacy in education via the digital path

Artificial intelligence can deepen social inequality. Here are 5 ways to help prevent this

From Google searches and dating sites to detecting credit card fraud, artificial intelligence (AI) keeps finding new ways to creep into our lives. But can we trust the algorithms that drive it?

As humans, we make errors. We can have attention lapses and misinterpret information. Yet when we reassess, we can pick out our errors and correct them.

But when an AI system makes an error, that error will be repeated every time the system processes the same data under the same circumstances.

AI systems are trained using data that inevitably reflect the past. If a training data set contains inherent biases from past human decisions, these biases are codified and amplified by the system.
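As a toy illustration of that last point, consider a trivial "model" that learns hiring rates from historical decisions by simple frequency counting. All data here are hypothetical, and this is a deliberately minimal sketch, not how production systems are built — but it shows how a past bias is codified and then repeated deterministically:

```python
# Toy sketch with hypothetical data: a model trained on biased
# historical decisions reproduces that bias every time it runs.
from collections import defaultdict

# Hypothetical training records: (group, qualified, hired), where
# equally qualified candidates from group "B" were hired less often.
history = [
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", True, True), ("B", False, False),
]

def train(records):
    """Estimate P(hired | group, qualified) by frequency counting."""
    counts = defaultdict(lambda: [0, 0])  # (group, qualified) -> [hired, total]
    for group, qualified, hired in records:
        counts[(group, qualified)][0] += int(hired)
        counts[(group, qualified)][1] += 1
    return {key: hired / total for key, (hired, total) in counts.items()}

model = train(history)

def predict(group, qualified):
    # Predict "hire" only when the historical hire rate exceeds 50%.
    return model[(group, qualified)] > 0.5

# Two equally qualified candidates get different predictions, and
# rerunning on the same data yields the same outcome every time.
print(predict("A", True))  # True
print(predict("B", True))  # False: the historical bias, now codified
```

Unlike a human reviewer, this model cannot "reassess" and notice its mistake; the bias in the training set is frozen into its behavior.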

Via Elizabeth E Charles
Rescooped by Yashy Tohsaku from Information and digital literacy in education via the digital path

How BERT Will Change the Way You Search


Welcome, BERT

Your internet searches are making Google one smart cookie, thanks to artificial intelligence.

For quite some time, algorithms have quietly worked their way through search engines, analyzing and ranking keywords. The newer search-ranking system is Bidirectional Encoder Representations from Transformers (BERT), which arrived in the search room in October 2019.

BERT is an artificial intelligence algorithm designed to understand subtleties in language. Its algorithms can distinguish between uses of prepositions like "to" and correctly determine the relationships between words and phrases. It reads nuances.
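A toy contrast shows why that matters (this is plain keyword matching for illustration, not BERT itself, and the phrases are just made-up examples): a bag-of-words ranker cannot tell "brazil traveler to usa" apart from "usa traveler to brazil", while an order-sensitive reading can separate who is traveling where — the kind of word-to-word relationship BERT captures from context:

```python
# Toy sketch: keyword matching vs. an order-sensitive reading.
def bag_of_words(text):
    """Reduce a phrase to its unordered word multiset."""
    return sorted(text.lower().split())

doc_a = "brazil traveler to usa"  # someone from Brazil heading to the USA
doc_b = "usa traveler to brazil"  # someone from the USA heading to Brazil

# Keyword matching: same words in both, so the two phrases look identical.
print(bag_of_words(doc_a) == bag_of_words(doc_b))  # True

def direction(text):
    """Read origin and destination from the words around 'to'."""
    words = text.lower().split()
    i = words.index("to")
    return words[0], words[i + 1]

# Reading the preposition in context separates the two meanings.
print(direction(doc_a))  # ('brazil', 'usa')
print(direction(doc_b))  # ('usa', 'brazil')
```

BERT does not use a hand-written rule like `direction()`, of course — it learns such relationships statistically from context on both sides of each word, which is what "bidirectional" refers to.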


Via Elizabeth E Charles
Rescooped by Yashy Tohsaku from Information and digital literacy in education via the digital path

the bigot in the machine


The New York Technical Services Librarians, an organization that has been active since 1923 – imagine all that has happened in tech services since 1923! – invited me to give a talk about bias in algorithms. They quickly got a recording up on their site and I am, more slowly, providing the transcript. Thanks for the invite and all the tech support, NYTSL!

The Bigot in the Machine: Bias in Algorithmic Systems

Abstract: We are living in an "age of algorithms." Vast quantities of information are collected, sorted, shared, combined, and acted on by proprietary black boxes. These systems use machine learning to build models and make predictions from data sets that may be out of date, incomplete, and biased. We will explore the ways bias creeps into information systems, take a look at how "big data," artificial intelligence, and machine learning often amplify bias unwittingly, and consider how these systems can be deliberately exploited by actors for whom bias is a feature, not a bug. Finally, we'll discuss ways we can work with our communities to create a fairer and more just information environment.
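One concrete way communities audit such systems (a minimal sketch with hypothetical numbers, and one simple heuristic rather than the talk's own method) is the "four-fifths rule": if any group's selection rate falls below 80% of the highest group's rate, the system is flagged for possible disparate impact and closer review:

```python
# Minimal disparate-impact check using the four-fifths heuristic.
# All counts are hypothetical.
def selection_rate(selected, total):
    """Fraction of a group that received the favorable outcome."""
    return selected / total

rates = {
    "group_a": selection_rate(60, 100),  # 0.60
    "group_b": selection_rate(30, 100),  # 0.30
}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by the highest."""
    return min(rates.values()) / max(rates.values())

ratio = disparate_impact_ratio(rates)
print(round(ratio, 2))  # 0.5
print(ratio < 0.8)      # True: below the four-fifths threshold
```

A check like this does not prove bias on its own, but it gives a community a transparent, reproducible number to bring to the operators of an otherwise opaque black box.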


Via Elizabeth E Charles