Mission Accomplished! The wrong definition of Intelligent Automation can make you think you’re done, when you’re only just getting started.

Intelligent Automation is commonly defined as simply the combination of Robotic Process Automation and Artificial Intelligence.

But a definition like this betrays a technology-first perspective that actually loses sight of the promise of Intelligent Automation and the benefits that people and organizations can receive when that promise is realized.

Intelligent Automation is THE means to accomplish Digital Transformation, which involves the use of digital technologies to become responsive and resilient in a world defined by a rapid and accelerating rate of change (an acceleration that is itself the result of the widespread adoption of digital technologies). …



It is remarkable how many people with careers in community management don’t actually have a clear idea of what community is. Without a clear working definition, there is a tendency to simply do some stuff that involves some people, slap the word ‘community’ on what they’ve done, and call it a day. As if simply calling something a community is enough to make it happen. Like magic.

If only community was that easy.

I talk about the concept of community at length in my book, and at some point I’ll write a post exploring the implications of the term and its history for technology companies, but for now I’d like to briefly share three things that are frequently called ‘community’ despite not having very much to do with community at all. …


  • Written in 1920, R.U.R. (Rossum’s Universal Robots) by Karel Čapek is best known for having coined the term ‘robot.’ Although derided by many (including Isaac Asimov, who called the play ‘terribly bad’), it anticipates and responds to an important argument that continues to be used to justify automation projects today.
  • The most common argument for automation (one that is used by almost every vendor) is not new. It dates back to Aristotle who used the same logic to justify using slaves, women, and children in similar ways.
  • The most important contribution of Čapek’s play is not just that it coins a term, but in the work the term does. By deliberately connecting automation with Aristotelian slavery, and then viewing the results through a pragmatic lens, Čapek challenges us to consider the consequences of a technology-centered approach to automation and consider whether a more human approach is possible. …


One of the biggest problems with ‘digital transformation’ is that everyone uses the term differently.

At times, ‘digital transformation’ is used to describe a set of social conditions. At other times, it refers to something we have to do. At still others (and more commonly), it is something we must consume. Simon Chan has lamented that the term has ‘morphed into a bit of a beast. A “catch all” banner for the marketing of any IT related products and services.’ But this ambiguity is not something that evolved over time.

It was there from the start.

According to Chan, the term ‘digital transformation’ was first coined by the Capgemini Consulting group in the first edition of its Digital Transformation Review. I read it. Truth be told, I wasn’t expecting anything of substance from a rag like this, but it turns out to be a truly remarkable collection. …



I’m really interested in how ideas become things with the power to shape reality. My interest is not idle. It’s also not strictly academic (despite the fact that I have written a book on the subject). It comes from a desire to explode hype cycles by working with businesses to understand and address real issues instead of being distracted by secondary anxieties created by marketers and industry ‘experts.’

So let’s talk about ‘digital transformation.’

The language that is most commonly used to describe ‘digital transformation’ makes a crucial mistake. It treats ‘digital transformation’ as a thing. More than a thing, ‘digital transformation’ is talked about as a thing that businesses need and can consume. …



Our current use of AI in higher education involves automating parts (and at times the whole) of the human decision-making process. Where there is automation there is standardization. Where there are decisions, there are values. As a consequence, we can think of one of the functions of AI as the standardization of values. Depending on what your values are, and the extent to which they are reflected by algorithms as they are deployed, this may be more or less a good or bad thing.

Augmenting Human Decision-Making

An example of how AI is being used to automate parts of the decision-making process is through nudging. According to Thaler and Sunstein, the concept of nudging is rooted in an ethical perspective that they term ‘libertarian paternalism.’ Wanting to encourage people to behave in ways that are likely to benefit them, but also not wanting to undermine human freedom of choice (which Thaler, Sunstein, and many others view as an unequivocal good), nudging aims to structure environments so as to increase the chances that human beings will freely make the ‘right decisions.’ In higher education, a nudge could be something as simple as an automated alert reminding a student to register for the next semester or begin the next assignment. It could be an approach to instructional design meant to increase a student’s level of engagement in an online course. It could be student-facing analytics meant to promote increased reflection about one’s level of interaction in a discussion board. Nudges don’t have to involve AI (a grading rubric is a great example of a formative assessment practice designed to increase the salience of certain values at the expense of others), but what AI allows us to do is to scale and standardize nudges in a way that was, until recently, unimaginable. …
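To make the ‘scale and standardize’ point concrete, here is a minimal sketch of the simplest nudge mentioned above: an automated registration reminder. The student fields and the send_reminder helper are hypothetical placeholders, not a reference to any actual campus system or vendor API.

```python
# A minimal sketch of an automated registration nudge.
# All names (Student fields, send_reminder) are illustrative assumptions.
from dataclasses import dataclass
from datetime import date


@dataclass
class Student:
    name: str
    email: str
    registered_for_next_term: bool


def send_reminder(student: Student, deadline: date) -> None:
    # Stand-in for an email or LMS notification call.
    print(f"Reminder to {student.email}: register before {deadline.isoformat()}")


def nudge_unregistered(students: list[Student], deadline: date) -> None:
    # The 'nudge': only unregistered students are prompted, and the choice
    # to register (or not) remains entirely theirs.
    for student in students:
        if not student.registered_for_next_term:
            send_reminder(student, deadline)


if __name__ == "__main__":
    roster = [
        Student("Ada", "ada@example.edu", False),
        Student("Blaise", "blaise@example.edu", True),
    ]
    nudge_unregistered(roster, date(2025, 8, 15))
```

The point of the sketch is not the code itself but what it makes visible: a single rule, encoding a single set of values, applied identically to every student on the roster.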



How should we approach the evaluation of predictive models in higher education?

It is easy to fall into the trap of thinking that the goal of a predictive algorithm is to be as accurate as possible. But, as I have explained previously, the desire to increase the accuracy of a model for its own sake reflects a fundamental misunderstanding of the purpose of predictive analytics. The goal of predictive analytics in identifying at-risk students is not to ‘get it right,’ but rather to inform action. …
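As one hypothetical illustration of what ‘informing action’ can look like in practice: in the sketch below, the number of students flagged for outreach is set by advising capacity (how many students staff can actually contact) rather than by chasing another point of model accuracy. The risk scores, student IDs, and function name are invented for the example.

```python
# A minimal sketch of 'accuracy vs. action': given hypothetical risk scores,
# the alert list is sized by capacity to act, not by a raw accuracy metric.
def select_students_for_outreach(risk_scores: dict[str, float], capacity: int) -> list[str]:
    # Rank students by predicted risk and take only as many as advisors
    # can realistically reach this week.
    ranked = sorted(risk_scores, key=risk_scores.get, reverse=True)
    return ranked[:capacity]


if __name__ == "__main__":
    scores = {"s001": 0.91, "s002": 0.34, "s003": 0.78, "s004": 0.65}
    print(select_students_for_outreach(scores, capacity=2))  # ['s001', 's003']
```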



This is the second in my series on common misunderstandings about predictive analytics that hinder their adoption in higher education. Last week I talked about the language of predictive analytics. This week, I want to comment on another common misconception: that predictive analytics (and educational data mining more generally) is a social science.

I began my college journey as a musician. I played jazz and classical guitar, and received several scholarships in support of a music degree. During my first semester in school, a friend introduced me to the music of Rage Against the Machine … and I had absolutely no idea what I was listening to. I had no real frame of reference, and no way to immediately make sense of it. …



The greatest barrier to the widespread impact of predictive analytics in higher education is adoption. No matter how great the technology is, if people don’t use it effectively, any potential value is lost.

In the early stages of predictive analytics implementations at colleges and universities, a common obstacle comes in the form of questions that arise from some essential misunderstandings about data science and predictive analytics. Without a clear understanding of what predictive analytics are, how they work, and what they do, it is easy to establish false expectations. …



In higher education, and in general, an increasing amount of attention is being paid to questions about the ethical use of data. People are working to produce principles, guidelines and ethical frameworks. This is a good thing.

Despite being well-intentioned, however, most of these projects are doomed to failure. The reason is that, amidst talk about arriving at an ethics, or developing an ethical framework, the terms ‘ethics’ and ‘framework’ are rarely well-defined from the outset. …

About

Timothy Harfield

Engaging communities at the intersection of humanism and technology.
