5 Comments
Ruv Draba:

Hi Phil,

I'm an information engineer here via Loose Cannon, mainly because I haven't seen a conversation where an information engineer has talked about AI to someone practical, motivated and skeptical with no formal information engineering background.

That conversation would seem to me to be the essence of mutual respect in our society as well as critical to the ethics of engineering as a profession, so there's no excuse not to have it. You've talked about the potential benefits of 'volkspublishing', and I think that's one of them.

Yet I don't think that tech journalism is having it -- mainly because I think that tech journalists are largely incapable. Much like sailing journalists, they're packaging a feeling to compete for commercial attention. The audience is expected to follow passively, rather than ask questions. Meanwhile the better science communicators must compete with the 'prophets' (the tech sector has long called them 'evangelists', and the social media sector calls them 'ambassadors') for the same audience attention, and there's so much noise that only the hyperbole can be heard.

Call it a fallacy of progressivism, if you like: that the blindly self-interested application of expertise to innovation will naturally improve our social condition. Or call it 'cargo cult' -- the worship of whatever shiny new innovation 'lands' each year, as though from another magical planet. (There's perhaps a parallel in the recreational sailing industry too.)

I think you've reposted these articles because you want a practical conversation about AI function, hype, and potential reach and impact and are frustrated that it's not happening. I think that I may be able to help with that.

For disclosure, I don't work in AI. I work in informatics: the use of information to help us make better decisions. But my PhD was completed in an AI-related field during the Second Wave of AI popularity.

After the First Wave of the 1960s-70s, when AI was mainly interesting to theoreticians, came the Second Wave of the 1980s-90s, when growing computer power made it interesting to computer scientists. I'd say that we're currently in a Third Wave. Enabled by cloud computing and 'free' Web data, it's now interesting to tech entrepreneurs and tech stock investors, for the simple reason that new extractive industries always interest investors -- and I'm in no doubt that, like browser search engines and most popular social media, this is another extractive industry.

Meanwhile, I now have clients coming to me asking how AI can help them make better decisions -- which is another way of asking how AI can help them be more intelligent at work.

So: show me the intelligence indeed.

Poke if you'd like to chat.

Phil Friedman:

Ruv Draba, I agree with you in the main. Just for the record, my own graduate work was in philosophy and logic, with a strong interest in epistemology and philosophy of mind. But the main point of my "Show Me the Intelligence!" series (first published more than seven years ago) is that it doesn't take an engineering or scientific whiz to see through the pile of bull chips disgorged by the Prophets of Ai in the course of chasing the Profits of Ai. It only requires the exercise of a modest degree of intelligence (something which Ai itself lacks) to see through the commercial hype. (BTW, I insist on using the term Ai [lower case 'i'] in preference to all other designations, which imply a level of true intelligence that simply does not exist.)

Currently Ai is nothing more than a stochastic parrot which mimics intelligent discourse, but which is totally unintelligent, selecting potential responses to a query according to a detailed, but still unsophisticated popularity poll.
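
As a toy illustration of that 'popularity poll' (the corpus and code here are invented for illustration and stand in for no real system's internals): a bigram model that picks each next word in proportion to how often it followed the previous one in its training text.

```python
# A toy "popularity poll": given the current word, pick the next word
# in proportion to how often it followed that word in the training
# text. The corpus is invented for illustration.
import random
from collections import Counter, defaultdict

corpus = "the boat sails well the boat leaks the crew sails the boat".split()

# Tally how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Sample a successor weighted by observed frequency -- the 'poll'."""
    words, weights = zip(*follows[word].items())
    return random.choices(words, weights=weights)[0]

word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # fluent-looking, but only frequency-following
```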

After spending some seven years in the Gulag of Ai skeptics, I've felt encouraged by the emergence of sympathetic souls such as Gary Marcus (here on Substack at https://garymarcus.substack.com/) and Prof. J. Mark Bishop (on LinkedIn at https://www.linkedin.com/in/profjmarkbishop), who wrench the Ai conversation back to the real world.

Thank you for reading and commenting so eloquently. I am pleased to make your acquaintance. Cheers!

Ruv Draba:

Hi Phil,

Thank you for your thoughtful reply. You wrote:

> It only requires the exercise of a modest degree of intelligence (something which Ai itself lacks) to see through the commercial hype.

I strongly support your position on critical thought, but I think the overall analysis is trickier than that.

> Just for the record, my own graduate work was in philosophy and logic, with a strong interest in epistemology and philosophy of mind.

So as a philosopher of mind you're a rationalist, but as a sailor and boatbuilder I imagine that you're an empiricist?

Here's where we'll likely differ then.

I think that we don't have a viable rationalistic definition of intelligence, but we do have a viable empirical one.

Empirically we can define intelligence as effective adaptation to an unfamiliar problem.

This is a useful definition because we can apply it behaviourally -- to animals, for example, and to information systems. We don't need some authoritative theory of mind to understand some key functions of mind and their uses.

Yet it's also a potentially misleading definition because we can get confused over what problems are familiar vs unfamiliar. We have to excavate problem representation (language) to make accurate sense of it. Else we don't know whether adding 3 + 3 is a different problem to adding 2 + 2 -- it might be the same problem for humans, but a different problem for ravens, say.
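
A minimal sketch of that familiar/unfamiliar distinction (all names here are invented): a memoriser only handles problems it has already seen, while a rule adapts to new instances of the same problem class.

```python
# A memoriser "solves" only familiar problems; a rule generalises to
# unfamiliar instances of the same problem class.

seen = {(2, 2): 4, (1, 3): 4}  # the "familiar" problems

def memoriser(a, b):
    return seen.get((a, b))   # None for anything unfamiliar

def rule(a, b):
    return a + b              # adapts to unseen instances

for problem in [(2, 2), (3, 3)]:
    print(problem, "memoriser:", memoriser(*problem), "rule:", rule(*problem))
# (2, 2) memoriser: 4 rule: 4
# (3, 3) memoriser: None rule: 6  <- only the rule adapts to the new case
```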

You're right that skepticism about AI is warranted, because the technology is obsessed with scraping representations and inferring statistical associations. And you've rightly said that this isn't in itself formal reasoning.

But formal reasoning isn't the pinnacle of human problem-solving, even though we've been taught since the Bronze Age that it is. Formal reasoning is a kind of linguistic accounting and in STEM its main job is to exclude lines of inquiry that are inconsistent or wasteful, given what else we know. Think of it as a leaf-skimmer in a pool filter: it doesn't do the actual filtering, but is great at helping the filter do its job. And being linguistic, it doesn't in itself produce anything reliable, which is why boat-builders don't just draft designs, but also build and test them.

So here's where I think we agree: we're over-estimating the intelligence in what AI is currently doing. That's commercially driven by the evangelists and supported by sensationalistic pop science reporting.

Where I think we disagree: even granted that, it's not true that there's no intelligence, and in any case, that's more an objection to the marketing than an appraisal of the tech. In a tech-and-society context, I think that the most salient questions are what it's replacing, how we'll likely abuse it, and what else it's costing.

It's a pleasure to meet you too, Phil. Thank you for the links.

Phil Friedman:

Ruv - The discussion could go on ad infinitum. I'd only add at this point that nobody in Ai has, to my knowledge, yet dealt with the ability of human intelligence to form correct (or, at least, correctly predictive) conclusions from inadequate data sets -- like traveling in quantum leaps through space-time wormholes. And that fact only reinforces my skepticism about the level of understanding of human intelligence amongst Ai cheerleaders. Cheers!

Ruv Draba:

Phil, I sympathise with your outrage at the misrepresentations in industry discussion, and I respect the research and reflection that you've already done.

Yet you may find that a lot of the marketing problems go away if you call it 'Machine Learning', as I do. It's accurate about the techniques, sidesteps cultural interpretations of intelligence, obviates political fights over thresholds of success, and helps deflate the sails of the tech evangelists.

Regarding dealing with partial information: all machine learning must do that, because ultimately all data sets are partial. ML's predictive accuracy when trained on partial information is an integral part of its engineering and assessment (more detail on request).
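
A minimal sketch of that assessment practice, assuming scikit-learn and NumPy are available (the data here is a toy, invented for illustration): fit on one slice of the data, then score on a held-out slice the model never saw during training.

```python
# Standard held-out evaluation: train on partial data, measure
# predictive accuracy on data withheld from training.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))             # 200 toy observations, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # a simple underlying rule to learn

# Deliberately withhold 30% of the data from training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("accuracy on unseen data:", accuracy_score(y_test, model.predict(X_test)))
```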

Meanwhile, humans are generally poor at dealing with partial information too -- that's why we publish lists of fallacies, why natural scientists, social scientists and psychologists are so careful with the way they present results, why historians need historiographers to help keep them honest and why serious journalists watch each other like hawks.

Regarding stochastics, our neurology, our evolutionary biology and our world itself are built on stats and thresholds -- it's only our language and cultural traditions that happen to be built from binary trues and falses. So that's a limitation of culture, not the technology.
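
To make the stats-and-thresholds point concrete (the numbers are invented): the underlying signal is a graded probability, and the binary 'true/false' only appears once we impose a cut-off on it.

```python
# The signal is graded and probabilistic; the binary verdict exists
# only because we choose a threshold.
import math

def sigmoid(x):
    """Map accumulated evidence to a graded probability in (0, 1)."""
    return 1 / (1 + math.exp(-x))

for evidence in [-2.0, -0.3, 0.1, 2.5]:
    p = sigmoid(evidence)    # graded, like a firing threshold building up
    verdict = p >= 0.5       # binary only after we pick a cut-off
    print(f"evidence={evidence:+.1f}  p={p:.2f}  verdict={verdict}")
```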

Perhaps a more concerning question is how much semantic mapping and real-world modelling this tech needs to innovate on things we care about, and how badly it may fall short from the lack. We saw with Covid-19 what happens when humans don't understand epidemiology but think it's okay to argue emergency pandemic policy. Like you I don't think that machine-learning from web-scrapings would improve that, but as a single example, it could easily make things much, much worse.

Yet I realise now that you've reposted these articles for entertainment and site-promotion rather than to foster discussion. I accept them in that spirit then and will leave you to it. You know where I am if you want to chat.

With warm regards, RD
