Hadrien Laporte on the Jealousness of Smarts

There is a form of social illness whose sufferers can not only not tolerate encountering different opinions and ideas in person, but cannot bear the thought that such things might exist unencountered in the minds and conversations of others. If they can’t observe these thoughts and aren’t party to such conversations, how can they demonstrate their moral and intellectual superiority by intervening with The Truth?

– Hadrien Laporte

On the Obsolescence of Search Engines (Part 3)

Part 1
Part 2

After a couple more weeks of playing around with AI (Grok) for search, I now find it hard to go back to DDG or any other conventional search engine.

What a godawful waste of time those things have become. The first page of search results for most queries is now almost entirely ads, followed by results sometimes tangentially related to the query terms, but usually based on “clearly you really meant x when you asked about y” substitutions that probably make someone money somewhere. None of it makes sense, and unlike the old electronic Reader’s Guide to Periodical Literature, sophisticated queries (booleans, required terms, etc.) don’t work reliably, or at all, anymore.

Why the search engines have steadily reduced the utility of their product over the past fifteen years baffles me, but they’ve all done it, so there must be some financial benefit to doing it (ad revenues from skewing search results, etc.). Amazon’s on-site search engine does exactly the same thing, perverting even simple queries so that your search always returns umpteen pages of unrelated products from unpronounceable Chinese Alibaba resellers.

Grok is not just more responsive, it’s more efficient, because it returns not websites but actual content. And not just content, but digested content, summarized to the degree you request, along with options to expand on your queries, using the prior queries and responses to inform suggestions for further exploration and interpretation.

I’m coming to enjoy our new robot overlords.

There is an issue I’ve noticed, however. Grok makes errors. Spectacular errors in some cases. While I was researching a suitable location for a story on Mars, it argued with me that the coordinates I had picked from a map were not in a crater, only to finally relent and admit that it was wrong: the location was in fact in a crater. Some searches went hilariously sideways until I got it to admit it was creatively embellishing things, at which point I learned to include “vfo” (verified facts only) in certain queries prone to these flights of fancy.

The funniest experiences so far have been with music video searches. “Careless Whisper” came up on a YouTube mix while I was working. Out of curiosity, I asked Grok where the video had been filmed (Miami, it turns out – it had looked like someplace actually exotic to me). It returned a list of where each scene had been filmed – only, the scenes described didn’t fit what was in the video. I asked it about some of the mystery scenes, and it provided detailed breakdowns of sequences that weren’t in the video at all. Even the scenes that were in the video were described entirely differently from how they actually appeared.

The best of these so far has come from asking it to summarize the symbolism of the video for Wardruna’s “Raido”. I was expecting something about the rider freeing himself and the horse from their bonds, growing from their subsequent journey together, finally finding independence and going their separate ways. I was not expecting a vivid description of the rider mounting the horse, journeying to a mead hall, slaughtering a horde of witches, and being carried off to Valhalla while the horse gets a Viking ship funeral. I’m sure that would have been an interesting adaptation, but it has nothing whatsoever to do with Einar Selvik and his horse trotting along a beach and through a meadow together.

I’m actually relieved to find that one can’t fully trust the AIs (at least at this point). It means that they remain assistants for now, and not substitute brains we can use to do all of our thinking. Unfortunately, that situation probably won’t last, but by the time they reach that point, Elon will probably have Neuralink advanced enough that they can be spliced into your skull-meat…


Avery Easton on the Blindness of Experts

It disturbs me that it escapes so many ‘experts’ and ‘intellectuals’ that their supposed contributions to human knowledge are not just frivolous or pedantic, but often intellectually destructive, civilizationally corrosive, dehumanizing, and ugly.

What scares me is that there are some who do in fact understand this. And embrace it.

– Avery Easton

On the Obsolescence of Search Engines (Part 2)

(Part 1 here.)

About a month ago, I was searching for something using Duck Duck Go. I wasn’t finding it (unsurprisingly), but there was something in a grey box at the top of the page – an AI interpretation of my query, offering additional information. At first I ignored it, annoyed at yet another intrusive “feature” added to “enhance my search experience” or whatever, but not sufficiently annoyed to add it to my browser’s page-element blocking filters, alongside YouTube Shorts rows and everything on the Amazon front page except the search field and account-related links.

I was gradually tempted to pay attention to it, though, and then to play around with it. Soon, I found myself using it to do searches instead of the DDG search field, because it was (to my great amusement) doing exactly what I pictured the early form of simulacrum intelligence envisioned for the old fictional universe doing: wading through terabytes of internet trash, and past the uselessness of the regular search engine, to find relevant information.

When I was asked two weeks ago to sign certain business documents, I was tempted to sign up for Grok or ChatGPT to try out their review capabilities (a lawyer friend has raved for the past 2-3 years about doing this in his practice). I didn’t, though, thinking that I wouldn’t have enough time to set up an account and figure out how it worked. It turned out to be easier to use than I expected, but by that time I was already done with said documents.

But that got me to play around with Grok.com a bit more for story research purposes. Over this same period, I was working on a spreadsheet for estimating the chances of the protagonist of “Beneath a Silent Sky” making it back to base given the limited supplies he has on hand. I’d already worked out much of it based on research conducted the usual way, plus some reference to college textbooks, but I was struggling with how to avoid having him carry a wheelbarrow full of batteries. So, I tried Grok…

…and it took less than an hour to figure out an energy storage method based on near-term technology compact enough to work for this purpose. I then repeated the process on oxygen storage, with similar results. Rather than a wheelbarrow for the batteries alone, the protagonist merely needs a backpack the same size as the one I use for hiking to carry all of the energy and oxygen he requires. (The fact that he has drama-enhancing problems with his consumables along the way is an artistic choice and the result of circumstances, not inherent in either fictional technology.)
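For the curious, the arithmetic behind that spreadsheet is simple; the hard part was finding credible near-term storage figures. Here is a minimal sketch of the kind of estimate involved, where every number is a hypothetical placeholder rather than a figure from the story, its spreadsheet, or Grok:

```python
# Back-of-the-envelope consumables estimate for a long surface trek.
# Every constant below is a hypothetical placeholder, not a figure from
# the story or from Grok.

TREK_HOURS = 60.0             # total time away from base
SUIT_POWER_W = 150.0          # average suit + equipment power draw, watts
BATTERY_WH_PER_KG = 400.0     # specific energy of the assumed storage, Wh/kg
O2_KG_PER_HOUR = 0.035        # oxygen consumption rate, kg/h
O2_TANK_MASS_FRACTION = 0.5   # stored O2 as a fraction of total tank mass

# Energy: watts x hours gives watt-hours; divide by specific energy for mass.
energy_needed_wh = SUIT_POWER_W * TREK_HOURS
battery_mass_kg = energy_needed_wh / BATTERY_WH_PER_KG

# Oxygen: consumption rate x hours gives O2 mass; scale up for tankage.
o2_needed_kg = O2_KG_PER_HOUR * TREK_HOURS
o2_system_mass_kg = o2_needed_kg / O2_TANK_MASS_FRACTION

total_mass_kg = battery_mass_kg + o2_system_mass_kg

print(f"Energy required: {energy_needed_wh:,.0f} Wh -> {battery_mass_kg:.1f} kg of storage")
print(f"Oxygen required: {o2_needed_kg:.1f} kg -> {o2_system_mass_kg:.1f} kg with tankage")
print(f"Total carried:   {total_mass_kg:.1f} kg (backpack or wheelbarrow?)")
```

Swap in whatever specific-energy and tankage assumptions you find credible, and the wheelbarrow-versus-backpack question more or less answers itself.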

Even with an effective search engine (which I doubt exists anymore), I could not have done this. It would have taken me hours of research to find the necessary leads on near-term technology, then more hours of work to figure out how to apply it to the specific case here, and hours more of calculations and analyses to determine whether I could make it work and how to do so. I know this, because that’s how it went with the ~80% of this simulation that was built before I used Grok.

AI (LLM) is doing exactly what I predicted SI would do: supplement, then replace, search engines for finding, authenticating, and condensing information found on the internet. And it is doing so at roughly the point in time when, in the old fictional universe, I predicted it would. The difference is that the “revolution” is happening much faster in the real world than I anticipated: I foresaw it taking about fifteen years to evolve from initial availability around 2020 to something approximating the form and ubiquity of the technology encountered in the stories set around 2050.


On the Obsolescence of Search Engines (Part 1)

In the prior fictional universe, one of the key technologies was a personal electronic device for communication, computing interface, and information access (plus, for Mars use, some basic safety monitoring of radiation, air quality, and other hazards unique to that environment). I worked out most of the details of this concept in January-May 2002, well before “smartphones” became a thing – the idea originated, as I recall, as a projection of where Palm and other phone-PDA combinations then popular might evolve over the next 50 years.

One key element I added, however, was a form of AI that I called “simulacrum intelligence” to distinguish it from what we would now call artificial general intelligence, or AGI. While this was useful for storytelling reasons, I felt it also addressed a problem that I could see on the horizon even in 2002: the erosion of information quality and organization on the internet.

At that time, it was already apparent that people were using scripting/bots to autogenerate garbage information and garbage websites. Scams, yes, in many cases, but in many others just gibberish, the purpose of which I never understood. The obvious extrapolation was that, even with new and evolving tools to filter this crap out, the internet would be increasingly clogged with junk and fraud, noise that would gradually overwhelm signal to the point that it became useless for all but the most pedestrian functions. (In effect, I’d hit on a variation of the Dead Internet Theory years ahead of time, as well.)

I saw two major outcomes arising from this possibility:

  1. The gradual segmenting of the wide-open, anything-goes internet of the time into controlled-access subunits that would keep the trash out and prohibit its creation within their individual purviews. The best metaphor I can think of here is having subscriptions to multiple streaming services – you aren’t limited to one, there are many overlapping options, and it’s not a centrally-controlled (i.e. government-run) structure. But of necessity, the variety and novelty of the information you’re able to access through any given service is limited by its specialization, business purpose, etc., and each would have its own priorities when it comes to censorship, I mean “misinformation regulation” – what topics it would permit to be discussed, and what it would not. (Note the similarity of this last point to the mass censorship applied by social media, search engines, commenting tools, and sharing sites, particularly in the 2019-2024 period but continuing in somewhat less blatant form today – an obvious outcome I did not foresee but really should have; instead, I foresaw the failure of social media as a whole.)
  2. The emergence of primitive or early forms of AI, initially as a tool to wade through the garbage to find real information and ascertain its reliability and accuracy. “Simulacrum intelligence” wasn’t meant to be true, self-aware AI (and in the old fictional universe was never intended to be), but a research assistant one could use to overcome the shortcomings of search engines. Once invented, however, the foundational technology proved useful in many areas outside of information access (much like microprocessors, which, once invented, found their way into everything from machine tools to toasters in the 1970s and 1980s).

#2 is really the point of this post. I resisted using any of the emerging AI myself, mainly because of my personal aversion to things I see as trends or fads – there was so much hype about it over the past couple of years that it completely turned me off to the technology (in practice, but not in fiction). In short, I just got sick of hearing about it.

But over the past month or so, while doing technical research for “Beneath a Silent Sky”, I’ve been forced to reevaluate this position. Not because I was warming to the idea of AI, but because the same trend that I predicted back in 2002, and bemoaned around 2018 as having already arrived, has to my eye gone asymptotic over the past year.

Example: I was just now looking for refill leads for one of my engineering pencils. Specifically, 0.35mm refill leads, which I specified in the search engine query. The first two pages of results were for 0.5mm leads, 0.3mm leads, art supply stores (listed without any reference to pencil leads, and some of which didn’t even carry them), lead blocks for casting, lead testing services, lead detox pills of dubious quality and safety, and the like. I went to Amazon, put in a very specific search query (knowing that Amazon is worse than the search engines in this regard), hoping that there I could at least limit the results to office products or art supplies, and again got page after page of results with the most tenuous relationship to the search query – or no obvious relationship at all. (With the added annoyance that most of the products were in fact the same chintzy-quality item from China, marketed with the same Chinglish text and identical images by 2-3 dozen different retailers with gibberish names – like every other product on Amazon nowadays.)

A small indication of a very large problem, one I’ll discuss in following posts.

Matthias Adler on Leadership

You can’t be a leader, let alone a good leader or an effective one, if your first instinct is to respond to any input from others by dismissing or demeaning them personally. When you make it clear to others that any suggestion, constructive criticism, complaint, or inspiration will be met with condescension, they will learn that their thoughts are not wanted or welcome, and will oblige your demonstrated wishes by keeping such to themselves – to your detriment, and the detriment of your organization as a whole.

– Matthias Adler