    AI content, with a human face

    A recent poll on LinkedIn from Alex Ewerlöf asked: “Would you read AI-generated posts and articles that are attached to a human name and picture?”

    Photo by Vadim Bogulov on Unsplash

    The poll offered four choices:

    • Yes
    • Only if the human reviewed
    • Only if the human wrote the draft
    • No

    My first thought was: how would we know how much the human reviewed or wrote?

    I see three scenarios:

    • No clear indication of AI use, but we know, somehow
    • Clear indication of AI use, so we know
    • No clear indication of AI use, and we don’t know AI’s involvement

    Scenario one: No clear indication of AI use, but we know

    What if the site doesn’t clearly indicate AI use, and we find out because someone let us in on a secret? My gut feeling (ha, one of those human things!) says that would bother me. If the posts appear to be completely human-written but I later find out they are entirely AI-generated, that might feel like a betrayal of trust.

    Would it be, though? Let’s take a specific example: Beyoncé.

    What if the posts appear to be from Beyoncé?

    If the posts use her name or image without her involvement, her lawyers will be in touch.

    But what if she’s having AI write posts on her behalf? Is that okay, or would that be letting the fans down? I mean… famous people already have staff to write stuff for them. Setting aside the question of quality for the moment (we’ll get to that), does it matter if the “staff” is an AI?

    What if the posts do not appear to be from a specific real person?

    Fake name, AI-generated human photo. Nobody’s claiming that the writing is from a specific human. They’re just implying that the writing is from a human. And it isn’t.

    At least if the posts come from Beyoncé, a real human owns the writing. If she or her people check the content before posting, it has some authenticity.

    I imagine few people are foolish enough to post unreviewed AI content under their real names and photos.

    But if the name and picture are for a person who doesn’t exist, who owns the content? Bogus names and photos misleadingly imply human judgment and experience.

    I suspect plenty of people are willing to post unreviewed AI content (check out the incident of the bogus Halloween parade!) under fake names and photos.

    Scenario two: Clear indication of AI use

    What if a disclaimer on every post clearly indicates, “AI wrote this post, but Beyoncé or her team reviewed it before posting”? I like this less than if they pass it off as Beyoncé’s own writing. I don’t want to know there’s a ghostwriter!

    For the fake name and photo, what if a disclaimer on every post reads, “Artie Int is not a real person, the photo and content are entirely AI-generated”? I find that less distasteful. I might not want to read such posts, but that’s the quality question that we keep dodging.

    Let’s stop dodging, because the quality concern is important.

    Scenario three: No clear indication of AI use, and we don’t know

    As far as we know, this is real content – unless there’s something about the content that gives it away. And that something is generally quality.

    I know people claim there are easy ways to tell. “If it uses an em dash, it’s always AI,” people say, offending every human throughout history who has ever used an em dash.

    But if you look up “how to tell writing is AI,” you’ll get some telltale signs.

    A screenshot of a search result for “how to tell if writing is AI,” cut off mid-sentence at the bottom of the image, which begins: “AI-generated writing can be identified through a combination of manual checks for specific stylistic patterns and the use of AI detection tools, though no single method is entirely foolproof. Human writing often features natural variations, personal touches, and occasional imperfections that AI struggles to replicate.”

    AI telling me how I can tell writing is AI.

    How to tell writing is AI:

    • Formulaic and repetitive structure
    • Monotonous tone and style
    • Lack of depth and specificity
    • Unnatural phrasing and vocabulary
    • Inaccurate or outdated information
    • Flawless grammar and spelling
    • Uniform sentence length
    • Use of the em dash

    Okay, just kidding about the em dash; that wasn’t on the list. But as a writer, I learned to avoid all of the rest.

    So, for scenario three, assuming the topic is interesting and the writing is of good quality, I’d read it. I’ll skip bad writing regardless of its source.

    That said, AI is no match for a skilled human writer.

    Code smells

    Developers have a term for similar telltale signs of trouble in code: code smells. A few examples (again, courtesy of AI):

    • Duplicate code
    • Long methods
    • Large classes with too many responsibilities
    • Unused or unreachable code
    • Comments to explain poorly written code

    Here’s the thing: code smells aren’t coding errors. Your code might still work even with all these code smells.
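    To make that concrete, here’s a small, purely hypothetical Python sketch (the function names and discount numbers are invented for illustration). It runs and returns correct results, yet it reeks of several of the smells above: duplicate code, unused code, and a comment apologizing for unclear code.

        # Hypothetical order-total code: it works, but it smells.

        def calculate_total(prices):
            # Smell: the discount logic below is copy-pasted in the next
            # function instead of being factored into one shared helper.
            total = sum(prices)
            if total > 100:
                total -= total * 0.10   # 10% discount on large orders
            return total

        def calculate_member_total(prices):
            # Smell: duplicate code -- nearly identical to calculate_total.
            total = sum(prices)
            if total > 100:
                total -= total * 0.10   # 10% discount on large orders
            total -= total * 0.02       # members get a little extra off
            return total

        def legacy_discount(total):
            # Smell: unused code -- nothing calls this anymore,
            # but nobody has deleted it.
            return total * 0.95

        # Smell: a comment explaining poorly written code,
        # instead of renaming f, lst, and t to something meaningful.
        def f(lst):
            # t is the count of items over the discount threshold
            t = 0
            for x in lst:
                if x > 100:
                    t += 1
            return t

        print(calculate_total([60, 50]))         # prints 99.0
        print(calculate_member_total([60, 50]))  # a bit under 99.0
        print(f([120, 80, 250]))                 # prints 2

    Every line here does what it claims to do, which is exactly the point: smells are warnings about clarity and maintainability, not about whether the code runs.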

    Likewise, someone’s writing might still be technically correct, even if it is formulaic, monotonous, and unnatural.

    AI does neither writing nor coding well without human intervention. Human experience, wisdom, judgment, creativity, imagination… all of these are assets in both writing and software development.

    They are the key to getting us past “this writing says what I asked for, but it’s lifeless” and “this code does what I asked for, but it’s far from production-worthy.” They are the key to quality.

    In both writing and code, there should be real humans who ensure high quality and take ownership of the results. Not just human-like names and photos.