AI Workforce

What a $30 Vinyl Record Tells Us About the Value of Humans in the AI Era

Five months ago I started buying records again. One hundred and twenty-five records later, I have read fourteen behavioral-economics studies and noticed that the same psychology that explains why people pay for vinyl also explains why companies should value their humans as AI capability accelerates. The maximalist version of the one-person, billion-dollar company prediction is not where this ends up.

By Harrison Painter · May 3, 2026 · Updated May 3, 2026 · 14 min read
The first record I bought as a kid was KISS Alive II. The first record I bought again at 55 was an original Terre Haute pressing of the same album.

Behavioral economics shows people consistently pay more for physical goods than digital, prefer humans for symbolic and identity-laden work, and accept AI for objective and utilitarian work. A field experiment with more than six thousand customers showed that disclosing chatbot identity dropped purchase rates by 79.7 percent on identical-quality outcomes. The composition that works pairs AI for utilitarian work with humans for symbolic and relational work. The 7 Levels of AI Proficiency is the operational instrument for making that composition decision visible.

The vinyl puzzle

Last Thanksgiving I drove east through Indianapolis to a record store. The day was overcast in that way late November can get in central Indiana, the sky the color of cement, the parking lot half empty. I went in. I walked the bins. I came out with a copy of KISS Alive II.

I held it on the way to the car.

The copy was an original Terre Haute pressing, made in 1977 about an hour and a half west of where I was standing. Same gatefold. Same picture of the band on stage with the fire and the fireworks. It was the first record I had ever bought as a twelve-year-old kid. It was now the first one I had bought again at fifty-five.

I had been working alongside AI agents for the better part of a year. I was burned out in a way I had not been burned out before. I needed a month off the screens. Not off work. Off the part of work where the next decision lives inside a chat window. So I did the only thing that felt like the opposite of that. I bought a record I could stream for almost free, on a phone in my pocket, anywhere I wanted, at no marginal cost.

Five months later, I have one hundred and twenty-five records. I have paid as little as five dollars for some and almost a hundred for others. I play them on a vintage stereo I tracked down on Facebook Marketplace, the dream system I had wanted as a kid and never owned. Most Sundays my wife and I sit and listen to four or five sides, no phones in the room. The record stores in my regular rotation are Karma Records, Indy CD & Vinyl, Luna Records, and Take Care Records. I committed to one used and one new record per month, supporting younger artists like Geese, Ratboys, Courtney Barnett, and The Lone Bellow alongside rediscovering Paul McCartney and Wings, early Deep Purple, Miles Davis, and Art Blakey.

(Other writers have noted vinyl's revival as a metaphor for human creativity in the AI era; this piece extends the parallel into business and workforce, with the underlying behavioral science attached.)

The puzzle is the math. I am paying real money for a harder-to-store, slower-to-access version of music I can have for almost free. I am not alone. The Recording Industry Association of America has reported vinyl revenue exceeding CD revenue every year since 2020, and 2024 marked the eighteenth consecutive year of vinyl sales growth. None of these buyers are confused about the existence of Spotify.

And the puzzle gets stranger. Industry surveys have reported that close to half of vinyl buyers do not own a turntable. People are buying records they cannot listen to.

So why?

Behavioral economists have actually run this experiment. Ozgun Atasoy and Carey Morewedge, working at Boston University, wanted to know whether the physical form of an identical good changed how much people valued it. They published five experiments in the Journal of Consumer Research. Same content, same artist, same album, same movie. One version was digital. One version was physical. People valued the physical versions more, were willing to pay more for them, and were more likely to purchase them, every time. The premium did not come from any technical advantage of the medium. It came from being able to hold the thing.

Jon Pierce, Tatiana Kostova, and Kurt Dirks have spent two decades developing the related construct of psychological ownership. They argue that people experience a thing as theirs when three conditions show up: control over it, intimate knowledge of it, and investment of self into it. A vinyl collection earns all three. You choose the records. You learn each pressing. You spend time and money on the system. Your shelf becomes a record of who you have been listening to for years.

Joann Peck and Suzanne Shu wanted to know whether merely touching an object increased a person's sense of owning it, before any purchase. Their Journal of Consumer Research paper showed it did. Touching a thing changed how a person felt about owning it.

And then there is the IKEA effect, named by Michael Norton, Daniel Mochon, and Dan Ariely in 2012. People place a higher value on things they had a hand in making. The effect held only when the labor produced a successfully completed object. Failed labor did not increase value.

Pulling a record out of its sleeve. Cleaning the surface. Setting it on the platter. Lowering the needle. The micro-ritual is fifteen seconds of effort that produces nothing Spotify cannot beat in zero seconds. But the fifteen seconds is the point. The hand contact, the visible artifact, the small assembly of the listening occasion. All of it adds up.

The puzzle is not really a puzzle. People pay for records because the record carries something the file cannot carry. Presence. Ownership. Ritual. The continuity of a self over time.

That is one thing. The harder question is whether any of this transfers.

The same constructs show up in business relationships

The same constructs are all over how companies and people work together.

A standing weekly meeting with an account manager is a ritual artifact. The named account manager is a psychological-ownership artifact. A handshake in a lobby is a haptic artifact. The shared Google Doc with both companies' annotations is an investment-of-self artifact. None of these are necessary in the strict information-transfer sense. The status update could be an email. The account manager could be a chat queue. The handshake could be skipped. The doc could be one company's intake form.

But they are not skipped, in the relationships that work. Companies that strip them out tend to see drops in trust, retention, and renewal rates that no efficiency calculator predicted. The same psychological mechanics that make a record collection feel like yours make a vendor relationship feel like yours.

Consider the employer-employee relationship. The named manager is a psychological-ownership artifact. The standing one-on-one is a ritual artifact. The team off-site, the swag with the company logo, the desk that someone sits at every day, every one of these is an investment-of-self artifact in the Pierce, Kostova, and Dirks framework. None of them are technically required for an employee to do the work. All of them are correlated with whether the employee stays, refers other people, and does the work as if the company belonged to them.

The reason the constructs travel from the consumer side to the business side is that the underlying psychology does not know the difference. Psychological ownership is a feature of how humans process attachment to objects, work, organizations, and ideas. The construct lives in the person, regardless of what the person is becoming attached to. The vinyl record and the long-time vendor and the named manager all activate the same circuitry.

That is the bridge. If those constructs explain why I will pay thirty dollars for Kind of Blue on vinyl when I have unlimited streaming for almost free, the same constructs explain why a company should think hard about what it is actually buying when it replaces the sales rep with a chatbot, the recruiter with a screening algorithm, the customer-service voice on the phone with a synthesized one.

The chatbot saves money on the unit transaction. The question is what it costs on the relationship.

What the AI-vs-human research actually shows

This is the load-bearing section. There is now a body of behavioral research on when people prefer AI, when they prefer humans, and what the substitution does to behavior. The research does not say AI is bad and humans are good. It says the choice is task-dependent, and the dependency is measurable.

Noah Castelo of Columbia Business School, working with Maarten Bos and Donald Lehmann, wanted to know why some people were comfortable letting algorithms make decisions for them while others refused. They published "Task-Dependent Algorithm Aversion" in the Journal of Marketing Research in 2019. The split they found was clean. People trusted algorithms for objective tasks, the kind with a verifiable correct answer. They refused algorithms for subjective tasks, the kind that depended on taste or values or judgment. Forecasting a stock price felt like an objective task to most respondents, and the algorithm was welcome. Picking a date felt like a subjective task, and the algorithm was rejected.

Chiara Longoni and Luca Cian extended that finding in nine studies published in the Journal of Marketing. They called it the word-of-machine effect. People preferred AI recommendations when the consumption goal was utilitarian, like finding the most fuel-efficient car or the longest-lasting battery. They preferred human recommendations when the consumption goal was hedonic, like which movie would move them emotionally or which restaurant would feel right for an anniversary. The effect held across product categories.

Now the cleanest empirical anchor in the entire stack.

79.7%

Drop in purchase rate when chatbot identity was disclosed before an outbound sales call. Field experiment, n > 6,200 customers at a Chinese financial-services company. Undisclosed chatbots had performed as well as proficient human reps on the same product.

Source: Luo, Tong, Fang, Qu (2019), "Frontiers: Machines vs. Humans," Marketing Science 38(6).

Xueming Luo, working with Siliang Tong, Zheng Fang, and Zhe Qu, ran a field experiment with a Chinese financial-services company, published as "Frontiers: Machines vs. Humans" in Marketing Science. More than six thousand customers received outbound sales calls. Some calls came from human reps. Some came from AI chatbots that were disclosed as chatbots before the conversation began. Some came from chatbots that were not disclosed.

The undisclosed chatbots performed as well as proficient human reps on purchase rates.

Then they disclosed the chatbot identity. Purchase rates dropped 79.7 percent.

The drop is published behavior, not commentary. A field experiment with more than six thousand observations and a published methodology. People who would buy the financial product from a human, and who would buy it from an AI they thought was a human, refused to buy it from an AI they knew was an AI. The transaction was the same. The outcome was the same. The product was the same. The disclosure was the variable.

Armin Granulo, Christoph Fuchs, and Stefano Puntoni then asked whether the human-vs-machine preference was stronger in symbolic consumption contexts than in utilitarian ones. They published the result in the Journal of Consumer Psychology. People preferred human labor over robotic labor more strongly when the consumption was symbolic, the kind that signals identity, taste, or values. The effect held independently of perceived quality differences. People wanted the human to have made it because the human having made it was part of what they were buying.

This is the direct vinyl analog. The vinyl record is symbolic consumption. The wedding photographer is symbolic consumption. The coach is symbolic consumption. The small batch of bourbon is symbolic consumption. People will pay a premium for human involvement in any of those even when an automated alternative is cheaper, faster, and technically equivalent on quality.

Martin Mende, with Maura Scott, Jenny van Doorn, Dhruv Grewal, and Ilana Shanks, ran seven experiments on what happens to consumers when service is delivered by humanoid robots. The result was published in the Journal of Marketing Research as "Service Robots Rising." Humanoid-robot service triggered compensatory consumer behavior, driven by an identity threat the consumer was not consciously aware of. People who interacted with the robot ordered more food, indulged more, and reported wanting to feel more human afterward.

Granulo, Fuchs, and Puntoni had earlier published a paper in Nature Human Behaviour on the asymmetry of how people felt about robotic job replacement. The headline finding: people preferred that humans replace humans, except when their own job was on the line, in which case they preferred a robot replace them. The asymmetry says something about how identity-laden the work relationship is. Humans being replaced by humans preserves the social order of work. Humans being replaced by robots disturbs it, except when the disturbance falls on someone else.

Yochanan Bigman and Kurt Gray wanted to know whether people accepted machines making moral decisions. They published "People Are Averse to Machines Making Moral Decisions" in Cognition. The aversion held across driving scenarios, legal sentencing, medical triage, and military targeting. People wanted humans in the loop on moral judgments even when machines could be shown to make those judgments more consistently.

For honest balance, the picture also includes counter-evidence. Berkeley Dietvorst, Joseph Simmons, and Cade Massey published "Algorithm Aversion" in 2015. They showed that people lose confidence in algorithms faster than they lose confidence in humans after observing identical errors. The algorithm gets one strike. The human gets several. That asymmetry is real.

But Jennifer Logg, Julia Minson, and Don Moore published "Algorithm Appreciation" in 2019, showing that in some forecasting contexts people actually preferred algorithmic advice over human advice, because they assumed the algorithm was less subject to bias. The appreciation held until the algorithm was visibly wrong, at which point Dietvorst's aversion kicked back in.

Humans are valued more than AI for symbolic, hedonic, relational, identity-laden, and morally weighted work. AI is preferred or treated as equivalent for objective, utilitarian, repeatable, and forecast-style work. The asymmetry is large enough to show up in field experiments at the 79.7 percent scale.

For an operator running an actual company, the question is no longer whether to use AI. The question is whether the work in front of you is closer to the symbolic side or closer to the utilitarian side. The answer determines whether the human in that role is a cost or an asset.

Where the analogy holds, bends, and breaks

The discipline of any analogy is naming where it stops working. Otherwise it becomes confirmation bias dressed up as insight.

Vinyl is discretionary consumer consumption. The buyer chose the record. The buyer also chose to spend the thirty dollars. There is no equivalent in B2B back-office workflows where the procurement decision is made by a finance committee against a cost-per-transaction spreadsheet, and the end user has no vote. The psychological-ownership constructs survive, but the people who experience the ownership are not the ones writing the check. That mismatch is real, and it explains why some companies make AI substitution decisions that maximize short-run unit cost and lose the relationship asset over twenty-four months.

Vinyl scarcity is real. Original pressings exist in finite numbers. The KISS Alive II Terre Haute pressing I bought left the plant in 1977, and there are not more of them being made. Human-labor scarcity in B2B is sometimes manufactured, sometimes structural, and sometimes neither. Skilled workers in tight markets are genuinely scarce. Customer-service-rep labor in markets with available labor pools is often not. Treating all human labor as equivalently scarce overstates the case for keeping humans in every role. Some roles are objectively well-suited to automation, and the math is straightforward.

Vinyl identity-display is visible. The record on the shelf is part of what guests see when they come over. The same is starting to be true of AI tooling on the corporate side. "We use Claude Opus for our analysis layer" is becoming a procurement-level identity claim, the way "we use SAP" was in the 1990s. The identity-display vector is no longer exclusively human. AI itself is becoming an identity-laden purchase, especially at the leadership level where the AI choice signals sophistication, capability, and forward posture. The vinyl/human-labor analogy holds for the symbolic consumption of services delivered. It bends when the AI itself is the symbolic artifact.

The analogy also breaks at the high end of capability. AI can now do certain knowledge-work tasks better than the median human professional, especially in synthesis, document review, code generation, and pattern recognition across large corpora. The behavioral preference for human work does not survive a large-enough capability differential, especially for objective tasks. A buyer who refused chatbot service in 2019 buys from one in 2026 without noticing, when the chatbot is good enough that the disclosure no longer creates the 79.7 percent purchase drop. Luo's experiment was a snapshot at a point in time. The capability frontier has moved. The disclosure penalty is probably smaller now. It is also probably still real for symbolic and identity-laden categories.

The honest read: the behavioral science supports the case for human work in symbolic, relational, identity-laden, and moral domains. It supports AI work in objective, utilitarian, and repeatable domains. The skill of the next decade for any operator is being able to tell which work in their company belongs in which category, and being honest about it.

The operator's call

I run LaunchReady as an Agentic Micro Company. One founder, AI agents, dashboards, the whole substrate that gets called the "one-person, billion-dollar company" when the conversation drifts toward the maximalist version of the prediction. I have actually run the experiment. I have replaced expensive functions with AI agents and watched the unit economics work. I am also the right person to say what comes next.

The maximalist version of the prediction is not where this ends up.

The reason is the behavioral economics. A company that strips out every human relationship and replaces it with the most cost-efficient AI substitute is solving the unit-cost problem and creating a different problem at the same time. The company is also signaling, to every customer and every employee, that the symbolic consumption layer of the relationship has been zeroed out. The customers who were paying for the human-touch portion of the offering will, in many cases, stop paying. The employees who were investing identity in their work will, in many cases, leave or disengage. The cost savings will look real on the dashboard for one quarter and look like a different problem on the dashboard four quarters later.

The composition is what works. AI for the work that is utilitarian, objective, and repeatable. Humans for the work that is symbolic, relational, identity-laden, and moral. A leadership team that can tell those categories apart, in the specific context of its specific business, is doing the work that nothing else can do for them.

The 7 Levels of AI Proficiency is the operational instrument I built for this read. Seven defined stages anchored in observable behavior, from first awareness of AI through full operational orchestration of human-AI workflows. The instrument is currently at version one. It has face validity, an active item bank, and a published validation roadmap. Full psychometric validation is the next phase of work, not a finished claim. Free assessment at assess.launchready.ai. Under ten minutes. The output is a level placement, a personalized read on the dimensions where capability is strong and the dimensions where it is thin, and a starting point for the conversation about which work in the company belongs in which category.

The point of measuring proficiency, in the framework, is to make the composition decision visible. A leadership team running the assessment across itself sees, often for the first time, where the actual reads converge and where they diverge. The company that walks into the AI substitution decision with that read is doing different work than the company that walks in with a dashboard and a hunch.

I am back at home now, on a Sunday afternoon in May. The KISS Alive II Terre Haute pressing is on the platter. My wife is in the room. The phones are in another room. The record is forty-nine years old. The needle is dropped. The pops and the crackles come back. Then everything else does too.

That is the value of humans in the AI era. The behavioral science says it. The Sunday afternoon proves it.

That is what the composition costs and what it is worth.

Related reading: The Two-Person, $1.8 Billion Company. The 7 Levels of AI Proficiency framework. The White-Collar Factory manifesto.

Sources

  1. Atasoy, O., & Morewedge, C. K. (2018). Digital Goods Are Valued Less Than Physical Goods. Journal of Consumer Research, 44(6), 1343-1357. doi.org/10.1093/jcr/ucx102
  2. Pierce, J. L., Kostova, T., & Dirks, K. T. (2001). Toward a Theory of Psychological Ownership in Organizations. Academy of Management Review, 26(2), 298-310. doi.org/10.5465/amr.2001.4378028
  3. Pierce, J. L., Kostova, T., & Dirks, K. T. (2003). The State of Psychological Ownership: Integrating and Extending a Century of Research. Review of General Psychology, 7(1), 84-107. doi.org/10.1037/1089-2680.7.1.84
  4. Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-Dependent Algorithm Aversion. Journal of Marketing Research, 56(5), 809-825. doi.org/10.1177/0022243719851788
  5. Longoni, C., & Cian, L. (2022). Artificial Intelligence in Utilitarian vs. Hedonic Contexts: The "Word-of-Machine" Effect. Journal of Marketing, 86(1), 91-108. doi.org/10.1177/0022242920957347
  6. Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err. Journal of Experimental Psychology: General, 144(1), 114-126. doi.org/10.1037/xge0000033
  7. Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm Appreciation: People Prefer Algorithmic to Human Judgment. Organizational Behavior and Human Decision Processes, 151, 90-103. doi.org/10.1016/j.obhdp.2018.12.005
  8. Norton, M. I., Mochon, D., & Ariely, D. (2012). The IKEA Effect: When Labor Leads to Love. Journal of Consumer Psychology, 22(3), 453-460. doi.org/10.1016/j.jcps.2011.08.002
  9. Peck, J., & Shu, S. B. (2009). The Effect of Mere Touch on Perceived Ownership. Journal of Consumer Research, 36(3), 434-447. doi.org/10.1086/598614
  10. Mende, M., Scott, M. L., van Doorn, J., Grewal, D., & Shanks, I. (2019). Service Robots Rising: How Humanoid Robots Influence Service Experiences and Elicit Compensatory Consumer Responses. Journal of Marketing Research, 56(4), 535-556. doi.org/10.1177/0022243718822827
  11. Granulo, A., Fuchs, C., & Puntoni, S. (2019). Psychological Reactions to Human Versus Robotic Job Replacement. Nature Human Behaviour, 3(10), 1062-1069. doi.org/10.1038/s41562-019-0670-y
  12. Granulo, A., Fuchs, C., & Puntoni, S. (2021). Preference for Human (vs. Robotic) Labor Is Stronger in Symbolic Consumption Contexts. Journal of Consumer Psychology, 31(1), 72-80. doi.org/10.1002/jcpy.1181
  13. Luo, X., Tong, S., Fang, Z., & Qu, Z. (2019). Frontiers: Machines vs. Humans: The Impact of Artificial Intelligence Chatbot Disclosure on Customer Purchases. Marketing Science, 38(6), 937-947. doi.org/10.1287/mksc.2019.1192
  14. Bigman, Y. E., & Gray, K. (2018). People Are Averse to Machines Making Moral Decisions. Cognition, 181, 21-34. doi.org/10.1016/j.cognition.2018.08.003
  15. Voraritskul, M. (2024). Vinyl's Comeback: A Lesson for Human Creativity in the AI Age. The VerifiedHuman Collective (Medium). Acknowledged as prior treatment of the vinyl-creativity parallel; the present article extends the parallel into business and workforce with the underlying behavioral science attached.

Frequently Asked Questions

Why do people pay for vinyl records they could stream for free?

Five experiments by Atasoy and Morewedge (2018) in the Journal of Consumer Research showed people consistently pay more for, and are more willing to purchase, physical versions of identical goods. The premium comes from psychological ownership (Pierce, Kostova, Dirks), the IKEA effect (Norton, Mochon, Ariely 2012), the touch-ownership link (Peck and Shu 2009), and ritual investment, not from any technical advantage of the medium.

Does behavioral economics actually support keeping humans in business roles?

Yes, in specific categories. Castelo et al. (2019) and Longoni and Cian (2022) showed people prefer humans for subjective, hedonic, and identity-laden tasks; they accept AI for objective, utilitarian tasks. Granulo, Fuchs, Puntoni (2021) found the human preference is strongest in symbolic consumption contexts.

What is the 79.7% chatbot disclosure finding?

Luo, Tong, Fang, and Qu (2019) ran a field experiment with more than six thousand customers at a Chinese financial-services company. Undisclosed AI chatbots performed as well as proficient human reps on purchase rates. When the chatbot identity was disclosed before the conversation, purchase rates dropped 79.7%. Same product, same script, same outcome. Only the disclosure changed.

Will the one-person billion-dollar company prediction come true?

Probably not in the maximalist version. Companies that strip out every human relationship to maximize AI substitution are solving a unit-cost problem and creating a relationship-asset problem at the same time. The composition that works pairs AI for objective and utilitarian work with humans for symbolic, relational, identity-laden, and moral work. The 7 Levels of AI Proficiency framework was built to make that composition decision visible.

What is The 7 Levels of AI Proficiency?

A seven-stage framework anchored in observable behavior, from first awareness of AI through full operational orchestration of human-AI workflows. The instrument is at version one, with face validity, an active item bank, and a published validation roadmap. Free assessment at assess.launchready.ai.

Harrison Painter
AI Business Strategist. Founder, LaunchReady.ai and AI Law Tracker.

Harrison helps teams build AI systems that cut cost and grow revenue. Nearly 20 years of business experience. 2.8M YouTube views. Founder of LaunchReady.ai and the 7 Levels of AI Proficiency framework.

Connect on LinkedIn

Find your AI Proficiency level

The free 7 Levels of AI Proficiency assessment places you across seven stages of AI capability. Under ten minutes. Research-backed scoring with a published validation roadmap.

Get the weekly briefing

LaunchReady Indiana delivers AI news, compliance updates, and case studies for Indiana leaders. Every Tuesday. Five minutes.

Subscribe free