Chapter 55: The Hearing
The scandal broke on a Thursday in October 2014, and it started with a university student in Busan who had used Aria’s task detection API to do something it was never designed to do.
His name was Kang Minsoo. He was a computer science graduate student at Pusan National University, and he had discovered that Aria’s task detection API—the same API that analyzed email patterns and calendar schedules to organize work tasks—could be repurposed to monitor an individual’s daily routine with frightening precision. By feeding the API a target person’s public social media posts, calendar invitations, and any emails he could access, Minsoo had built a stalking tool that predicted where his ex-girlfriend would be at any given hour of the day.
The ex-girlfriend, a journalism student named Yoo Seoyeon, had noticed the pattern—Minsoo appearing at her favorite coffee shop, her gym, her study group—and filed a police report. The investigation revealed the Aria API integration, and the story went national within hours.
“Aria’s AI Used for Stalking: How Korea’s Most Popular Productivity App Became a Surveillance Tool”
The headline was in every major Korean newspaper by Friday morning. By Friday afternoon, it was international—Reuters, the BBC, TechCrunch. By Friday evening, three members of the National Assembly had called for an investigation into Aria’s data practices, and the Korea Communications Commission had opened a formal inquiry.
Dojun read the first article at 5 AM, sitting in his home office while Hajun slept in the next room—the particular 5 AM quiet of a household with a toddler, where silence was precious and fragile and could shatter at any moment.
The article was accurate. Every fact was correct. Minsoo had used Aria’s public API exactly as described. The task detection engine, designed to find patterns in work-related data, worked equally well at finding patterns in personal data. The same algorithm that organized your meeting schedule could, with different inputs, map your daily movements.
The technology was not at fault. The technology was doing exactly what it was designed to do—detecting patterns and predicting behavior. The fault was that Aria had made this capability available through a public API without adequate safeguards against misuse.
“This is different from the privacy crisis,” Hana said when she joined him in the home office at 6 AM, Hajun on her hip, her face carrying the particular gravity of someone who understood that this crisis was not about data collection but about something deeper. “The privacy thing was about what we knew. This is about what our technology enables. We didn’t stalk anyone. But we built the tool that made stalking easier.”
“We built a pattern detection engine,” Dojun said. “The same engine that helps a lawyer organize case files can help a stalker predict movements. The technology is neutral. The application isn’t.”
“Neutral technology is a myth, Dojun. Every tool has implicit values. A knife is neutral—but a kitchen knife is designed differently from a combat knife, because the designer chose to optimize for different outcomes.” She shifted Hajun to her other hip. “Our API was designed for productivity. But we didn’t design safeguards that prevented surveillance. That’s not neutrality. That’s negligence.”
She was right. He knew she was right because he had lived through this exact reckoning—not with Aria, but with Prometheus Labs, eleven years from now in a timeline that no longer existed. The AI ethics debate of 2025 had consumed Prometheus for two years and cost the company billions in regulatory compliance and reputational damage.
But this wasn’t 2025. This was 2014. The AI ethics conversation was barely beginning. The frameworks, the regulations, the societal consensus about responsible AI—none of it existed yet. They were among the first companies to face this question, and the answer they gave would shape the conversation for years to come.
“We need to go to the Assembly,” Dojun said.
“The National Assembly? They haven’t called us.”
“They will. Three members have already called for an investigation. If we wait to be summoned, we’re defensive. If we go voluntarily, we’re proactive.” He set down his coffee. “I want to testify. Voluntarily. In person. Before they ask.”
“Testify about what?”
“About what we built, what went wrong, and what we’re going to do about it. Not as a corporate defense—as a personal statement. ‘I built this technology. I’m responsible for how it’s used. Here’s what I’m doing to fix it.’”
“That’s admitting fault.”
“It is admitting fault. Because it is our fault. Not the stalking—that’s Minsoo’s crime. But the absence of safeguards? The API design that didn’t consider misuse? That’s ours.”
Hana was quiet for a long time. Hajun babbled in her arms, reaching for the coffee cup with the optimistic persistence of a toddler who believed all objects were meant for him.
“Okay,” she said. “We go to the Assembly. But we don’t just apologize. We bring a plan. A real, implementable, industry-leading plan for responsible AI. Not because the government is forcing us—because we believe it’s right.”
“That’s exactly what I was thinking.”
“I know. I married a time traveler who’s seen the future of AI ethics. For once, your impossible knowledge is exactly what we need.” She handed Hajun to Dojun. “Hold the baby. I’m going to call the design team. We’re building a responsible AI framework. And it’s going to be beautiful.”
The Aria Responsible AI Framework was designed in two weeks and implemented in four. It was, by any measure, the most comprehensive AI ethics policy in the Korean technology industry—and one of the first in the world.
The framework had four pillars:
Transparency: Every AI-powered feature in Aria would display a clear indicator of what data it was using, what predictions it was making, and why. Users could see the algorithm’s reasoning, not just its output.
Consent: All behavioral analysis required explicit, informed, revocable consent. Opt-in only. No default collection. No buried checkboxes.
Limitation: The API would include use-case restrictions. Pattern detection could only be applied to the user’s own data. Third-party data analysis required both parties’ consent. Anti-surveillance checks would flag API calls that matched known misuse patterns (a sketch of one such check follows the four pillars).
Accountability: An independent ethics board of three external members—an ethicist from Seoul National University, a privacy lawyer, and a consumer advocate—would review Aria’s AI practices quarterly and publish public reports.
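Of the four pillars, Limitation was the one that answered the Busan case most directly. Below is a minimal sketch of how such a check might be enforced at the API boundary; the class, field names, and misuse heuristics here are illustrative assumptions made for this chapter, not Aria’s actual implementation.

```python
# Illustrative sketch only: names, fields, and heuristics are assumptions,
# not Aria's actual implementation.
from dataclasses import dataclass, field


@dataclass
class PatternRequest:
    """A call asking Aria's engine to run pattern detection over some data."""
    requester_id: str                                      # account making the call
    data_owner_id: str                                     # person whose data would be analyzed
    data_sources: list = field(default_factory=list)       # e.g. "email", "calendar", "social"
    consented_owner_ids: set = field(default_factory=set)  # owners who gave explicit consent


# Source combinations that look like routine-mapping rather than task detection.
SURVEILLANCE_LIKE_SOURCES = {"social", "location", "another_users_calendar"}


def check_request(req: PatternRequest) -> str:
    """Enforce the Limitation pillar: own data only, dual consent, misuse flagging."""
    # Pattern detection on someone else's data requires that person's consent.
    if req.data_owner_id != req.requester_id and req.data_owner_id not in req.consented_owner_ids:
        return "rejected: third-party data without the data owner's consent"

    # Requests resembling known misuse patterns are held for human review.
    if SURVEILLANCE_LIKE_SOURCES & set(req.data_sources):
        return "flagged: matches a surveillance-style access pattern"

    return "allowed"


if __name__ == "__main__":
    stalker_like = PatternRequest(
        requester_id="acct_123",
        data_owner_id="user_456",
        data_sources=["another_users_calendar", "social"],
    )
    print(check_request(stalker_like))  # rejected: third-party data without the data owner's consent
```

In a design like this, the own-data and consent rules are hard rejections, while a misuse-pattern match is only a flag, because heuristics of that kind produce false positives and need a human in the loop.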
“This goes beyond what any regulator is asking for,” David Yoo said during a board call. “The Assembly investigation hasn’t even started. You’re self-imposing restrictions that could limit your product capabilities.”
“They will limit some capabilities,” Dojun acknowledged. “The API restrictions will reduce third-party integration options by approximately 30%. The anti-surveillance checks will add latency to certain API calls. The ethics board will have veto power over new features that raise concerns.”
“And you’re doing this voluntarily.”
“Because it’s right. And because if we don’t do it voluntarily, the government will do it for us—badly, slowly, and in a way that doesn’t understand the technology.” He paused. “David, I’ve seen what happens when technology companies wait for regulation instead of leading it. The regulation is always worse than self-governance. Always. The companies that lead the ethics conversation get to shape it. The companies that resist it get shaped by it.”
“You’ve ‘seen’ this?”
“I have strong intuitions about regulatory dynamics.”
“Your intuitions continue to be suspiciously specific.” But David didn’t argue further. “Fine. Implement the framework. And make sure the Assembly testimony is bulletproof.”
The National Assembly hearing was held on a Wednesday in November 2014, in a committee room that smelled of old wood and institutional coffee. Fifteen members of the Science, ICT, and Future Planning Committee sat in a raised semicircle. Two dozen journalists occupied the gallery. Camera crews from four networks lined the walls.
Dojun sat at the witness table alone. No lawyers. No PR handlers. No corporate delegation. Just a twenty-eight-year-old CEO in a dark suit, with a prepared statement and the weight of eight years of building something that had just been used to hurt someone.
“Chairman, committee members,” he began. “My name is Park Dojun. I am the co-founder and CEO of Aria. I’m here today because technology that my company built was used to stalk a young woman in Busan. I am here voluntarily, not because I was summoned, but because I believe the person who built the tool bears responsibility for how it is used.”
The committee room was very quiet.
“Aria’s task detection engine was designed to help people organize their work. It analyzes patterns in emails, calendars, and files to identify tasks and predict what the user needs. It was never intended for surveillance. But intent is not a defense. A knife maker whose knife is used to commit a crime is not a criminal—but a knife maker who never considers how the knife might be misused has failed a responsibility.”
He paused. Let the words settle.
“We failed that responsibility. Our API was open, powerful, and insufficiently safeguarded. We designed for capability without designing for consequence. That was a failure of imagination, and it was my failure as the leader of this company.”
He spent twenty minutes describing the Responsible AI Framework—the four pillars, the technical implementations, the independent ethics board. He showed the committee the anti-surveillance detection system, the consent mechanisms, the transparency indicators. He demonstrated, live, how a user could now see exactly what Aria’s AI was doing with their data—every prediction, every pattern, every inference, displayed in plain language.
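What the committee saw in that demonstration would have been a per-prediction disclosure. The record below is a hypothetical shape for such a disclosure, invented only to show the plain-language idea; the field names and example task are not Aria’s real schema.

```python
# Hypothetical transparency record; field names and contents are illustrative only.
import json

transparency_record = {
    "feature": "task_detection",
    "prediction": "Suggested task: prepare the Q4 budget review before Friday",
    "data_used": [
        "subject lines of work emails from the last 7 days",
        "calendar events scheduled this week",
    ],
    "reasoning": (
        "Three recent emails mention 'Q4 budget', and a meeting named "
        "'budget review' is on Friday's calendar."
    ),
    "consent": {"behavioral_analysis": "opted in", "revocable": True},
}

# Rendered in plain language for the user, not buried in a settings page.
print(json.dumps(transparency_record, indent=2, ensure_ascii=False))
```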
“This framework is not a response to today’s hearing,” he said. “It was implemented before the hearing was announced. Not because we anticipated this specific incident, but because we knew—we should have known sooner—that powerful technology requires powerful responsibility.”
The Q&A was two hours long. Some committee members were hostile—”You profited from technology that endangered a woman’s safety.” Some were curious—”How does the anti-surveillance detection work?” Some were philosophical—”Where does corporate responsibility end and individual responsibility begin?”
Dojun answered every question. Honestly, thoroughly, without deflection. When he didn’t know the answer, he said so. When the question was about Minsoo’s criminal liability rather than Aria’s corporate responsibility, he deferred to the legal process. When a committee member asked whether AI should be regulated the same way pharmaceutical companies were regulated, Dojun said:
“AI is not a drug. But it affects people’s lives the way drugs do—it changes behavior, shapes decisions, and can cause harm if misused. The regulatory framework should reflect that. Not identical to pharmaceutical regulation, but with the same underlying principle: the burden of proof for safety should be on the maker, not the user.”
The hearing ended at 4 PM. Dojun walked out of the Assembly building into a November afternoon that was cold and gray and felt, somehow, like the beginning of something rather than the end.
His phone had 47 messages. He read them in the taxi home.
Hana: I watched the livestream. You were honest, clear, and human. The framework presentation was excellent—the transparency demo made three committee members lean forward. That’s a design win. I love you.
Seokho: Watched the hearing. Your testimony will be cited in every AI ethics paper for the next decade. You just defined the conversation. Also, your suit was surprisingly well-fitted. Hana’s influence?
Kim Taesik: Park. “The burden of proof for safety should be on the maker, not the user.” That sentence will be quoted in textbooks. I’m proud of you. — Kim T.
His mother: I saw you on TV! In the Assembly! You were wearing a suit! Why didn’t you tell me you were going to be on TV? I would have made you eat breakfast! Mrs. Kang recorded it on her phone. She says you looked thin but spoke well. I agree with both observations. Come Saturday. I’ll make your favorite.
And one message from Yoo Seoyeon—the woman who had been stalked. Forwarded through Jiyoung, who had reached out to offer support on behalf of the company.
Mr. Park. I watched your testimony today. I appreciate your honesty and your willingness to take responsibility. The technology didn’t hurt me—a person hurt me. But the technology made it easier. Your framework is a step toward making sure it’s harder for the next person. That matters. — Yoo Seoyeon
Dojun read her message three times. Then he put the phone away, looked out the taxi window at Seoul passing by—gray buildings, bare trees, ten million people going about their lives—and thought about what it meant to build something powerful and be worthy of the power.
In his first life, he had never faced this question. Prometheus Labs had been powerful, but its power had been contained within the corporate ecosystem—enterprise clients, business applications, the comfortable abstraction of B2B technology. Aria was different. Aria lived in people’s pockets. It read their emails, predicted their needs, learned their patterns. It was intimate in a way that enterprise software never was, and intimacy came with responsibilities that no business school taught.
“Technology makers are not neutral,” he had told the Assembly. “We are not just engineers. We are architects of how people live. And architects who build without considering the people who will live in their buildings are not architects. They are hazards.”
He meant it. In both lifetimes.
The taxi crossed the Han River. The water was dark, reflecting the city’s lights in wavering columns. Somewhere ahead, Hana was waiting with Hajun, and dinner, and the particular warmth of a home that contained the most important people in his world.
The hearing was over. The framework was in place. The conversation about AI ethics—the conversation that would define the next decade of technology—had begun, not in Silicon Valley or in a government office, but in a committee room in Seoul, started by a twenty-eight-year-old CEO who had lived twice and learned, the hard way, that the most important code was not the code that worked but the code that was worthy of the people who used it.
The japchae goes in the front. The safeguards go in the design. And the burden of proof goes on the maker.
Always.