How This Founder Trained His AI to Avoid Rickrolling Users

Flo Crivello was closely monitoring the performance of the AI assistants developed by his company, Lindy, when he encountered an unusual situation. A new client had asked for a video tutorial to help them learn the platform. To Crivello's surprise, the Lindy AI assistant sent one, which is when he realized something was amiss: Lindy has no video tutorial.

“We saw this and thought, ‘What video did it send?’ Then we realized, ‘Oh snap, this is a problem,’” Crivello explained.

The video sent to the client turned out to be the iconic music video for Rick Astley’s 1987 dance-pop classic, “Never Gonna Give You Up.” In other words, the client experienced a classic Rickroll—courtesy of an AI.

Rickrolling is a well-known bait-and-switch internet meme that dates back to 2007, when Rockstar Games released the highly anticipated “Grand Theft Auto IV” trailer on its website. The site buckled under the overwhelming traffic, so users began mirroring the video on platforms like YouTube. One 4chan user, however, posted a link to Rick Astley’s “Never Gonna Give You Up” instead. Seventeen years later, the meme persists, and the music video has amassed over 1.5 billion views on YouTube.

This internet prank is so widespread that it has inevitably been absorbed into large language models like ChatGPT, which powers Lindy.

“The way these models function is by predicting the most probable next sequence of text,” Crivello noted. “So it thinks, ‘I’m going to send you a video!’ Following that, what’s the most likely URL? YouTube.com. And then what comes next?”
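The dynamic Crivello describes can be sketched with a toy autocomplete loop. This is not Lindy's actual model, just an illustrative hand-built table of "most probable continuations": in real web text, `https://` is overwhelmingly followed by `www.youtube.com/`, and the single most-linked YouTube video is the Rickroll, so a greedy predictor that always picks the likeliest next token walks straight into the prank link.

```python
# Toy illustration (not a real LLM): each key maps to its most probable
# continuations, ordered by frequency. A greedy decoder always takes the
# first, mimicking how a language model completes a half-written URL.
NEXT = {
    "I'm going to send you a video! ": ["https://"],
    "https://": ["www.youtube.com/"],
    # The Rickroll video ID is heavily over-represented in web link data.
    "www.youtube.com/": ["watch?v=dQw4w9WgXcQ"],
}

def greedy_complete(start: str) -> str:
    """Extend `start` one token at a time until no continuation exists."""
    tokens = [start]
    while tokens[-1] in NEXT:
        tokens.append(NEXT[tokens[-1]][0])  # always the most probable option
    return "".join(tokens)
```

Running `greedy_complete("https://")` yields the full Rickroll URL: no step is malicious, but each "most likely next piece" compounds into the meme.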

Crivello shared that out of millions of responses, Lindy only Rickrolled clients on two occasions. However, addressing the error was essential.

“The astonishing part about this new era of AI is that to resolve this issue, all I needed to do was add a line to what we call the system prompt—which is present in every Lindy AI instance—and it simply stated: don’t Rickroll users,” he stated.
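A system prompt of this kind is just a block of instructions prepended to every conversation before the user's message. The sketch below shows the general pattern; the prompt wording and the `build_messages` helper are illustrative, since Lindy's actual prompt is not public, and only the final "don't Rickroll" line comes from Crivello's account.

```python
# Illustrative system prompt: the first two rules are hypothetical,
# the last line is the one-line fix Crivello describes.
SYSTEM_PROMPT = (
    "You are Lindy, a helpful AI assistant.\n"
    "Only send links you know exist.\n"
    "Don't Rickroll users."
)

def build_messages(user_text: str) -> list[dict]:
    """Prepend the shared system prompt to a user message, in the
    role/content format used by chat-style LLM APIs."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]
```

Because the system prompt rides along with every request, a single added rule changes the behavior of every Lindy instance at once, with no retraining.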

Lindy’s unintended Rickroll raises questions about how much internet culture will seep into AI models, since these systems are trained on vast amounts of data from the web. The incident is particularly striking because the AI reproduced such a specific piece of online behavior entirely on its own. Internet humor leaks into AI in other ways, too, as Google learned when it incorporated Reddit data, known for its user-generated and often satirical content, into its AI training: the model concluded that adding glue could help cheese adhere better to pizza.

“In Google’s case, it wasn’t necessarily generating falsehoods,” Crivello explained. “It was based on existing content—it’s just that the content was misleading.”

As large language models continue to evolve, Crivello believes instances like this will become rarer. He also emphasizes that correcting such issues is becoming increasingly straightforward. In Lindy's early days, if an AI assistant couldn’t fulfill a request, it would assure the user that it was working on it but ultimately failed to deliver (which, amusingly, sounds rather human).

“We found it very challenging to fix that problem,” Crivello acknowledged. “But once GPT-4 was released, we added a simple directive: ‘If you can’t complete the user’s request, just let them know.’ And that solved the issue.”

The good news for the client who got Rickrolled is that they may still be unaware of the mishap.

“I’m not even sure if the customer noticed,” he said. “We followed up promptly with the correct video link, and the client didn’t mention anything about the initial link.”
