Z Potentials | Zhenyu Shen: How an Art Toy Brand Built the World's Leading AIGC Model Platform
Z Potentials invited Zhenyu Shen, founder of Tensor.Art, to give a talk.
As a product visionary personally recruited by Yiming Zhang (founder of ByteDance) and a witness to ByteDance's meteoric rise, Zhenyu has been through two entrepreneurial journeys of his own—first TuChong, and now Islands.
In this conversation, Zhenyu shares his unique insights on the future of AI: “Every company will eventually become an AI company,” and “The AI revolution cannot be led by just a few.” He firmly believes that open-source models will dominate the future, noting that “technological secrets are flowing faster than ever.” This belief is also a key reason behind his strategic move to build an AI model platform in parallel, following the initial success of Islands.
As a platform that has already served over 100,000 model trainers and more than 500,000 models, how does Tensor.Art stand out amid fierce global competition? Zhenyu’s answer is to build a dual moat: “model scale and creator scale,” while firmly adhering to the business philosophy that “low prices lead to greater scale.” The “begin with the end in mind” mindset he learned at ByteDance enables him to “cut through short-term noise and see what is inevitably going to happen”—a perspective that continues to guide every decision he makes in the AI era.
In Zhenyu’s view, “AI technology will eventually become as fundamental and widespread as water and electricity,” and “the capabilities of a single large model are actually quite limited.” He believes we need a vast number of fine-tuned models to address specific niche scenarios. As he puts it, “AI will change everything in the next decade,” and Tensor.Art is his ticket to participate in this transformation.
Whether you're a tech entrepreneur, an AI practitioner, or a product manager, this conversation will offer you a glimpse into how a top-tier founder is building the next generation of AI infrastructure—at the intersection of technology and business—with long-term vision and an open mindset. Enjoy! :)
Every company will eventually become an AI company. There will no longer be a distinction between AI and non-AI companies, because AI will transform every aspect of how we build products and solve problems.
Tensor.Art is essentially our company Echo Tech’s ticket into the AI world. Through this entry point, we aim to establish our place in the AI ecosystem—both in the community and in model infrastructure.
95% of the model trainers on our platform don’t know how to code. As long as they’re interested in AIGC and understand some basic principles, they can train valuable models.
One key trigger for us to build Tensor.Art was the standardization of two core aspects of AI: the runtime environment for algorithms and the file format for models.
In the long run, the moat of a model-centric community will mainly lie in two dimensions: the scale of the models and the scale of the creators.
Chatbots and today’s agents are not the final form of AI applications. Future interactions will be more frequent, more fragmented, and capable of self-iteration.
AI technology will eventually become as fundamental and widespread as water and electricity—just like web technologies 20 years ago. Open-source and closed-source models will coexist, but the open-source model is more conducive to attracting global talent to participate in the AI revolution.
The “begin with the end in mind” mindset enables us to cut through short-term noise and see the things that are bound to happen, even if many people either aren’t ready to face them or think it’s still too early.
01 From a Peking University Coding Prodigy to a Serial Entrepreneur: A Startup Journey Driven by Both Business and Passion
ZP: Mr. Shen, could you start by introducing yourself? What are some key experiences from your past?
Zhenyu Shen: Sure, let me start with a bit of background. I've loved coding since I was a kid. In high school, I was admitted to Peking University through a recommendation based on my performance in computer science competitions, and later I went on to earn a graduate research position in the university’s AI lab within the computer science department. From the early days of classic machine learning algorithms to the rise of neural networks like CNNs and RNNs, and now the Transformer era, I’ve witnessed the entire evolution of AI algorithms as well as the commercialization and productization of AI technologies.
Driven by my passion for programming and the sense of accomplishment that comes from building products, I started my first company during undergrad—TuChong, a community for photography enthusiasts—which was later acquired by ByteDance. While at ByteDance, I reported directly to Yiming Zhang and worked on several products including Time Album, Douyin (TikTok in China), and Puff, during what turned out to be a critical phase in the company’s rise.
ZP: Why did you choose to start your second venture as an art toy company—Islands?
Zhenyu Shen: After leaving ByteDance, I founded an interest-based platform called Islands. Many people asked me why I chose to start a collectibles company after ByteDance, but in fact, what I’m really building isn’t a collectibles business—it’s a platform around young people’s hobbies and interests. We just happened to choose collectibles as the initial entry point (our first product was a WeChat Mini Program called Collectibles Tribe, the predecessor of Islands).
It was a decision driven by both personal passion and business thinking. To be honest, if I were purely after financial returns, I probably wouldn’t have started this company—Islands is not the most efficient path to profit. I’ve long been an early enthusiast and contributor in niche communities like photography, indie games, underground idols, and TCG (trading card games). Back at ByteDance, I had already been thinking about the potential of these interest-based verticals, but at the time each category had only a few million users at most—not big enough for a scale-driven company like ByteDance. After all, large user bases are essential to support highly efficient monetization models.
But in recent years, things have changed. These once-niche hobbies have gradually gone mainstream and become popular choices among the younger generation. When that kind of shift happens—when quantitative change turns into qualitative change—the opportunity has to be re-evaluated, and that’s why I chose to start a new venture in 2019. I believe every enthusiast has unmet needs that deserve to be addressed. Instead of making 1,000 separate products, why not build one multi-category community platform that can support hundreds or even thousands of hobby segments through an efficient shared backend and reusable product models?
That’s also why we named the company “Islands,” which means “A Thousand Islands”—each interest is its own island. Today, Islands covers dozens of categories and thousands of IPs, including collectibles, trading cards, anime merchandise, fan-made photo cards, murder mystery games, and retro gaming. In these areas, we’ve become the largest trading platform and interest-based community. We’ve also cultivated vibrant user communities in figure modeling, console gaming, tabletop RPGs, and original character creation (OC).
ZP: How has your experience at ByteDance influenced you?
Zhenyu Shen: When Yiming Zhang asked me if I wanted to join him, I said yes without even thinking. At the time, I actually had an acquisition offer from a publicly listed company on the A-share market—if I had taken it, I could have achieved financial freedom right away. But I still chose ByteDance. The main reason was that during my time building TuChong, I realized how important content distribution was, and I believed recommendation algorithms were the best solution for it.
My time at ByteDance completely changed me. When I was working on TuChong, I had a pure programmer mindset. I only focused on product features and user experience, with no real understanding of business models or organizational management. You could say Yiming Zhang personally taught me how to build products, set strategies, and become a real entrepreneur.
The most important things I learned were two core methodologies. The first is the importance of organization. A great company shouldn’t rely on a visionary CEO alone, but rather on building mechanisms that make it easier for the organization to consistently do the right thing—treating the company itself as a product to be developed. The second is the “begin with the end in mind” mindset. Every day, we face a lot of noise—short-term trends, competitor moves—but in the long run, much of that turns out to be irrelevant. Some things are bound to happen, yet many people either don’t want to face them or think it’s still too early.
Taking ByteDance as an example: as early as 2014, we were already discussing what the internet content ecosystem would look like once mobile internet grew the user base tenfold and smartphones had high-speed bandwidth and powerful cameras. What kind of content formats would we need? How would users consume content? Few people were thinking deeply about these questions.
This kind of thinking leads to a chain of discoveries. For instance, we all know that as smartphones get more powerful, content inevitably evolves from text to images to video. That may sound obvious now, almost like a cliché. But the real question was: what should that video look like? Long or short? Horizontal or vertical? UGC (user-generated) or PGC (professionally generated)? No one knew for sure. So Yiming chose an all-out approach to experimentation: we tried video on Toutiao first; that didn’t feel like enough, so we added video to Neihanduanzi; we found Neihanduanzi’s humorous tone too limiting, so we built Huoshan Video; and when we saw that Huoshan couldn’t break free from Kuaishou’s shadow, we went further and mimicked Musical.ly. That all-out approach was the result of the "begin with the end in mind" mindset—since video was destined to be the next major medium, we would stop at nothing to find the right way to unlock its potential.
02 The AI 'Ticket-to-Board' Theory: Why Does Islands Need an AI Engine?
ZP: Could you briefly introduce what Tensor.Art does? How is it progressing so far?
Zhenyu Shen: Tensor.Art is a hosting platform and sharing community for AIGC (AI-Generated Content) models. In today’s AI image generation field, open-source models like Stable Diffusion and Flux are widely used. Many model trainers continue to fine-tune these foundational models to produce even more impressive results—these are known as checkpoints or LoRA models. They upload and host these fine-tuned models on Tensor.Art, allowing other content creators to use them for image and video creation with zero barriers to entry.
Tensor.Art serves developers and designers globally. We currently have over 2 million users, more than 500,000 models, and generate over 2 million images daily. In terms of model volume, number of trainers, and image generation activity, Tensor.Art has surpassed other players in the space, becoming the world’s leading open-source platform for image and video AI models.
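For readers unfamiliar with the mechanics, the sketch below shows how a fine-tuned LoRA is typically layered onto an open-source base model with the Hugging Face diffusers library. The repository ID, file paths, and prompt are placeholders rather than specific Tensor.Art assets, and a hosting platform performs this step server-side so that creators never have to touch code.

```python
# Minimal sketch: applying a community-trained LoRA on top of an open-source
# base model. Repo ID, LoRA path, and prompt are placeholders.
import torch
from diffusers import AutoPipelineForText2Image

# Load an open-source foundation model (e.g., Stable Diffusion XL).
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Attach a fine-tuned LoRA; a hosting platform does this server-side so
# creators can use the model with zero setup of their own.
pipe.load_lora_weights("path/to/downloaded_lora", weight_name="style_lora.safetensors")

image = pipe(
    "a watercolor illustration of a lighthouse at dawn",
    num_inference_steps=30,
).images[0]
image.save("lighthouse.png")
```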
ZP: What’s the background behind Islands incubating Tensor.Art? Why build an AI product?
Zhenyu Shen: A lot of people, including investors, have asked me why Islands is building an AI product that, on the surface, seems unrelated to our core business.
The answer is simple: over the next decade, AI will be the main driving force behind the entire industry. Its inevitability is even greater than the short video revolution we talked about earlier. AI will transform every aspect of how we build products and solve problems. There will no longer be a distinction between AI and non-AI companies—eventually, every internet company will become an AI company. Although we don’t yet have a clear blueprint for how Islands will be rebuilt with AI, we certainly don’t want to be left out of the AI revolution. I wanted to activate our AI engine as early as possible—and on top of that, my own academic background is in AI algorithms.
The AI industry can be divided into three layers: at the bottom are computing power and foundational models, in the middle are platforms, and at the top are applications. For us, building foundational large models isn’t realistic, and the application layer is highly uncertain. Even now, we still don’t have a clear picture of how humans should interact with AI in the future, or how AI will reshape our lives. So, we chose the platform layer instead. Platform building is one of my strengths—I’ve built four platform products before. When it comes to the inference optimization and engineering capabilities required to run an AI model platform, we’re one of the leading teams in the industry.
Tensor.Art is essentially our company, Echo Tech’s, ticket into the AI world. Through this entry point, we’ve secured a position in the AI community and infrastructure layer, giving us the ability to participate in larger-scale AI innovations at any time. From an organizational perspective, it also gives me an ambitious enough vision to attract and retain top-tier AI talent. That’s the core motivation behind Islands launching Tensor.Art. In fact, this synergy is already showing results. The AI team hasn’t just helped us carve out a place in the open-source model space; it’s also brought tangible growth dividends to Islands.
For example, last year Islands launched a photo-based recognition feature: enthusiasts can simply take a picture of an item they’re interested in, and our AI model automatically identifies what card, blind box, or anime collectible it is, how much it’s worth, what brand and release it’s from, and who’s selling it. This AI-powered recognition feature has brought over 10 million new users to Islands and multiplied our daily active user base several times over. We owe all of this to our Tensor.Art team.
This kind of technological synergy allows the two products to empower each other and create a positive feedback loop. Going forward, we’ll continue to deepen this integration and make AI the core engine driving the growth of our business.
03 The Battle Between Open-Source and Closed-Source: Why Every Company Will Eventually Become an AI-Driven Enterprise
ZP: As of 2025, how do you view the landscape between closed-source and open-source models?
Zhenyu Shen: While OpenAI’s algorithm team has maintained a temporary lead through a closed-source approach, the number of engineers who can actually join OpenAI is extremely limited. That means a vast pool of global talent is being excluded from the AI revolution—a clearly flawed form of social division of labor from a macro perspective. For the rest of the brilliant minds out there, the smartest choice is to participate in the development, iteration, and application ecosystem of open-source models.
Likewise, for a company, choosing to use a closed-source model service is essentially working with a black box. It’s costly, difficult to control, and not conducive to secondary development. In contrast, open-source models don’t have these limitations—they give companies the freedom and flexibility to customize and build on top. We believe an effective model is: companies focus on their vertical domains while building on top of open-source foundation models. These companies bring domain-specific data and use cases, which allows them to fine-tune models in depth and create a positive flywheel: real-world scenarios generate data, data improves model performance, and better models attract more users.
ZP: If models are open-source, how can companies still build technical moats?
Zhenyu Shen: Every time a new technology emerges, people tend to overestimate its difficulty and believe that technical moats can lead to monopolistic advantages. But the past 30 years of development in the IT industry have already proven that technical moats can’t last. Back in 2000, being able to build a website qualified you as a tech company. In 2012, if you could build an app, you were considered cutting-edge. Now, the industry is once again mythologizing the power and difficulty of AI large models. But in my personal experience, the core principles of the Transformer architecture are understandable even for an average college student, and model inference and training are not inherently complex. AI isn’t some kind of high tech—it will eventually become as ubiquitous as water and electricity.
In reality, most algorithm engineers spend the bulk of their time tuning parameters. Scientists who can actually invent breakthroughs like the Transformer are extremely rare. Many large model companies today are reusing LLaMA’s open-source network architecture. And the flow of technical secrets is accelerating—mainly through three channels: researchers publishing papers and sharing results, open-source release of model parameters, and the movement of core engineers between companies. With these three forces at play, today’s “technical secrets” quickly become tomorrow’s industry consensus, and technical moats collapse rapidly. The emergence of DeepSeek, for instance, has already knocked OpenAI off its pedestal.
Tensor.Art is a perfect example of this. Most of the model trainers on our platform don’t even know how to code. As long as they’re interested in AIGC, understand some basic principles, and can prepare their own data samples, they can train valuable models.
Take the German creator lykon, for example—he’s one of the top three AI open-source model contributors in the world, and he doesn’t even use the command line. I was shocked when I first helped him run training scripts. Yet he’s responsible for countless widely-used models. This proves that as long as someone has unique data and genuine passion, they too can contribute to the evolution of AI.
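To be clear about what a hosted, no-code training service automates behind the scenes, here is a rough sketch of the LoRA setup step using the open-source peft and diffusers libraries. The hyperparameters and base model are illustrative assumptions, not Tensor.Art's actual training stack; the point is simply that the base model stays frozen and only small adapter matrices are trained, so what a creator really contributes is the data.

```python
# Rough sketch of the LoRA setup step a no-code training UI automates.
# Hyperparameters and model choice are illustrative assumptions.
import torch
from diffusers import StableDiffusionXLPipeline
from peft import LoraConfig

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

# Freeze the base UNet, then inject LoRA adapters into its attention projections.
pipe.unet.requires_grad_(False)
lora_config = LoraConfig(
    r=8,                 # rank of the low-rank update matrices
    lora_alpha=16,       # scaling factor applied to the LoRA update
    init_lora_weights="gaussian",
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
pipe.unet.add_adapter(lora_config)

# Only the adapter parameters are trainable; the foundation model is untouched.
trainable = sum(p.numel() for p in pipe.unet.parameters() if p.requires_grad)
total = sum(p.numel() for p in pipe.unet.parameters())
print(f"Trainable parameters: {trainable:,} of {total:,}")  # typically well under 1%
```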
ZP: Are you concerned that Tensor.Art might become too reliant on open-source models? Will you eventually release your own models like Stability AI did?
Zhenyu Shen: Our position is very clear: we are committed to building infrastructure for open-source models and supporting the global development of the open-source ecosystem—we’re not looking to create our own models. We don’t want to be athletes; we want to be referees or sponsors.
That said, we’re actively sponsoring open-source models. For example, Illustrious, a foundation model that is popular in the industry, was trained with the computing resources and funding we provided. I believe that more and more widely adopted open-source models will receive our sponsorship in the future. Additionally, our engineers are also actively contributing to the ecosystems surrounding these open-source models.
04 Hosting Capabilities, Global Reach, and Economies of Scale: Tensor.Art’s Core Competitive Strategies
ZP: In your view, what are the fundamental differences between the domestic and international AI model ecosystems (for example, Liblib versus Civitai)? And what is Tensor.Art’s differentiated advantage?
Zhenyu Shen: As a model platform, our core competitiveness revolves around hosting capabilities. So, what defines a great hosting platform?
First and foremost, it's about powerful inference capabilities. This is where we differ significantly from Civitai, which is quite weak in inference. When users choose to host their models on a platform, what they really want is for those models to perform well and provide a great user experience—so strong inference functionality is critical.
Second, there’s the performance and pricing of inference. Compute resources are expensive, and no matter how powerful your features are, if the cost is outrageously high, people won’t use them. That’s why we built our own data centers and manage a large number of GPUs in-house—so we can offer services that are powerful, fast, and affordable. When it comes to our competitive advantage over Civitai, our compute costs are roughly one-fifth of theirs. Whether it’s higher compute power for free users or lower prices for paid users, our cost-performance advantage is clear.
Third, it's about helping creators monetize their work. After all, no one can run purely on passion forever—creators naturally want to earn income from their model development to sustain future innovation. Tensor.Art has been actively exploring monetization models for creators, and we’ve tried several key approaches:
For exclusive models, we offer a revenue-sharing model based on usage, similar to the creator incentive mechanisms on platforms like TikTok, Bilibili, or Xiaohongshu. We encourage developers to create paid models and workflows, and we let them keep the majority of the revenue. Creators can list paid or subscription-only models directly on our platform and generate income. We’re also working hard to lower the barrier to entry for using these models. They can be easily packaged into H5 web pages or AI tools, making them easier to share and spread. And for users trying out these tools and effects for the first time, we even allow instant access without registration, further reducing the friction for creators to monetize their work.
ZP: In the long run, what is the moat for a model community?
Zhenyu Shen: As I mentioned earlier, technical barriers alone don’t constitute a sustainable moat. I believe the moat of a model community lies mainly in two dimensions: the scale of models and the scale of creators.
The first is model scale. Nowadays, content generation no longer relies on a single model—it often involves combinations of multiple models, with stacked LoRA layers or workflows that chain several models together. If even one required model isn’t available on your platform, the entire process breaks down. So having a large model library is critical, and there’s a strong network effect at play. At present, we’ve already surpassed Civitai in terms of model scale, with a larger base model ecosystem, more core creators, and a greater total number of models.
The second is the scale of content creators, which directly impacts monetization efficiency. The more creators there are, the higher the membership revenue, and the stronger the reverse incentive for model trainers. That’s because model trainers all want to be on the platform with the most traffic and the best distribution potential, which ultimately leads to more commercial success. So the number of content creators is absolutely vital to platform growth.
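To make the earlier point about model combinations concrete, here is a minimal sketch of stacking two LoRAs on a single base model with diffusers; the adapter names, file paths, and blend weights are hypothetical. If either adapter were missing from a platform's library, this particular recipe could not be reproduced there—which is exactly where the network effect of model scale comes from.

```python
# Sketch: stacking two LoRAs on one base model. Paths, adapter names, and
# blend weights are placeholders.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load two community LoRAs under distinct adapter names.
pipe.load_lora_weights("loras/anime_style", weight_name="anime_style.safetensors", adapter_name="style")
pipe.load_lora_weights("loras/mascot_character", weight_name="mascot.safetensors", adapter_name="character")

# Blend the adapters with per-adapter weights, then generate.
pipe.set_adapters(["style", "character"], adapter_weights=[0.8, 0.6])
image = pipe("the mascot character in a rainy neon city", num_inference_steps=30).images[0]
image.save("stacked_lora.png")
```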
ZP: Why is Tensor.Art targeting the global market?
Zhenyu Shen: For a programmer, any open-source project they develop will almost certainly be published on GitHub, rather than a domestic tech community—because developers seek recognition from the global developer community and want their work to benefit as many people as possible worldwide. The same logic applies to AI model developers: everyone wants their models to be used by developers around the world, not just limited to China. That’s why, from day one, we’ve been very clear that our model platform must be built for the global AI developer community.
ZP: Will Tensor.Art consider expanding into formats like video or 3D?
Zhenyu Shen: The AI video space is undergoing major changes. In the past, video models like Runway, Pika, and OpenAI’s Sora were all closed-source. But this year, things are different—many companies, including Tencent’s Hunyuan, Alibaba’s Tongyi Wanxiang, and Zhipu’s CogVideo, have chosen to open-source their pretrained video models. You could say we’re now in a flourishing era of open-source video models.
That said, current video models still have limitations: the lack of a physics engine often leads to hallucinations, and generating long videos remains difficult. Most people lean toward image-to-video generation for now, as text-to-video has too large a semantic gap and often yields unsatisfactory results. In the short term, the focus is still on generating short clips—typically 6 to 10 seconds—mainly for non-serious or casual use cases.
But there’s actually huge commercial potential here. Over the past decade, image content has grown rapidly, largely because we’ve had a wide array of image editing tools. Today, the frequency of video use is still limited by the high cost of video production. If AI can significantly reduce the cost of creating video, the entire video industry could scale up to a new level of size and value.
That said, the biggest obstacle for AI video generation right now is the compute cost, since generating even a few hundred frames requires dozens or hundreds of times more computing power than generating images. That’s why we’re investing heavily in inference optimization and infrastructure.
Tensor.Art is currently the most capable video-generation model platform—we support the largest number of base models, offer the lowest inference costs, and even allow online training of video model LoRAs. While it’s still unclear which video model will ultimately win out, we’re committed to providing the best fine-tuning and online inference environment for all open-source video models.
ZP: What kind of business model will Tensor.Art adopt? Will you explore transaction-based commissions or enterprise-level services?
Zhenyu Shen: At this stage, Tensor.Art’s main users are designers. Similar to creative software like CapCut or Adobe, our revenue primarily comes from value-added services, meaning monthly subscription fees. Most of our users come from countries like the U.S., Japan, Brazil, Indonesia, and India. We’ve found that overseas users are significantly more willing to pay compared to users in China. In addition to basic membership subscriptions, they also purchase extra compute packages. In fact, half of our current revenue comes from those add-on compute purchases.
As a MaaS (Model as a Service) platform, we also offer API access to models and workflows for B2B clients. However, this is still in the co-development phase, and we’re not currently focused on monetizing the B2B business. Thanks to our in-house data center and inference optimization, our revenue already covers the cost of compute.
05 From Workflows to Agents: How AI Is Reshaping the Way We Create
ZP: With the recent rise of DeepSeek and Manus, where do you think AI product formats are headed?
Zhenyu Shen: Personally, I’m skeptical that chatbots represent the final form of AI applications—and even today’s agents may not be it either. While the concept of agents sounds cool, users are still in the “novelty” phase; we haven’t yet seen many people significantly boost their productivity using agents. In reality, we don’t always need AI to autonomously complete an entire thought process and deliver a full result from start to finish. My guess is that in the future, AI will become much more pervasive: always aware of my context during the creative process, providing timely feedback after each of my actions, and, through continuous human-AI interaction, helping me complete something that truly reflects my own creativity.
ZP: Manus has been getting a lot of attention lately. In your opinion, where is its impact overestimated, and where is it underestimated?
Zhenyu Shen: Many people assume that the big model companies have already solved all the core problems, leaving little room or value for other companies. But I think that’s a flawed perspective—it’s not a reasonable division of labor.
Why? Because a single large model is inherently limited. It’s just a pretrained model, and when it comes to solving vertical or domain-specific problems, its effectiveness may only be around 50%. What we really need are deeply fine-tuned models to address these niche scenarios. That requires companies in various verticals to train models using their own first-party business data—whether through LoRA, checkpoints, or RAG. In the end, we still have to combine general-purpose and proprietary models through workflows or agents to truly solve complex problems.
This is where Manus really inspired us. They didn’t build a foundation model or do fine-tuning—instead, they innovated at the workflow level. In the past, we simply called large models directly, which often led to shallow results. But now, by generating workflows with AI, we’re seeing much deeper and more effective problem-solving.
This is very much aligned with our vision at Tensor.Art, where we’re also focused on AI workflows. We encourage creators to build workflows using existing models to tackle more complex tasks. For instance, those advanced video effects you see online? They’re not achieved with a single prompt—they’re the product of AI workflows.
These vertical problem-solving modules can be created by developers through open-source innovation, and then assembled using an orchestrator into complete workflows. That’s why we’re investing in building an open-source community and why we place so much emphasis on AI workflows.
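To illustrate the orchestration idea, here is a deliberately simplified sketch, not Tensor.Art's actual workflow engine: each vertical module is a plain function, and an orchestrator chains them so that one step's output feeds the next. Real workflow systems add branching, retries, and model routing on top of this basic pattern.

```python
# A deliberately simplified orchestrator sketch: vertical modules are plain
# callables, and a workflow is just an ordered chain of them.
from typing import Any, Callable, Dict, List

Step = Callable[[Dict[str, Any]], Dict[str, Any]]

def run_workflow(steps: List[Step], context: Dict[str, Any]) -> Dict[str, Any]:
    """Pass a shared context through each step in order."""
    for step in steps:
        context = step(context)
    return context

# Hypothetical modules; in practice each would call a general-purpose or
# fine-tuned model behind the scenes.
def expand_prompt(ctx):
    ctx["prompt"] = f"{ctx['brief']}, highly detailed, studio lighting"
    return ctx

def generate_image(ctx):
    ctx["image"] = f"<image generated from: {ctx['prompt']}>"  # stand-in for a model call
    return ctx

def upscale(ctx):
    ctx["image"] = ctx["image"].replace("image", "4k image")
    return ctx

result = run_workflow(
    [expand_prompt, generate_image, upscale],
    {"brief": "a product shot of a vinyl art toy"},
)
print(result["image"])
```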
06 Rapid Fire Q&A
ZP: What’s been the most “counterintuitive” decision you’ve made since starting your entrepreneurial journey?
Zhenyu Shen: I’ve made quite a few decisions that went against mainstream thinking. For example, betting on the collectibles market—many people saw it as a niche category with no real scale. Or in early 2023, when the whole market was obsessed with the idea that a single large model could solve every problem, I firmly believed that closed-source models weren’t a one-size-fits-all solution, and that the future belonged to open-source and fine-tuned models. Also, when everyone was hyping up AI applications and agents, we didn’t jump in. People's judgment is often swayed by market sentiment. In the face of that kind of hype, I tend to stay calm and trust my own instincts.
ZP: Which entrepreneur has influenced you the most?
Zhenyu Shen: Like many others, I deeply admire two entrepreneurs: Yiming Zhang and Jun Lei (founder, chairman, and CEO of Xiaomi). But I consider myself luckier than most: I’ve had the opportunity to work with both of them. Yiming Zhang, in particular, has had a profound impact on me. The way I approach problems, validate products, design growth strategies, and shape organizational culture has all been deeply influenced by him. For many decisions I’m unsure about, I often find myself “following his playbook.”
My time at Xiaomi back in 2010 also taught me a lot. I witnessed firsthand how a team of technically strong engineers, driven by a mission to improve society, created products that impacted an entire generation. What touched me most about Xiaomi was their philosophy of technology for all—"enabling everyone to enjoy the benefits of technology." Even though they pursued cutting-edge innovation, they didn’t chase high profit margins. Instead, they priced their products close to cost, allowing more people to access quality tech, and relied on economies of scale to make the business work.
This philosophy has deeply influenced me. Both of my products, Islands and Tensor.Art, adopt pricing strategies that are close to cost. Investors often ask why we set prices so low, especially when users might be willing to pay more. But I believe affordable pricing leads to greater scale and encourages more people to use the product. That’s why Tensor.Art currently offers the lowest-priced inference services and compute resources on the market. Our $9.90 monthly membership delivers compute power and service quality that could be 10 times higher than competing platforms at the same price point.
ZP: ByteDance and Xiaomi both grew through economies of scale, one in software and the other in hardware, both targeting massive, high-ceiling consumer markets. But Tensor.Art is a relatively niche product. Does that model still apply?
Zhenyu Shen: That’s a great question. I believe AI will change everything over the next decade. While Tensor.Art currently serves mainly developers and designers, it will ultimately serve everyone. We may not yet see the full path to AI’s mass adoption, but the direction of open-source AI models is clear and irreversible.
Right now, we can’t serve general consumers directly because the underlying algorithms still aren’t powerful enough, and most consumer-facing AI demand is still centered on entertainment, with low willingness to pay. To be honest, we’re not even fully meeting the needs of designers yet. There’s still a lot of room for improvement in controllability and output quality in image generation. But what we’re doing is staying ready by closely monitoring opportunities to serve the broader consumer market when the time is right.
ZP: In your opinion, what is the most important “counter-instinctive” trait a CEO should have?
Zhenyu Shen: For me, it’s “making peace with your ego.” I spent a long time battling my ego, only to realize that completely getting rid of it is impossible. And honestly, ego isn’t necessarily a bad thing, but you have to recognize its presence, because it’s often the inner demon that clouds our judgment and the reason why even very smart people make poor decisions.
Disclaimer
Please note that the content of this interview has been edited and approved by Zhenyu Shen. It reflects the personal views of the interviewee. We encourage readers to engage by leaving comments and sharing their thoughts on this interview. For more information about Tensor.Art, please visit the official website at Tensor.Art.
Z Potentials will continue to feature interviews with entrepreneurs in fields such as artificial intelligence, robotics, and globalization. We warmly invite those of you who are hopeful for the future to join our community to share, learn, and grow with us.
Image source: Zhenyu Shen.


