In this little training challenge, you are going to learn about the Robots Exclusion Standard.
The robots.txt file is used by web crawlers to check whether they are allowed to crawl and index your website, or only parts of it.
Sometimes these files reveal the directory structure instead of protecting the content from being crawled.
Enjoy!
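As a quick illustration of that checking step, here is a minimal sketch of how a well-behaved crawler consults robots.txt, using Python's standard-library `robotparser`; the host name and the `/admin/` path are placeholders, not the actual challenge URL.

```python
# Minimal sketch of how a compliant crawler consults robots.txt
# before fetching a page. The host below is a placeholder.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("http://challenge.example/robots.txt")  # hypothetical host
rp.read()  # download and parse the rules

# A compliant crawler asks this question for every URL it wants to fetch:
allowed = rp.can_fetch("*", "http://challenge.example/admin/")
print(allowed)  # False if the file disallows /admin/ for all user agents
```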
Following the hint, append robots.txt to the challenge URL; the file's contents are displayed:
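The same request can be made from Python as sketched below; the base URL is a placeholder, and the sample output is reconstructed from the Disallow hint in the next step, so the real file may differ.

```python
# Fetch the challenge's robots.txt directly. The base URL is a placeholder.
import urllib.request

base = "http://challenge.example"  # hypothetical challenge address
with urllib.request.urlopen(base + "/robots.txt") as resp:
    print(resp.read().decode())

# Plausible output, inferred from the Disallow entry used below:
#   User-agent: *
#   Disallow: /fl0g.php
```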

The Disallow entry reveals fl0g.php; visit that path.

The flag can be found on that page; submit it to get the correct answer.
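These last two steps can also be scripted, as in the sketch below; the host is again a placeholder, and the flag pattern (a `name{...}` wrapper) is an assumption about this challenge's flag format.

```python
# Request the disallowed page and search for a flag-like token.
import re
import urllib.request

with urllib.request.urlopen("http://challenge.example/fl0g.php") as resp:
    page = resp.read().decode()

# Assumed flag format: word characters followed by {...}; adjust as needed.
match = re.search(r"\w+\{[^}]+\}", page)
print(match.group(0) if match else page)
```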