I think the following explanation can help you better understand the HDFS architecture:
You can think of the Namenode as the FAT (file allocation table) plus directory data, and the Datanodes as dumb block devices. When you read a file from a regular file system, you go to the directory, then to the FAT, get the locations of all the relevant blocks, and read them. The same happens with HDFS: when you want to read a file, you first go to the Namenode and get the list of blocks that make up the file, along with the Datanodes on which each block is stored. Then you go to those Datanodes and fetch the blocks directly from them.
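The read flow above can be sketched with a toy in-memory model. This is purely illustrative: the `NameNode`, `DataNode`, and `read_file` names here are stand-ins for the idea, not the real HDFS API.

```python
# Minimal sketch of the HDFS read path (toy model, not the real client API).

class DataNode:
    """Dumb block store: maps block IDs to raw bytes."""
    def __init__(self):
        self.blocks = {}

class NameNode:
    """Holds only metadata: which blocks make up a file, and where they live."""
    def __init__(self):
        self.file_to_blocks = {}  # path -> ordered list of (block_id, datanode)

    def get_block_locations(self, path):
        return self.file_to_blocks[path]

def read_file(namenode, path):
    # Step 1: ask the Namenode for the block list and their locations.
    # Step 2: fetch each block directly from the Datanode that holds it.
    return b"".join(dn.blocks[bid] for bid, dn in namenode.get_block_locations(path))

# Tiny demo: a "file" split across two datanodes.
dn1, dn2 = DataNode(), DataNode()
dn1.blocks["blk_1"] = b"hello "
dn2.blocks["blk_2"] = b"world"
nn = NameNode()
nn.file_to_blocks["/user/demo.txt"] = [("blk_1", dn1), ("blk_2", dn2)]
print(read_file(nn, "/user/demo.txt"))  # b'hello world'
```

Note that the Namenode never touches the file contents; it only hands out the map, exactly like the FAT in the analogy.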
Is information such as last modified time, creation time, file size, owner, and permissions stored on the Namenode?
Yes. This metadata is persisted in the fsimage file on the Namenode, which is in a binary format. Use the Offline Image Viewer (e.g. `hdfs oiv -p XML -i <fsimage file> -o fsimage.xml`) to dump the fsimage in a human-readable format. The output of this tool can then be analyzed with Pig or another tool to extract more meaningful data.
Does datanode store any metadata information?
No, nothing apart from the blocks themselves and a small checksum (.meta) file stored alongside each block.
There is only one Namenode; can the metadata exceed the server's limits?
Yes, if you have many small files. The Namenode keeps the entire namespace in RAM, and each file and each block costs a fixed amount of heap regardless of its size, so a huge number of small files can exhaust the Namenode's memory.
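A rough back-of-the-envelope calculation shows why. A commonly cited rule of thumb is that each file object and each block object costs on the order of 150 bytes of Namenode heap; that figure is an approximation, and this sketch only illustrates the scaling:

```python
# Why many small files hurt the Namenode: each file and each block is a
# separate in-memory object. The 150-byte cost below is an approximate,
# commonly cited rule of thumb, not an exact figure.

BYTES_PER_OBJECT = 150

def namenode_heap_bytes(num_files, blocks_per_file=1):
    # Each file costs one file object plus one object per block.
    return num_files * BYTES_PER_OBJECT * (1 + blocks_per_file)

# 100 million one-block files: ~30 GB of heap just for metadata.
print(namenode_heap_bytes(100_000_000) / 1e9)   # 30.0

# The same data packed into 1 million large files of 100 blocks each
# needs only about half that, even though it holds far more bytes.
print(namenode_heap_bytes(1_000_000, blocks_per_file=100) / 1e9)  # 15.15
```

This is why the usual advice is to merge small files into larger ones (e.g. with SequenceFiles or HAR archives) before loading them into HDFS.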
If a user wants to download a file from Hadoop, does he have to download it from the Namenode? I found the architecture picture below on the web; it shows a client writing data directly to a Datanode. Is that true?
No. The information about the file lives on the Namenode; the file contents themselves live on the Datanodes (a Datanode could, in theory, run on the same machine as the Namenode, and often does on smaller clusters).
<code>Read process:
a) The client first asks the Namenode which Datanodes hold the blocks of the file.
b) It then contacts those Datanodes directly to read the data.

Write process:
a) The client asks the Namenode for Datanodes to write to; if space is available, the Namenode returns a list of target Datanodes.
b) The client then writes the blocks directly to those Datanodes.</code>
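The write process can be sketched the same way. Again a toy model, not the real API, and it simplifies one detail: in real HDFS the client sends each block to the first Datanode, which forwards it to the next replica in a pipeline, whereas here the data is simply copied to every target.

```python
# Sketch of the HDFS write path (toy model; the real datanode-to-datanode
# replication pipeline is simplified to a plain copy).

class DataNode:
    def __init__(self, name):
        self.name = name
        self.blocks = {}

class NameNode:
    def __init__(self, datanodes):
        self.datanodes = datanodes
        self.file_to_blocks = {}  # path -> list of (block_id, replica datanodes)
        self._next_id = 0

    def allocate_block(self, path, replication=3):
        # The Namenode picks target datanodes and records the new block
        # in its metadata before any data is written.
        self._next_id += 1
        bid = f"blk_{self._next_id}"
        targets = self.datanodes[:replication]
        self.file_to_blocks.setdefault(path, []).append((bid, targets))
        return bid, targets

def write_block(namenode, path, data):
    # Step a) ask the Namenode where to write.
    bid, targets = namenode.allocate_block(path)
    # Step b) ship the bytes directly to the datanodes (pipelined in real HDFS).
    for dn in targets:
        dn.blocks[bid] = data

nn = NameNode([DataNode(f"dn{i}") for i in range(4)])
write_block(nn, "/user/out.txt", b"some data")
```

The key point in both directions is the same: the Namenode is only consulted for metadata; the actual bytes always flow between the client and the Datanodes.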