Original link: https://leetcode-cn.com/problems/find-duplicate-file-in-system
English description
Given a list paths of directory info, including the directory path, and all the files with contents in this directory, return all the duplicate files in the file system in terms of their paths. You may return the answer in any order.
A group of duplicate files consists of at least two files that have the same content.
A single directory info string in the input list has the following format:
"root/d1/d2/.../dm f1.txt(f1_content) f2.txt(f2_content) ... fn.txt(fn_content)"
It means there are n files (f1.txt, f2.txt ... fn.txt) with content (f1_content, f2_content ... fn_content) respectively in the directory "root/d1/d2/.../dm". Note that n >= 1 and m >= 0. If m = 0, it means the directory is just the root directory.
The output is a list of groups of duplicate file paths. For each group, it contains all the file paths of the files that have the same content. A file path is a string that has the following format:
"directory_path/file_name.txt"
Example 1:
Input: paths = ["root/a 1.txt(abcd) 2.txt(efgh)","root/c 3.txt(abcd)","root/c/d 4.txt(efgh)","root 4.txt(efgh)"]
Output: [["root/a/2.txt","root/c/d/4.txt","root/4.txt"],["root/a/1.txt","root/c/3.txt"]]
Example 2:
Input: paths = ["root/a 1.txt(abcd) 2.txt(efgh)","root/c 3.txt(abcd)","root/c/d 4.txt(efgh)"]
Output: [["root/a/2.txt","root/c/d/4.txt"],["root/a/1.txt","root/c/3.txt"]]
Constraints:
- 1 <= paths.length <= 2 * 10^4
- 1 <= paths[i].length <= 3000
- 1 <= sum(paths[i].length) <= 5 * 10^5
- paths[i] consist of English letters, digits, '/', '.', '(', ')', and ' '.
- You may assume no files or directories share the same name in the same directory.
- You may assume each given directory info represents a unique directory. A single blank space separates the directory path and file info.
Follow up:
- Imagine you are given a real file system, how will you search files? DFS or BFS?
- If the file content is very large (GB level), how will you modify your solution?
- If you can only read the file by 1kb each time, how will you modify your solution?
- What is the time complexity of your modified solution? What is the most time-consuming part and memory-consuming part of it? How to optimize?
- How to make sure the duplicated files you find are not false positive?
Chinese description (translated)
You are given a list of directory info, including the directory path and all the files with their contents in that directory. You need to find the paths of all duplicate-file groups in the file system. A group of duplicate files consists of at least two files that have exactly the same content.
A single directory info string in the input list has the following format:
"root/d1/d2/.../dm f1.txt(f1_content) f2.txt(f2_content) ... fn.txt(fn_content)"
It means there are n files (f1.txt, f2.txt ... fn.txt) whose contents are f1_content, f2_content ... fn_content respectively, in the directory root/d1/d2/.../dm. Note that n >= 1 and m >= 0. If m = 0, the directory is just the root directory.
The output is a list of groups of duplicate file paths. Each group contains all the file paths of the files that have the same content. A file path is a string in the following format:
"directory_path/file_name.txt"
Example 1:
Input: ["root/a 1.txt(abcd) 2.txt(efgh)", "root/c 3.txt(abcd)", "root/c/d 4.txt(efgh)", "root 4.txt(efgh)"]
Output: [["root/a/2.txt","root/c/d/4.txt","root/4.txt"],["root/a/1.txt","root/c/3.txt"]]
Notes:
- The final output may be in any order.
- You may assume that directory names, file names, and file contents contain only letters and digits, and that the length of each file content is in the range [1, 50].
- The number of files given is in the range [1, 20000].
- You may assume no files or directories share the same name within the same directory.
- You may assume each given directory info represents a unique directory. The directory path and file info are separated by a single space.
Follow up beyond the contest:
- Imagine you are given a real file system. How would you search for files: depth-first search or breadth-first search?
- If the file content is very large (GB level), how would you modify your solution?
- If you can only read the file 1 KB at a time, how would you modify your solution?
- What is the time complexity of your modified solution? What are its most time-consuming and memory-consuming parts, and how could they be optimized?
- How can you make sure the duplicate files you find are not false positives? (See the sketch after this list.)
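These follow-up questions are open-ended; one common line of answer is to group candidate files by size first, then by a digest computed in small chunks, and finally confirm matches byte-by-byte. Below is a minimal sketch of the chunked-digest step only; ChunkedHashSketch and digestOf are hypothetical names, and SHA-256 is just one reasonable digest choice, none of which come from the problem or an official solution.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ChunkedHashSketch {
    // Hash a (possibly GB-sized) file while reading at most 1 KB per call,
    // so the whole content never has to fit in memory.
    static String digestOf(String filePath) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        try (FileInputStream in = new FileInputStream(filePath)) {
            byte[] buf = new byte[1024]; // 1 KB read window
            int n;
            while ((n = in.read(buf)) != -1) {
                md.update(buf, 0, n);
            }
        }
        // Render the digest as lowercase hex.
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
```

Files whose digests match should still be compared byte-by-byte before being reported as duplicates; that final comparison is what rules out false positives from hash collisions.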
Official solution
Approach 1: Hash Map
First, we use string operations to extract the directory path, file names, and file contents. We then use a hash map (HashMap) to find duplicate files: the key is a file's content, and the value is a list of the full paths (directory path plus file name) that have that content.
We iterate over every file and add it to the hash map. Afterwards, we iterate over the hash map; if a key's value list has length greater than 1, we have found duplicate files and add that list to the answer.
```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Solution {
    public List<List<String>> findDuplicate(String[] paths) {
        // Map from file content to the list of full paths sharing that content.
        Map<String, List<String>> map = new HashMap<>();
        for (String path : paths) {
            String[] values = path.split(" ");
            // values[0] is the directory; the rest are "name.txt(content)" entries.
            for (int i = 1; i < values.length; i++) {
                String[] nameAndContent = values[i].split("\\(");
                String name = nameAndContent[0];
                String content = nameAndContent[1].replace(")", "");
                map.computeIfAbsent(content, k -> new ArrayList<>())
                   .add(values[0] + "/" + name);
            }
        }
        // Keep only the contents shared by at least two files.
        List<List<String>> res = new ArrayList<>();
        for (List<String> group : map.values()) {
            if (group.size() > 1) {
                res.add(group);
            }
        }
        return res;
    }
}
```
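As a quick usage check (the driver class Main is hypothetical, not part of the official solution), running the Solution class above on Example 1's input reproduces the expected groups:

```java
import java.util.List;

public class Main {
    public static void main(String[] args) {
        String[] paths = {
            "root/a 1.txt(abcd) 2.txt(efgh)",
            "root/c 3.txt(abcd)",
            "root/c/d 4.txt(efgh)",
            "root 4.txt(efgh)"
        };
        List<List<String>> groups = new Solution().findDuplicate(paths);
        // Expected, in any order:
        // [root/a/2.txt, root/c/d/4.txt, root/4.txt]
        // [root/a/1.txt, root/c/3.txt]
        System.out.println(groups);
    }
}
```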
Complexity analysis
Time complexity: $O(N)$, where $N$ is the total number of files; we treat the length of each file name as constant.
Space complexity: $O(N)$.
Statistics
| Accepted | Submissions | AC rate |
| --- | --- | --- |
| 6853 | 14127 | 48.5% |