replace files with move when using pig on HDFS
I have a process in a Hadoop workflow that moves files into a processing folder using a Pig script running as a MapReduce2 task. Recently I have seen copies fail, leaving a partial move of the files. When the job next re-runs it errors out, because the Pig script tries to move the same file again, but a partial copy of the file already exists at the target location, so the move fails. There is no move-with-replace option in Pig.

I could do a copy followed by a delete instead, but the risk there is that while the copy is in progress another file could be uploaded to HDFS that wasn't included in the original copy operation, and when I then run the delete I also delete a file that has yet to be moved to the processing directory.

I know there is no force/replace on move, but is there a way to create a list of all the files that were actually copied, so that the delete only touches those?
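One pattern that avoids the race described above is to snapshot the source listing first, then copy and delete only the files in that snapshot; anything uploaded after the snapshot is left alone for the next run. This is a minimal sketch in Python using local directories as a stand-in for HDFS (the function names `snapshot_listing` and `copy_then_delete` are my own; on a real cluster the equivalent steps would be an `hdfs dfs -ls` capture followed by per-file `-cp -f` and `-rm`, or the same calls through an HDFS client library):

```python
import os
import shutil

def snapshot_listing(src_dir):
    """Fix the set of files to move BEFORE any copying starts."""
    return sorted(
        name for name in os.listdir(src_dir)
        if os.path.isfile(os.path.join(src_dir, name))
    )

def copy_then_delete(snapshot, src_dir, dst_dir):
    """Copy every snapshotted file (overwriting any partial file left at
    the target by a failed earlier run), then delete only the snapshotted
    files from the source. Files that arrived after the snapshot survive."""
    for name in snapshot:
        shutil.copy2(os.path.join(src_dir, name),
                     os.path.join(dst_dir, name))
    for name in snapshot:
        os.remove(os.path.join(src_dir, name))
```

Because the delete loop iterates over the snapshot rather than re-listing the directory, a file uploaded mid-copy stays in the source folder and is picked up by the next workflow run instead of being silently deleted.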