Reusing array values in another array definition
I am trying to define arrays in bash whose names come from the values of another array, and to populate each new array with values taken from yet another array that is built dynamically.
Here is an example of the code so far:
adduser() {
    declare -i segment=$1
    segment_length=${#segment[@]}
    for (( a = 0; a < "${segment_length}"; a++ )); do
        data=($(cat $filename | grep -w ${segment[a]} | awk -F ";" '{print $1}' | tr '\n' ' ' | sed 's/^/ /g'))
        ${segment[a]}=($(echo "${data[*]}"))
    done
}

cat $filename | tail -n+2 | awk -F ";" '{print $2}' | awk '{ for (i=1; i<=NF; i++) print $i }' | sed 's/\r//g' | sort -u > segments.txt
IFS=$'\r\n' GLOBIGNORE='*' command eval 'segments=($(cat segments.txt))'
for (( i = 0; i < ${#segments[@]}; i++ )); do
    adduser ${segments[i]}
done
The objective is to dynamically populate arrays, one per value in one CSV column, with values from another column, and then operate on them in bulk.
The CSV has the following format:
Header1;Header2
Value1;1 2 3
Value2;2 4 5
Take, for example, the value 2 from column Header2.
The objective is to dynamically create an array named 2 containing Value1 and Value2:
2=( Value1 Value2 )
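Note that bash will not accept 2 as a variable name, and ${segment[a]}=( ... ) in the function above is not parsed as an assignment at all, so the names would need some prefix. A minimal sketch of one way to get the intended effect with a nameref (bash 4.3+); the seg_ prefix and the literal values here are only illustrative:

name="seg_2"            # hypothetical name; "2" alone is not a valid identifier
declare -n ref="$name"  # nameref: ref is now an alias for the variable seg_2
ref=(Value1 Value2)     # assigning through the nameref creates the array seg_2
declare -p seg_2        # prints: declare -a seg_2=([0]="Value1" [1]="Value2")

An array indexed by those numbers (as in one of the answers below) would avoid dynamic names entirely.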
Testing the two answers provided:
I will continue here as comments are too short. Here is the result of running the awk answer on a random file (same format as the example):
ivo@spain-nuc-03:~/Downloads/TestStackoverflow$ awk -F'[; ]' '/;[0-9] / { for (i=2; i<=NF; i++) printf "%s ", $1 > $i".txt" }' real.csv
ivo@spain-nuc-03:~/Downloads/TestStackoverflow$ ll
total 68
drwxr-xr-x 2 ivo ivo 4096 Nov 27 17:16 ./
drwxr-xr-x 28 ivo ivo 32768 Nov 27 17:15 ../
-rw-rw-r-- 1 ivo ivo 99 Nov 27 17:16 155.txt
-rw-rw-r-- 1 ivo ivo 132 Nov 27 17:16 155?.txt
-rw-rw-r-- 1 ivo ivo 99 Nov 27 17:16 2.txt
-rw-rw-r-- 1 ivo ivo 66 Nov 27 17:16 2?.txt
-rw-rw-r-- 1 ivo ivo 198 Nov 27 17:16 3.txt
-rw-rw-r-- 1 ivo ivo 33 Nov 27 17:16 3?.txt
-rw-r--r-- 1 ivo ivo 1369 Nov 27 17:14 real.csv
While with the other answer you get the following:
ivo@spain-nuc-03:~/Downloads/TestStackoverflow$ ./processing.sh real.csv
ivo@spain-nuc-03:~/Downloads/TestStackoverflow$ ll
total 112
drwxr-xr-x 2 ivo ivo 4096 Nov 27 17:25 ./
drwxr-xr-x 28 ivo ivo 32768 Nov 27 17:15 ../
-rw-rw-r-- 1 ivo ivo 100 Nov 27 17:25 102.txt
-rw-rw-r-- 1 ivo ivo 100 Nov 27 17:25 105.txt
-rw-rw-r-- 1 ivo ivo 67 Nov 27 17:25 106.txt
-rw-rw-r-- 1 ivo ivo 34 Nov 27 17:25 112.txt
-rw-rw-r-- 1 ivo ivo 991 Nov 27 17:25 155.txt
-rw-rw-r-- 1 ivo ivo 694 Nov 27 17:25 2.txt
-rw-rw-r-- 1 ivo ivo 859 Nov 27 17:25 3.txt
-rw-rw-r-- 1 ivo ivo 67 Nov 27 17:25 51.txt
-rw-rw-r-- 1 ivo ivo 67 Nov 27 17:25 58.txt
-rw-rw-r-- 1 ivo ivo 34 Nov 27 17:25 59.txt
-rw-rw-r-- 1 ivo ivo 34 Nov 27 17:25 65.txt
-rw-rw-r-- 1 ivo ivo 34 Nov 27 17:25 67.txt
-rw-rw-r-- 1 ivo ivo 34 Nov 27 17:25 72.txt
-rw-rw-r-- 1 ivo ivo 34 Nov 27 17:25 78.txt
-rw-rw-r-- 1 ivo ivo 34 Nov 27 17:25 81.txt
-rw-rw-r-- 1 ivo ivo 34 Nov 27 17:25 82.txt
-rwxrwxr-x 1 ivo ivo 1180 Nov 27 17:25 processing.sh*
-rw-r--r-- 1 ivo ivo 1369 Nov 27 17:14 real.csv
ivo@spain-nuc-03:~/Downloads/TestStackoverflow$
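The 155?.txt and 2?.txt names in the first listing suggest the real CSV has CRLF line endings, so the last value on each line carries a trailing carriage return into the output file name. A quick way to check and normalise the file before running either approach (a sketch assuming GNU cat/sed; dos2unix would also work if installed):

cat -A real.csv | head -n 3   # CRLF shows up as ^M$ at the end of each line
sed -i 's/\r$//' real.csv     # strip the carriage returns in place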
Thanks to everyone for their participation!
arrays bash dynamic-arrays
Time to switch to a real programming language.
– choroba
Nov 26 '18 at 18:43
Very funny! Please try to be helpful...
– Ivo Yordanov
Nov 26 '18 at 19:30
You are more likely to get help if you boil down your problem. And I don't get what Header2 from the second column has to do with the values from the first column. Also, 2 is not a valid name for a variable.
– Socowi
Nov 26 '18 at 19:44
I'm not trying to be funny. Every time my shell script is longer than 10 lines or needs real variables and data structures, I rewrite it in Perl.
– choroba
Nov 26 '18 at 19:46
@Socowi it does not on the example file... It merely is a header, nothing more... 2 is an example...
– Ivo Yordanov
Nov 26 '18 at 19:54
3 Answers
All-bash, using an array:
declare -a set=()                      # set is an array
while IFS=';' read -r key lst          # read each line into these 2, splitting on semicolons
do  [[ $key =~ Header* ]] && continue  # ignore the header
    read -a val <<< "$lst"             # split the list of values to the right of the semicolon into an array
    for e in "${val[@]}"               # for each of those
    do  case "${set[e]:-}" in
        *$key*) : already here, no-op ;;    # ignore if already present
        '')     set[e]="$key" ;;            # set initial if empty
        *)      set[e]="${set[e]} $key" ;;  # add delimited if new
        esac
    done
done < csv                             # reads directly from the CSV file
At this point the sets should be loaded as space-delimited strings into each element of set, indexed by the values in the second column of the csv. To print them out for verification:
for n in "${!set[@]}"
do echo "$n: ${set[n]}"
done
Executed on the test csv content provided:
1: Value1
2: Value1 Value2
3: Value1
4: Value2
5: Value2
so set[4] is Value2, and set[2] is Value1 Value2. You can pull them from there to do whatever is needed.
No need for cat/tail/awk/grep/tr/sed/sort chains.
Does it need something more?
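For instance, a stored set can be split back into a regular array when it is needed; a minimal sketch, assuming the set array built by the script above (the loop body is only a placeholder):

read -r -a members <<< "${set[2]}"   # members=(Value1 Value2)
for m in "${members[@]}"; do
    printf 'working on %s from set 2\n' "$m"
done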
Good stuff. This was what I was looking for... Now Just inject into Database... Thank you for your time!
– Ivo Yordanov
Nov 27 '18 at 18:49
It seems there is a syntax problem when I execute it: ")syntax error: invalid arithmetic operator (error token is " 1: Value1 2: Value1
– Ivo Yordanov
Nov 27 '18 at 19:00
Can you check if it is not missing something from the one executed? Script is exactly the same. I am executing on Ubuntu 16.04 bash version 4.3.48
– Ivo Yordanov
Nov 27 '18 at 19:04
Add this to your script please: trap ' echo "ERROR $? at $0:$LINENO - [$BASH_COMMAND]" ' err - it looks like you are trying to execute my output. (You likely don't need the printout at the bottom. I only put that in for you to see the alignment.)
– Paul Hodges
Nov 27 '18 at 19:15
You might also want to adjust the variable names. I'll add some commentary to make it easier.
– Paul Hodges
Nov 27 '18 at 19:15
Finally I went for a simpler method, following the comments suggesting that I simplify:
cat $1 | tail -n+2 | awk -F ";" '{print $2}' | awk '{ for (i=1; i<=NF; i++) print $i }' | sed 's/\r//g' | sort -u > segments.txt
IFS=$'\r\n' GLOBIGNORE='*' command eval 'segments=($(cat segments.txt))'
for (( i = 0; i < ${#segments[@]}; i++ )); do
    cat $1 | grep -w ${segments[i]} | awk -F ";" '{print $1}' | tr '\n' ' ' | sed 's/^/ /g' > ${segments[i]}.txt
done
rm segments.txt
And then just process the remaining txt files in the folder.
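A minimal sketch of that follow-up processing step, assuming the <segment>.txt files produced above (the loop body is illustrative only):

for f in *.txt; do
    segment=${f%.txt}           # e.g. 2 for 2.txt
    read -r -a members < "$f"   # each file holds one line of space-delimited values
    printf 'segment %s: %s\n' "$segment" "${members[*]}"
done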
I would still like to see this done the way I was going initially, though, as that would be more suitable for larger data sets...
This apparently generates a single column of unique values from the second field of the file, which in your example is the digits from 1 to 5 - then whole-word greps those digits back from the CSV to put the first-column value into files named <digit>.txt for each digit on the line. Am I reading this right?
– Paul Hodges
Nov 27 '18 at 15:16
This logic will overwrite "Value1" in 2.txt with "Value2" instead of listing both. Is that the result you wanted? Or would you prefer both be in the file? Did you mean for them to be space-delimited on one line? And did you mean for the lines to have newline endings, or explicitly not?
– Paul Hodges
Nov 27 '18 at 15:51
@PaulHodges The values from column1 should be ordered into sets. Each set has the name of the numerical value from column2. Only the values from column1 are important to appear in the final result... You get sets in the end... Imagine each set as the independent file generated...
– Ivo Yordanov
Nov 27 '18 at 16:08
Both sets have 2's in them, and you are writing a file for each number in the set, so there is a 2 in Value1 and Value2, but you are doing a truncating write. I made it an appending write. You were converting newlines to spaces, so I made both values write out with space-delimiting on the same line of the file. Please see my solution. If you prefer newlines, it's easy to make the values one per line. If this isn't what you meant, then you need to explain more clearly, with examples of what the output should be.
– Paul Hodges
Nov 27 '18 at 16:14
@PaulHodges I did look at it... Please look at the answer I posted as comments are too short.
– Ivo Yordanov
Nov 27 '18 at 16:30
If I'm reading it right, and assuming you are ok with having no proper newline on the files since you were explicitly squashing them out, this should do all the above in one awk call.
awk -F'[; ]' '/;[0-9] / { for (i=2; i<=NF; i++) printf "%s ", $1 > $i".txt" }' yourInputFile
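Given that the real CSV appears to contain carriage returns (the question's own script strips \r, and the test run produced 155?.txt-style files), here is a sketch of a variant that strips them and skips the header line instead of pattern-matching; it assumes any POSIX awk:

awk -F'[; ]' 'NR > 1 {
    sub(/\r$/, "")                       # drop a trailing carriage return; this re-splits the fields
    for (i = 2; i <= NF; i++)
        if ($i != "")                    # skip empty fields from stray separators
            printf "%s ", $1 > ($i ".txt")
}' real.csv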
Yeah, I stick with the array formatting from previously... Can also go with the new line, depends on what to do next with the sets. The solution works for the example but not with a random input file respecting the same format as the example.
– Ivo Yordanov
Nov 27 '18 at 16:18
Then you need to provide specs and examples that demonstrate the problem. We can only work with what's given.
– Paul Hodges
Nov 27 '18 at 16:24
My purpose is just to get to the sets construct... Ideally, it should be done with arrays rather than with text files and keep the layer of abstraction, but I guess I need to use something else than bash for that...
– Ivo Yordanov
Nov 27 '18 at 16:34
Abstraction is fine - is very good, in fact - but if you call awk twice in a pipeline, can you specify why you didn't just handle both tasks in one call? It's like the useless use of cat. Don't add more processes and complicate the line for maintainers. Please take some time to define your needs and explain them. We're here to help, but we need data.
– Paul Hodges
Nov 27 '18 at 16:46
Could have used sed or cut in place of one of the awk's... Maybe I should remove everything (rest of the script included) and just write an awk script because it can be done with awk one-liner (really long line) according to you? If someone would maintain it other than me would get lost. Extra clarity is important. Most people don't know how to code in awk, me included. I expect this to be maintained by people who don't know bash well, not to mention awk...
– Ivo Yordanov
Nov 27 '18 at 17:05