Bash - Looping through Array in Nested [FOR, WHILE, IF] statements
I am trying to process a large set of files, appending specific lines to a file called test_result.txt. I achieved it, not very elegantly, with the following code:
    for i in *merged; do
        while read -r lo; do
            if [[ $lo == *"ID"* ]]; then
                echo "$lo" >> test_result.txt
            fi
            if [[ $lo == *"Instance"* ]]; then
                echo "$lo" >> test_result.txt
            fi
            if [[ $lo == *"NOT"* ]]; then
                echo "$lo" >> test_result.txt
            fi
            if [[ $lo == *"AI"* ]]; then
                echo "$lo" >> test_result.txt
            fi
            if [[ $lo == *"Sitting"* ]]; then
                echo "$lo" >> test_result.txt
            fi
        done < "$i"
    done
However, I am trying to slim it down using an array, which resulted in quite an unsuccessful attempt:
    KEYWORDS=("ID" "Instance" "NOT" "AI" "Sitting")
    KEY_COUNT=0
    for i in *merged; do
        while read -r lo; do
            if [[$lo == ${KEYWORDS[@]} ]]; then
                echo $lo >> ~/Desktop/test_result.txt && KEY_COUNT="`expr $KEY_COUNT + 1`"
            fi
        done < $i
    done
Tags: bash

asked yesterday by AF.BJ (new contributor)
Comments:

How large is the file set? This sounds like an XY problem that could be better accomplished by a straightforward grep command. – steeldriver, yesterday

Small side note: instead of KEY_COUNT="`expr $KEY_COUNT + 1`" you could also write ((KEY_COUNT++)). – Freddy, yesterday
2 Answers
It looks like you want to get all the lines that contain at least one of a set of words, from a set of files. Assuming that you don't have many thousands of files, you could do that with a single grep command:

    grep -wE '(ID|Instance|NOT|AI|Sitting)' ./*merged >outputfile

This would extract the lines matching any of the words listed in the pattern from the files whose names match *merged.
The -w option ensures that grep does not match the given strings as substrings (i.e. NOT will not be matched in NOTICE). The -E option enables alternation with | in the pattern. Add the -h option to the command if you don't want the names of the files containing matching lines in the output.
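As a hedged aside (my addition, not part of the answer): if you also want the running total that the question keeps in KEY_COUNT, grep can feed both the result file and a line counter in one pipeline. The -h flag and the keyword pattern come from the thread; the tee/wc combination and the exact variable use are assumptions about what is wanted.

    # Append matching lines (no file-name prefixes, thanks to -h) to the result file
    # and record how many lines were appended, mirroring the question's KEY_COUNT.
    KEY_COUNT=$(grep -hwE '(ID|Instance|NOT|AI|Sitting)' ./*merged | tee -a test_result.txt | wc -l)
    echo "$KEY_COUNT matching lines appended"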
If you do have many thousands of files, the above command may fail because the expanded command line becomes too long. In that case, you may want to do something like
    for file in ./*merged; do
        grep -wE '(ID|Instance|NOT|AI|Sitting)' "$file"
    done >outputfile
which would run the grep command once on each file, or,
    find . -maxdepth 1 -type f -name '*merged' \
        -exec grep -wE '(ID|Instance|NOT|AI|Sitting)' {} + >outputfile
which would do as few invocations of grep as possible, with as many files as possible at once.
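Another option in the same spirit (my addition, not from the answer): because printf is a shell builtin, the glob can be expanded without hitting the kernel's argument-length limit, and xargs -0 then hands the file names to grep in large batches, much like find ... -exec ... +. The output file name is the same placeholder used above.

    # NUL-separate the file names so arbitrary names survive, then batch them for grep.
    printf '%s\0' ./*merged |
        xargs -0 grep -wE '(ID|Instance|NOT|AI|Sitting)' >outputfile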
Related:
- Why is using a shell loop to process text considered bad practice?
answered yesterday by Kusalananda♦ (edited yesterday)

Comments:

It is indeed a file set of a few thousand. Originally I built other processes into the loop, but running grep separately, before the extra tweaks, is a cleaner solution. I just needed to add the -h option to suppress the default prefixes. Thanks. – AF.BJ, yesterday

@AF.BJ since this answer solved your problem, consider accepting it: What should I do when someone answers my question? – muru, yesterday
Adding an array doesn't particularly help: you still would need to loop over the elements of the array (see How do I test if an item is in a bash array?):
    while read -r lo; do
        for keyword in "${KEYWORDS[@]}"; do
            if [[ $lo == *"$keyword"* ]]; then
                echo "$lo" >> ~/Desktop/test_result.txt && KEY_COUNT="`expr $KEY_COUNT + 1`"
            fi
        done
    done < "$i"
It might be better to use a case statement:
    # Requires extended globbing: shopt -s extglob
    # (a bare *( ... ) group would mean "zero or more occurrences" and so match every line)
    while read -r lo; do
        case $lo in
            *@(ID|Instance|NOT|AI|Sitting)*)
                echo "$lo" >> ~/Desktop/test_result.txt && KEY_COUNT="`expr $KEY_COUNT + 1`"
                ;;
        esac
    done < "$i"
(I assume you do further processing of these lines within the loop. If not, grep or awk could do this more efficiently.)
answered yesterday by muru
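To make the answer's closing remark concrete, here is a sketch (my addition, not the answerer's) of how awk could filter and count in a single pass over all the files. The keyword list and the test_result.txt name come from the question; everything else, including the substring-style matching, is an assumption:

    # Append every line containing one of the keywords to the result file
    # and report how many lines matched across all *merged files.
    awk '/ID|Instance|NOT|AI|Sitting/ { print >> "test_result.txt"; n++ }
         END { print n+0, "matching lines" }' ./*merged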