A shell is a command-line interpreter: it receives commands from applications and users, then calls into the operating system kernel.
1. Interpreters provided by Linux
[fish@hadoop101 ~]$ cat /etc/shells
/bin/sh
/bin/bash
/sbin/nologin
/usr/bin/sh
/usr/bin/bash
/usr/sbin/nologin
2. The relationship between bash and sh
[fish@hadoop101 ~]$ cd /bin
[fish@hadoop101 bin]$ ll | grep bash
-rwxr-xr-x. 1 root root 964544 4月 11 2018 bash
lrwxrwxrwx. 1 root root 10 8月 19 15:48 bashbug -> bashbug-64
-rwxr-xr-x. 1 root root 6964 4月 11 2018 bashbug-64
lrwxrwxrwx. 1 root root 4 8月 19 15:48 sh -> bash
3. The default interpreter on CentOS is bash
[fish@hadoop101 bin]$ echo $SHELL
/bin/bash
Script file names end with .sh, and a script's first line is #!/bin/bash.
The script below creates a .txt file and writes Hello World! into it.
[fish@hadoop101 ~]$ mkdir data
[fish@hadoop101 ~]$ cd data
[fish@hadoop101 data]$ vim helloworld.sh
#!/bin/bash
cd /home/fish/data
touch helloworld.txt
echo "Hello World!" >> helloworld.txt
[fish@hadoop101 data]$ /bin/bash helloworld.sh
[fish@hadoop101 data]$ cat helloworld.txt
Hello World!
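Besides handing the file to /bin/bash, a script can be run directly once it has execute permission. A minimal sketch (the /tmp/data-demo path is only an example, not from the original notes):

```shell
# Create a tiny demo script (path /tmp/data-demo is a made-up example).
mkdir -p /tmp/data-demo
cat > /tmp/data-demo/hello.sh <<'EOF'
#!/bin/bash
echo "Hello World!"
EOF

bash /tmp/data-demo/hello.sh       # way 1: hand the file to bash
chmod +x /tmp/data-demo/hello.sh   # way 2: add execute permission...
/tmp/data-demo/hello.sh            # ...and run it directly
```

Either way, the kernel ends up launching the interpreter named on the #!/bin/bash line.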
1. View the values of system variables
[fish@hadoop101 data]$ echo $HOME $PWD $SHELL $USER
/home/fish /home/fish/data /bin/bash fish
2. Display all variables in the current shell
[fish@hadoop101 data]$ set | grep bash
BASH=/bin/bash
HISTFILE=/home/fish/.bash_history
SHELL=/bin/bash
1. Define a variable and reassign it
[fish@hadoop101 data]$ A=5
[fish@hadoop101 data]$ echo $A
5
[fish@hadoop101 data]$ A=6
[fish@hadoop101 data]$ echo $A
6
2. Unset a variable
[fish@hadoop101 data]$ unset A
[fish@hadoop101 data]$ echo $A
3. Declare a read-only (static) variable: it cannot be reassigned or unset, and only disappears when the shell session is restarted.
[fish@hadoop101 data]$ readonly B=2
[fish@hadoop101 data]$ echo $B
2
[fish@hadoop101 data]$ B=3
-bash: B: 只读变量
[fish@hadoop101 data]$ unset B
-bash: unset: B: 无法反设定: 只读 variable
4. Promote a variable to a global environment variable
[fish@hadoop101 data]$ vim exportTest.sh
#!/bin/bash
echo $B
[fish@hadoop101 data]$ /bin/bash exportTest.sh
[fish@hadoop101 data]$ echo $B
2
[fish@hadoop101 data]$ export B
[fish@hadoop101 data]$ /bin/bash exportTest.sh
2
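The behavior above can be condensed into one sketch: a child bash process only sees variables that were exported (the names C and D here are arbitrary demo names):

```shell
C=1         # shell-local variable: invisible to child processes
export D=2  # environment variable: inherited by child processes
bash -c 'echo "C=${C:-unset} D=${D:-unset}"'
# prints: C=unset D=2
```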
1. $n
n is a digit: $0 is the name of the script, $1 through $9 are the 1st through 9th arguments, and arguments beyond 9 need braces, e.g. ${10}.
[fish@hadoop101 data]$ vim parameter.sh
#!/bin/bash
echo "$0"
echo "$1 $2 $3 $4 $5 $6"
echo "$7 $8 $9 ${10}"
[fish@hadoop101 data]$ /bin/bash parameter.sh to be or not to be that is a question
parameter.sh
to be or not to be
that is a question
2. $#
The number of input parameters (the script name itself is not counted), which is handy for loops.
[fish@hadoop101 data]$ vim parameter.sh
#!/bin/bash
echo "$0"
echo "$1 $2 $3 $4 $5 $6"
echo "$7 $8 $9 ${10}"
echo "The number of input parameters is: $#"
[fish@hadoop101 data]$ /bin/bash parameter.sh to be or not to be that is a question
parameter.sh
to be or not to be
that is a question
The number of input parameters is: 10
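A common use of $# is driving a loop with shift, which discards $1 and renumbers the rest; a sketch (set -- just fakes some positional parameters for the demo):

```shell
set -- to be or not          # pretend the script received these 4 arguments
while [ $# -gt 0 ]; do
  echo "arg: $1 (remaining: $#)"
  shift                      # drop $1; $# decreases by one
done
```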
3. $* and $@
Both represent all command-line arguments; the former treats them as a single string, while the latter keeps each argument distinct. (See the flow-control section for the exact difference.)
[fish@hadoop101 data]$ vim parameter.sh
#!/bin/bash
echo "$*"
echo "$@"
[fish@hadoop101 data]$ /bin/bash parameter.sh to be or not to be, that is a question
to be or not to be, that is a question
to be or not to be, that is a question
4. $?
The exit status of the last command executed; 0 means the previous command succeeded.
[fish@hadoop101 data]$ unset B
-bash: unset: B: 无法反设定: 只读 variable
[fish@hadoop101 data]$ echo $?
1
[fish@hadoop101 data]$ echo $B
2
[fish@hadoop101 data]$ echo $?
0
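The exit status is exactly what if and the && / || operators react to. A minimal sketch using true and false, which always succeed and fail respectively:

```shell
true                       # exit status 0
if [ $? -eq 0 ]; then
  echo "previous command succeeded"
fi
false                      # non-zero exit status
if [ $? -ne 0 ]; then
  echo "previous command failed"
fi
# Idiomatic short form: test the command itself instead of $? afterwards.
if true; then echo "succeeded"; fi
```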
1. $((expression)) and $[expression]
[fish@hadoop101 data]$ echo $((1+2))
3
[fish@hadoop101 data]$ echo $[2+3]
5
2. expr
Mind the spaces: each expr handles only one operator, the multiplication sign must be escaped as \*, and an intermediate result must be wrapped in backticks to feed it into the next expr. This style is not recommended.
[fish@hadoop101 data]$ expr 1 + 2
3
[fish@hadoop101 data]$ expr `expr 1 + 2` \* 3
9
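The same computation with $(( )) needs no escaping and no backticks, which is why it is preferred over expr:

```shell
echo $(( (1 + 2) * 3 ))   # parentheses and * work unescaped; prints 9
a=5
echo $(( a * 2 ))         # inside (( )), variable names need no leading $
```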
1. test condition
[fish@hadoop101 data]$ test 1 -lt 2
[fish@hadoop101 data]$ echo $?
0
[fish@hadoop101 data]$ test 1 -gt 2
[fish@hadoop101 data]$ echo $?
1
2. [ condition ]
&& runs the following command only if the previous one succeeded; || runs the following command only if the previous one failed.
[fish@hadoop101 data]$ [ -e ~/data/helloworld.txt ] && echo OK
OK
[fish@hadoop101 data]$ echo $?
0
[fish@hadoop101 data]$ [ -e ~/data/helloworld.txt ] && [ ] || echo not OK
not OK
[fish@hadoop101 data]$ echo $?
0
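A few of the commonly used test operators, as a sketch (-eq and friends compare integers, = compares strings, -e/-f/-d inspect files):

```shell
[ 2 -eq 2 ] && echo "numbers equal"        # integer tests: -eq -ne -lt -le -gt -ge
[ "abc" = "abc" ] && echo "strings equal"  # string tests: = and !=
[ -n "abc" ] && echo "non-empty"           # -n non-empty string, -z empty string
[ -d /tmp ] && echo "/tmp is a directory"  # -e exists, -f regular file, -d directory
```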
[fish@hadoop101 data]$ vim ifTest.sh
#!/bin/bash
if [ $1 -eq 0 ]
then
echo "I am a girl."
elif [ $1 -eq 1 ]
then
echo "I am a boy."
fi
if [ $1 -ge 2 ];then
echo "That is a question."
fi
[fish@hadoop101 data]$ /bin/bash ifTest.sh 0
I am a girl.
[fish@hadoop101 data]$ /bin/bash ifTest.sh 1
I am a boy.
[fish@hadoop101 data]$ /bin/bash ifTest.sh 2
That is a question.
[fish@hadoop101 data]$ vim caseTest.sh
#!/bin/bash
case $1 in
"0")
echo "Female"
;;
"1")
echo "Male"
;;
*)
echo "null"
;;
esac
[fish@hadoop101 data]$ chmod 777 caseTest.sh
[fish@hadoop101 data]$ ./caseTest.sh 0
Female
[fish@hadoop101 data]$ ./caseTest.sh 1
Male
[fish@hadoop101 data]$ ./caseTest.sh 2
null
1. Index-based for loop (C-style iteration by subscript)
[fish@hadoop101 data]$ vim forTest.sh
#!/bin/bash
Sum=0
for ((i=1;i<=100;i++))
do
Sum=$[$Sum+$i]
done
echo $Sum
[fish@hadoop101 data]$ bash forTest.sh
5050
2. For-each style loop (like an enhanced for), which also shows the difference between $@ and $*
[fish@hadoop101 data]$ vim forTest.sh
#!/bin/bash
for word in "$*"
do
echo "I was $word"
done
for word in "$@"
do
echo "I am $word."
done
[fish@hadoop101 data]$ bash forTest.sh study eat sleep
I was study eat sleep
I am study.
I am eat.
I am sleep.
[fish@hadoop101 data]$ vim whileTest.sh
#!/bin/bash
sum=0
i=1
while [ $i -le 100 ]
do
sum=$[$sum+$i]
i=$[$i+1]
done
echo $sum
[fish@hadoop101 data]$ bash whileTest.sh
5050
read options:
-p  prompt string
-t  seconds to wait for input
[fish@hadoop101 data]$ vim readTest.sh
#!/bin/bash
read -p "Name: " -t 7 name
echo "My name is $name."
[fish@hadoop101 data]$ bash readTest.sh
Name: fish
My name is fish.
1. basename (strips the leading path)
[fish@hadoop101 data]$ basename $PWD/helloworld.txt
helloworld.txt
[fish@hadoop101 data]$ basename $PWD/helloworld.txt .txt
helloworld
2. dirname (strips the trailing file name)
[fish@hadoop101 data]$ dirname $PWD/helloworld.txt
/home/fish/data
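Both commands operate on the path string alone, so the file does not have to exist; a sketch with a made-up path:

```shell
basename /no/such/dir/app.log        # prints: app.log
basename /no/such/dir/app.log .log   # prints: app   (suffix stripped as well)
dirname  /no/such/dir/app.log        # prints: /no/such/dir
```

A frequent idiom built on this is cd "$(dirname "$0")", which lets a script address files relative to its own location.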
[fish@hadoop101 data]$ vim twoSum.sh
#!/bin/bash
function twoSum
{
echo "$1 + $2 = $[$1+$2]"
}
read -p "first number: " -t 10 n1
read -p "second number: " -t 10 n2
twoSum $n1 $n2
[fish@hadoop101 data]$ bash twoSum.sh
first number: 4
second number: 6
4 + 6 = 10
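A bash function's return statement can only carry a 0-255 exit status, so a computed value is usually passed back by echoing it and capturing the output; a sketch reusing the twoSum idea:

```shell
add() {
  echo $(( $1 + $2 ))    # "return" the value on stdout
}
result=$(add 4 6)        # capture it with command substitution
echo "result=$result"    # prints: result=10
```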
cut options:
-f  field (column) number, starting from 1
-d  delimiter
1. Cut out specified columns
[fish@hadoop101 data]$ vim cut.txt
to be or not to be
that is a question
dong guan
shen zhen
wo lai le
my name is fish
[fish@hadoop101 data]$ cut cut.txt -f 1 -d " "
to
that
dong
shen
wo
my
[fish@hadoop101 data]$ cut cut.txt -f 2,3 -d " "
be or
is a
guan
zhen
lai le
name is
2. Cut out a specified word
[fish@hadoop101 data]$ cat cut.txt | grep that
that is a question
[fish@hadoop101 data]$ cat cut.txt | grep that | cut -d " " -f 1
that
3. The system PATH variable
[fish@hadoop101 data]$ echo $PATH
/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/fish/.local/bin:/home/fish/bin
[fish@hadoop101 data]$ echo $PATH | cut -d ":" -f 2-
/usr/bin:/usr/local/sbin:/usr/sbin:/home/fish/.local/bin:/home/fish/bin
4. Cut ifconfig output to print the IP address
[fish@hadoop101 data]$ ifconfig ens33
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.10.142 netmask 255.255.255.0 broadcast 192.168.10.255
inet6 fe80::6609:f543:593c:4102 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:ee:79:fc txqueuelen 1000 (Ethernet)
RX packets 21178 bytes 1554497 (1.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9561 bytes 1155721 (1.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[fish@hadoop101 data]$ ifconfig ens33 | grep "inet " | cut -d 't' -f 2 | cut -d ' ' -f 2
192.168.10.142
sed (stream editor) commands:
a  append a new line
d  delete lines
s  search and replace
1. Insert a new line after a given line
[fish@hadoop101 data]$ vim sed.txt
Bei jing
Shang hai
Dong guan
[fish@hadoop101 data]$ sed "2a Shen zhen" sed.txt
Bei jing
Shang hai
Shen zhen
Dong guan
[fish@hadoop101 data]$ cat sed.txt
Bei jing
Shang hai
Dong guan
2. Delete lines containing a given string
[fish@hadoop101 data]$ sed "/hai/d" sed.txt
Bei jing
Dong guan
3. Search and replace (out with the old, in with the new)
g means global, i.e. replace every occurrence; without g, only the first match on each line is replaced.
[fish@hadoop101 data]$ sed "s/Dong guan/Cheng du/g" sed.txt
Bei jing
Shang hai
Cheng du
4. Multiple operations
[fish@hadoop101 data]$ sed -e "2d" -e "s/Bei jing/China/g" sed.txt
China
Dong guan
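By default sed only prints the edited text to stdout; to modify the file itself, GNU sed offers the -i flag. A sketch using a throwaway file under /tmp (the path is an example):

```shell
printf 'Bei jing\nShang hai\nDong guan\n' > /tmp/sed-demo.txt
sed -i "s/Dong guan/Cheng du/" /tmp/sed-demo.txt   # edit the file in place
cat /tmp/sed-demo.txt                              # Dong guan is now Cheng du
```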
For the patterns used below, see the regular expressions section.
[fish@hadoop101 data]$ cp /etc/passwd ./
awk options:
-F  specify the input field separator
-v  assign a value to a user-defined variable
1. Search for lines starting with root and print specified columns
Print one column
[fish@hadoop101 data]$ awk -F: '/^root/{print $7}' passwd
/bin/bash
Print two columns with a custom separator
[fish@hadoop101 data]$ awk -F: '/^root/{print $1", " $7}' passwd
root, /bin/bash
2. Add header and footer lines with BEGIN and END
[fish@hadoop101 data]$ awk -F: 'BEGIN{print "user, shell"} {print $1", "$7} END{print "fish, /bin/fish"}' passwd
user, shell
root, /bin/bash
...
fish, /bin/bash
fish, /bin/fish
3. Add 1 to each user id in the passwd file and print it
[fish@hadoop101 data]$ awk -v i=1 -F: '{print $3+i}' passwd
1
2
3
...
75
90
1001
4. Print the passwd file name, and the line number and column count of each line
FILENAME  the current file name
NR  the number of the current record (line)
NF  the number of fields (columns)
[fish@hadoop101 data]$ awk -F: '{print "filename:" FILENAME ", linenumber:" NR ", columns:" NF}' passwd
filename:passwd, linenumber:1, columns:7
filename:passwd, linenumber:2, columns:7
...
filename:passwd, linenumber:18, columns:7
filename:passwd, linenumber:19, columns:7
5. Extract the IP address
[fish@hadoop101 data]$ ifconfig ens33 | grep .*inet.*netmask | awk -F" " '{print $2}'
192.168.10.142
6. Print the line numbers of empty lines
[fish@hadoop101 data]$ cat ifTest.sh
#!/bin/bash
if [ $1 -eq 0 ]
then
echo "I am a girl."
elif [ $1 -eq 1 ]
then
echo "I am a boy."
fi
if [ $1 -ge 2 ];then
echo "That is a question."
fi
[fish@hadoop101 data]$ awk '/^$/{print NR}' ifTest.sh
2
10
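awk can also aggregate across lines, since its variables persist between records; a sketch summing the second colon-separated field of sort.txt-style data:

```shell
printf 'Tom:40:2.4\nAmy:30:4.5\nJim:20:5.0\n' |
  awk -F: '{ sum += $2 } END { print "total:", sum }'
# prints: total: 90
```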
sort sorts a file and writes the result to standard output.
-n  sort numerically
-r  reverse the order
-t  field separator
-k  key (field) to sort by
[fish@hadoop101 data]$ vim sort.txt
Tom:40:2.4
Amy:30:4.5
John:50:3.2
Jim:20:5.0
Ann:35:2.8
[fish@hadoop101 data]$ sort -nr -t: -k2 sort.txt
John:50:3.2
Tom:40:2.4
Ann:35:2.8
Amy:30:4.5
Jim:20:5.0
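Changing -k changes the sort key; for contrast, a sketch sorting the same data ascending by the third field:

```shell
printf 'Tom:40:2.4\nAmy:30:4.5\nJohn:50:3.2\nJim:20:5.0\nAnn:35:2.8\n' |
  sort -t: -k3 -n          # ascending numeric sort on field 3
```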
Regular expressions can be practiced with grep, sed, and awk.
A regular expression containing no special characters matches itself.
[fish@hadoop101 data]$ cat passwd | grep fish
fish:x:1000:1000::/home/fish:/bin/bash
1. ^ matches the beginning of a line
[fish@hadoop101 data]$ cat passwd | grep ^root
root:x:0:0:root:/root:/bin/bash
2. $ matches the end of a line
[fish@hadoop101 data]$ cat passwd | grep bash$
root:x:0:0:root:/root:/bin/bash
fish:x:1000:1000::/home/fish:/bin/bash
^$ matches an empty line!
3. . matches any single character
[fish@hadoop101 data]$ cat passwd | grep r..t
root:x:0:0:root:/root:/bin/bash
operator:x:11:0:operator:/root:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
4. * matches the preceding character zero or more times
[fish@hadoop101 data]$ cat passwd | grep 10*
bin:x:1:1:bin:/bin:/sbin/nologin
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
systemd-network:x:192:192:systemd Network Management:/:/sbin/nologin
dbus:x:81:81:System message bus:/:/sbin/nologin
fish:x:1000:1000::/home/fish:/bin/bash
Observe that 1, 10, 100, and 1000 are all matched successfully.
5. [ ] matches one character from a set or range
[68]  matches 6 or 8
[a-z]  matches one lowercase letter
[a-z]*  matches zero or more lowercase letters, i.e. any lowercase string
[a-ce-f]  matches a, b, c, e, or f
[fish@hadoop101 data]$ cat passwd | grep r[a-c]*t
operator:x:11:0:operator:/root:/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
6. The escape character \ is used to match special characters
[fish@hadoop101 data]$ cat whileTest.sh
#!/bin/bash
sum=0
i=1
while [ $i -le 100 ]
do
sum=$[$sum+$i]
i=$[$i+1]
done
echo $sum
[fish@hadoop101 data]$ cat whileTest.sh | grep \#
#!/bin/bash
[fish@hadoop101 data]$ cat whileTest.sh | grep \$i
while [ $i -le 100 ]
sum=$[$sum+$i]
i=$[$i+1]
Note: escaping as \$ on the command line does not have the intended effect: the shell strips the backslash, so grep receives a bare $, which anchors the end of a line rather than matching a literal dollar sign.
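On the command line, single-quoting the pattern is usually safer than backslashes, because the quotes stop the shell from touching it before grep sees it; a sketch with made-up input:

```shell
printf 'cost is $5\nplain line\n' | grep '\$5'   # quoted \$ reaches grep intact
printf 'end here\nhere we go\n' | grep 'here$'   # $ at pattern end anchors the line
```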