[Repost] An Introduction to the /proc Files


https://www.jianshu.com/p/2610241770be

 

Introduction

The /proc file system is a pseudo file system: it exists only in memory and takes up no space on disk. It provides a file-system-style interface for accessing kernel data. Users and applications can obtain system information through /proc and can also change certain kernel parameters. Because system information, such as the set of processes, changes constantly, the proc file system reads the required information from the kernel and produces it on the fly each time a proc file is read.

The /proc directory on a Linux system is a file system in its own right, the proc file system. Unlike other common file systems, /proc is a pseudo (i.e. virtual) file system: it holds a series of special files that reflect the current state of the running kernel. Through these files users can view information about the system hardware and the currently running processes, and by modifying some of them they can even change the kernel's runtime behavior.

Because of this special nature of /proc, its files are often called virtual files and have some unusual properties. For example, some of them return a large amount of information when viewed, yet their own size is reported as 0 bytes. In addition, the timestamps of most of these special files are simply the current system date and time, which reflects the fact that they are refreshed constantly (they live in RAM).

For convenience of viewing and use, these files are grouped by topic into different directories and subdirectories. For example, /proc/scsi holds information about all SCSI devices on the system, and /proc/N holds information about a currently running process, where N is its PID (as you would expect, the directory disappears once the process exits).
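
As a minimal illustration, the per-process directory of the current shell can be inspected like this ($$ expands to the shell's PID; the exact set of entries varies by kernel version):

[root@localhost ~]# ls /proc/$$                        # per-process entries: cmdline, status, fd, maps, ...
[root@localhost ~]# tr '\0' ' ' < /proc/$$/cmdline     # the process's command line (NUL-separated in the raw file)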

Most virtual files can be viewed with the usual file-viewing commands such as cat, more or less. Some of them are self-explanatory, while others are far less readable; those are better consumed through tools such as apm, free, lspci or top, which present the same information in a friendlier form.

The reported size is 0, as shown below:

[root@localhost ~]# ls -lh / |grep proc

dr-xr-xr-x. 653 root root    0 Aug 26 16:27 proc
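
The same holds for individual virtual files: stat reports a size of 0 bytes, yet reading the file returns real data (the byte count from wc will vary from system to system):

[root@localhost ~]# stat -c '%s' /proc/cpuinfo
0
[root@localhost ~]# wc -c < /proc/cpuinfo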

File list

 

/proc/acpi

/proc/buddyinfo

A file with information useful for diagnosing memory fragmentation problems:

# cat /proc/buddyinfo

Node 0, zone      DMA      0      0      1      1      1      0      0      0      1      1      3

Node 0, zone    DMA32    372    207    80    111    39    13      2      2      0      3    591

Node 0, zone  Normal    227    39    33    88    446    289    126    59    18    34  10805

Node 1, zone  Normal    101    88  1114  1873  1590  1031    820    652    691    694  10362
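
Each numeric column counts free blocks of order 0 through 10, i.e. runs of 2^order contiguous pages. A rough per-zone free-memory estimate, assuming the usual 4 KiB page size, is then:

# awk '{ f = 0; for (i = 5; i <= NF; i++) f += $i * 2^(i-5); printf "%s %s %-8s %10.1f MiB free\n", $1, $2, $4, f * 4 / 1024 }' /proc/buddyinfo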

/proc/cmdline

This file shows the command line that the kernel was booted with. It is very similar to the per-process cmdline entry, for example:

[root@localhost ~]# cat /proc/cmdline

BOOT_IMAGE=/vmlinuz-3.10.0-862.el7.x86_64 root=/dev/mapper/rhel00-root ro crashkernel=auto noapic rd.lvm.lv=rhel00/root biosdevname=1 rd.lvm.lv=rhel00/swap rhgb quiet

/proc/cpuinfo

Overview

On Linux, the files under /proc expose the system's software and hardware information. To find out the CPU vendor and its configuration, look at /proc/cpuinfo. The file has many fields, though, and is not easy to read at a glance. Suppose we want to know how many physical CPUs there are, how many cores each physical CPU has, and whether hyper-threading is enabled; the sections below work through that step by step.

Concepts

① Number of physical CPUs (physical id): the number of CPU sockets actually populated on the motherboard; count the distinct physical id values.

② Number of CPU cores (cpu cores): the number of execution cores on a single physical CPU, e.g. dual-core or quad-core.

③ Number of logical CPUs: in general,

  logical CPUs = number of physical CPUs × cores per CPU        # hyper-threading not supported or not enabled

  logical CPUs = number of physical CPUs × cores per CPU × 2    # the CPUs support hyper-threading (roughly speaking, each physical core behaves like two cores from the operating system's point of view, doubling the execution resources available to it and improving overall throughput)

Field-by-field explanation

The fields that appear in the /proc/cpuinfo output are explained below.

processor : index of the logical processor, numbered from 0.

vendor_id : the CPU manufacturer.

cpu family : the CPU product family code.

model : code for the generation within that family.

model name : the CPU's name, model number and nominal clock frequency.

stepping : the manufacturing revision (stepping) of the CPU.

cpu MHz : the frequency the CPU is actually running at.

cache size : the size of the CPU's L2 cache.

physical id : identifier of the physical CPU (socket).

siblings : the number of logical CPUs on one physical CPU; siblings = cpu cores [× 2 with hyper-threading].

core id : identifier of the current physical core within its CPU; not necessarily contiguous.

cpu cores : the number of physical cores on the CPU this logical core belongs to. For instance, if cpu cores is 4, the corresponding core id values might be 1, 3, 4 and 5.

apicid : identifier used to distinguish logical cores; it is unique for every logical core in the system, but not necessarily contiguous.

fpu : whether the CPU has a Floating Point Unit.

fpu_exception : whether floating point exceptions are supported.

cpuid level : the value placed in the eax register before executing the cpuid instruction; cpuid returns different information depending on this value.

wp : whether the CPU honors write protection on user-space pages while running in kernel mode (Write Protection).

flags : the features supported by this CPU.

bogomips : a rough measurement of the CPU's speed made by the kernel at boot time (BogoMIPS, a "bogus" Millions of Instructions Per Second figure).

clflush size : the amount of data flushed per cache-flush operation.

cache_alignment : the cache address alignment unit.

address sizes : the number of address bits (physical and virtual) the CPU can access.

power management : supported power management features.

Quick queries for the information you want

Find out how many physical CPUs the system has

[root@localhost ~]# cat /proc/cpuinfo |grep "physical id"|sort|uniq

physical id    : 0

physical id    : 1

Find out how many cores each physical CPU has

[root@localhost ~]# cat /proc/cpuinfo |grep "cpu cores"|uniq     

cpu cores      : 16

Check whether hyper-threading is enabled on each CPU; with hyper-threading, each physical core presents two logical processors.

[root@localhost ~]# cat /proc/cpuinfo |grep -e "cpu cores" -e "siblings"|sort|uniq

cpu cores      : 16

siblings        : 32

Find out how many logical CPUs the system has

[root@localhost ~]# cat /proc/cpuinfo|grep "processor"|wc -l

64
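
Putting the above queries together, a small script (a sketch assuming the usual x86 layout of /proc/cpuinfo) prints the socket, core and logical-CPU counts and whether hyper-threading is active:

#!/bin/bash
# Summarize CPU topology from /proc/cpuinfo
sockets=$(grep "physical id" /proc/cpuinfo | sort -u | wc -l)
cores=$(awk '/cpu cores/ {print $NF; exit}' /proc/cpuinfo)
siblings=$(awk '/siblings/ {print $NF; exit}' /proc/cpuinfo)
logical=$(grep -c "^processor" /proc/cpuinfo)
echo "physical CPUs   : $sockets"
echo "cores per CPU   : $cores"
echo "logical CPUs    : $logical"
if [ "$siblings" -gt "$cores" ]; then
    echo "hyper-threading : enabled"
else
    echo "hyper-threading : disabled"
fi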

Many of the virtual files in the /proc directory contain information about hardware and configurations. Here are some of them

/proc/devices

This file displays the various character and block devices currently configured (not including devices whose modules are not loaded).

/proc/dma

# cat /proc/dma

4: cascade

/proc/driver

This directory contains information for specific drivers in use by the kernel.

A common file found here is rtc which provides output from the driver for the system's Real Time Clock (RTC), the device that keeps the time while the system is switched off. Sample output from /proc/driver/rtc looks like the following:

# cat /proc/driver/rtc

rtc_time        : 12:07:17

rtc_date        : 2022-09-05

alrm_time      : 07:31:36

alrm_date      : 2022-08-27

alarm_IRQ      : no

alrm_pending    : no

update IRQ enabled      : no

periodic IRQ enabled    : no

periodic IRQ frequency  : 1024

max user IRQ frequency  : 64

24hr            : yes

periodic_IRQ    : no

update_IRQ      : no

HPET_emulated  : yes

BCD            : yes

DST_enable      : no

periodic_freq  : 1024

batt_status    : okay

For more information about the RTC, see the following installed documentation:

/usr/share/doc/kernel-doc-<kernel_version>/Documentation/rtc.txt

# find / -name  rtc

/dev/rtc

/proc/driver/rtc

/sys/devices/pnp0/00:00/rtc

/sys/class/rtc

/usr/lib/modules/3.10.0-862.el7.x86_64/kernel/drivers/rtc

/usr/src/kernels/3.10.0-862.el7.x86_64/drivers/rtc

/usr/src/kernels/3.10.0-862.el7.x86_64/include/config/rtc

/usr/src/kernels/3.10.0-862.el7.x86_64/include/linux/rtc

/proc/filesystems

This file displays a list of the file system types currently supported by the kernel. Sample output from a generic /proc/filesystems file looks similar to the following:

# cat  /proc/filesystems

nodev  sysfs

nodev  rootfs

nodev  ramfs

nodev  bdev

nodev  proc

nodev  cgroup

nodev  cpuset

nodev  tmpfs

nodev  devtmpfs

nodev  debugfs

nodev  securityfs

nodev  sockfs

nodev  dax

nodev  pipefs

nodev  anon_inodefs

nodev  configfs

nodev  devpts

nodev  hugetlbfs

nodev  autofs

nodev  pstore

nodev  efivarfs

nodev  mqueue

nodev  selinuxfs

nodev  resctrl

        xfs

nodev  rpc_pipefs

        vfat

nodev  binfmt_misc

# df -Th

Filesystem              Type      Size  Used Avail Use% Mounted on

/dev/mapper/rhel00-root xfs        50G  16G  35G  32% /

devtmpfs                devtmpfs  47G    0  47G  0% /dev

tmpfs                  tmpfs      47G  4.0K  47G  1% /dev/shm

tmpfs                  tmpfs      47G  20M  47G  1% /run

tmpfs                  tmpfs      47G    0  47G  0% /sys/fs/cgroup

/dev/sdd2              xfs      1014M  162M  853M  16% /boot

/dev/sdd1              vfat      200M  9.8M  191M  5% /boot/efi

/dev/mapper/rhel00-home xfs      504G  55M  504G  1% /home

tmpfs                  tmpfs    9.4G  12K  9.4G  1% /run/user/42

tmpfs                  tmpfs    9.4G    0  9.4G  0% /run/user/0

The first column signifies whether the file system is mounted on a block device. Those beginning with nodev are not mounted on a device. The second column lists the names of the file systems supported.

The mount command cycles through the file systems listed here when one is not specified as an argument.
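
For example, the disk-backed types in the sample above can be isolated by filtering out the nodev lines; these are the types mount will try when none is specified:

# grep -v nodev /proc/filesystems
        xfs
        vfat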

/proc/fs

This directory shows which file systems are exported. If running an NFS server, typing cat /proc/fs/nfsd/exports displays the file systems being shared and the permissions granted for those file systems. For more on file system sharing with NFS, see the Network File System (NFS) chapter of the Storage Administration Guide.

[root@localhost ~]# ls -l /proc/fs

total 0

dr-xr-xr-x. 2 root root 2 Sep  5 20:22 nfsd

dr-xr-xr-x. 2 root root 0 Sep  5 20:22 xfs

[root@localhost ~]# ls -l /proc/fs/xfs

total 0

lrwxrwxrwx. 1 root root 23 Sep  5 20:22 stat -> /sys/fs/xfs/stats/stats

-r--r--r--. 1 root root  0 Sep  5 20:22 xqm

-r--r--r--. 1 root root  0 Sep  5 20:22 xqmstat

[root@localhost ~]# ls -l /proc/fs/nfsd

total 0

/proc/interrupts

/proc/iomem

This file shows you the current map of the system's memory for each physical device:

# cat /proc/iomem

00000000-00000fff : reserved

00001000-0009ffff : System RAM

000a0000-000fffff : reserved

  000a0000-000bffff : PCI Bus 0000:00

  000c4000-000c7fff : PCI Bus 0000:00

  000f0000-000fffff : System ROM

00100000-5f23f017 : System RAM

  2d000000-375fffff : Crash kernel

5f23f018-5f276c57 : System RAM

5f276c58-5f277017 : System RAM

5f277018-5f2aec57 : System RAM

5f2aec58-5f2af017 : System RAM

5f2af018-5f2e6c57 : System RAM

5f2e6c58-5f2e7017 : System RAM

5f2e7018-5f31ec57 : System RAM

5f31ec58-5f31f017 : System RAM

5f31f018-5f34ec57 : System RAM

5f34ec58-5f34f017 : System RAM

5f34f018-5f357057 : System RAM

5f357058-5f358017 : System RAM

5f358018-5f35f657 : System RAM

5f35f658-a5c03fff : System RAM

a5c04000-a61dffff : ACPI Tables

a61e0000-a6dd5fff : ACPI Non-volatile Storage

a6dd6000-a7e40fff : System RAM

a7e41000-aaf1dfff : reserved

  a7f8f018-a7f8f018 : APEI ERST

  a7f8f01c-a7f8f021 : APEI ERST

  a7f8f028-a7f8f039 : APEI ERST

  a7f8f040-a7f8f04c : APEI ERST

  a7f8f050-a7f9104f : APEI ERST

aaf1e000-afbebfff : System RAM

afbec000-afc1dfff : reserved

afc1e000-afffffff : System RAM

b0000000-cfffffff : reserved

  c0000000-cfffffff : PCI MMCONFIG 0000 [bus 00-ff]

d0000000-d1ffffff : PCI Bus 0000:00

  d0000000-d1bfffff : PCI Bus 0000:01

    d0000000-d1afffff : PCI Bus 0000:02

      d0000000-d0ffffff : 0000:02:00.0

        d0000000-d0ffffff : mgadrmfb_vram

      d1000000-d17fffff : 0000:02:00.0

      d1a00000-d1a0ffff : 0000:02:00.0

      d1a10000-d1a13fff : 0000:02:00.0

        d1a10000-d1a13fff : mgadrmfb_mmio

    d1b00000-d1b00fff : 0000:01:00.0

  d1c00000-d1c7ffff : 0000:00:17.0

    d1c00000-d1c7ffff : ahci

  d1c80000-d1cfffff : 0000:00:11.5

    d1c80000-d1cfffff : ahci

  d1d10000-d1d13fff : 0000:00:1f.2

  d1d14000-d1d15fff : 0000:00:17.0

    d1d14000-d1d15fff : ahci

  d1d16000-d1d17fff : 0000:00:11.5

    d1d16000-d1d17fff : ahci

  d1d18000-d1d180ff : 0000:00:17.0

    d1d18000-d1d180ff : ahci

  d1d19000-d1d190ff : 0000:00:11.5

    d1d19000-d1d190ff : ahci

  d1d1a000-d1d1afff : 0000:00:05.4

d2000000-d2ffffff : PCI Bus 0000:05

  d2d00000-d2efffff : PCI Bus 0000:06

    d2d00000-d2dfffff : 0000:06:00.0

      d2d00000-d2dfffff : mpt3sas

    d2e00000-d2e3ffff : 0000:06:00.0

  d2f00000-d2f00fff : 0000:05:05.4

d3000000-d33fffff : PCI Bus 0000:2e

  d3300000-d3300fff : 0000:2e:05.4

d3400000-e3ffffff : PCI Bus 0000:57

  e3b00000-e3efffff : PCI Bus 0000:58

    e3b00000-e3bfffff : 0000:58:00.0

    e3c00000-e3dfffff : PCI Bus 0000:59

      e3c00000-e3dfffff : PCI Bus 0000:5a

        e3c00000-e3c7ffff : 0000:5a:00.3

        e3c80000-e3cfffff : 0000:5a:00.2

        e3d00000-e3d7ffff : 0000:5a:00.1

        e3d80000-e3dfffff : 0000:5a:00.0

    e3e00000-e3e1ffff : 0000:58:00.0

  e3f00000-e3f00fff : 0000:57:05.4

e4000000-e8ffffff : PCI Bus 0000:80

  e8f00000-e8f00fff : 0000:80:05.4

e9000000-edffffff : PCI Bus 0000:85

  edb00000-edbfffff : PCI Bus 0000:89

  edc00000-edcfffff : PCI Bus 0000:88

  edd00000-eddfffff : PCI Bus 0000:87

  ede00000-edefffff : PCI Bus 0000:86

  edf00000-edf00fff : 0000:85:05.4

ee000000-f2ffffff : PCI Bus 0000:ae

  f2f00000-f2f00fff : 0000:ae:05.4

f3000000-f7ffffff : PCI Bus 0000:d7

  f7f00000-f7f00fff : 0000:d7:05.4

fd000000-fe7fffff : reserved

  fd000000-fdabffff : pnp 00:08

  fdad0000-fdadffff : pnp 00:08

  fdb00000-fdffffff : pnp 00:08

    fdc6000c-fdc6000f : iTCO_wdt

  fe000000-fe00ffff : pnp 00:08

  fe010000-fe010fff : PCI Bus 0000:00

    fe010000-fe010fff : 0000:00:1f.5

  fe011000-fe01ffff : pnp 00:08

  fe036000-fe03bfff : pnp 00:08

  fe03d000-fe3fffff : pnp 00:08

  fe410000-fe7fffff : pnp 00:08

fec00000-fecfffff : PNP0003:00

fed00000-fed003ff : HPET 0

  fed00000-fed003ff : PNP0103:00

fed12000-fed1200f : pnp 00:01

fed12010-fed1201f : pnp 00:01

fed1b000-fed1bfff : pnp 00:01

fed20000-fed44fff : reserved

fed45000-fed8bfff : pnp 00:01

fee00000-feefffff : pnp 00:01

  fee00000-fee00fff : Local APIC

ff000000-ffffffff : reserved

  ff000000-ffffffff : pnp 00:01

100000000-183fffffff : System RAM

  1493600000-1493d27306 : Kernel code

  1493d27307-149434f8bf : Kernel data

  1494522000-149482cfff : Kernel bss

20000000000-20fffffffff : PCI Bus 0000:00

  20000000000-200000000ff : 0000:00:1f.4

  20ffff00000-20ffff0ffff : 0000:00:14.0

    20ffff00000-20ffff0ffff : xhci-hcd

  20ffff10000-20ffff13fff : 0000:00:04.7

  20ffff14000-20ffff17fff : 0000:00:04.6

  20ffff18000-20ffff1bfff : 0000:00:04.5

  20ffff1c000-20ffff1ffff : 0000:00:04.4

  20ffff20000-20ffff23fff : 0000:00:04.3

  20ffff24000-20ffff27fff : 0000:00:04.2

  20ffff28000-20ffff2bfff : 0000:00:04.1

  20ffff2c000-20ffff2ffff : 0000:00:04.0

  20ffff30000-20ffff30fff : 0000:00:16.4

  20ffff31000-20ffff31fff : 0000:00:16.1

  20ffff32000-20ffff32fff : 0000:00:16.0

    20ffff32000-20ffff32fff : mei_me

  20ffff33000-20ffff33fff : 0000:00:14.2

21000000000-21fffffffff : PCI Bus 0000:05

  21fffe00000-21fffffffff : PCI Bus 0000:06

    21fffe00000-21fffefffff : 0000:06:00.0

      21fffe00000-21fffefffff : mpt3sas

    21ffff00000-21fffffffff : 0000:06:00.0

      21ffff00000-21fffffffff : mpt3sas

22000000000-22fffffffff : PCI Bus 0000:2e

23000000000-23fffffffff : PCI Bus 0000:57

  23ffb000000-23fff0fffff : PCI Bus 0000:58

    23ffb000000-23fff0fffff : PCI Bus 0000:59

      23ffb000000-23fff0fffff : PCI Bus 0000:5a

        23ffb000000-23ffbffffff : 0000:5a:00.3

          23ffb000000-23ffbffffff : i40e

        23ffc000000-23ffcffffff : 0000:5a:00.2

          23ffc000000-23ffcffffff : i40e

        23ffd000000-23ffdffffff : 0000:5a:00.1

          23ffd000000-23ffdffffff : i40e

        23ffe000000-23ffeffffff : 0000:5a:00.0

          23ffe000000-23ffeffffff : i40e

        23fff000000-23fff007fff : 0000:5a:00.3

          23fff000000-23fff007fff : i40e

        23fff008000-23fff00ffff : 0000:5a:00.2

          23fff008000-23fff00ffff : i40e

        23fff010000-23fff017fff : 0000:5a:00.1

          23fff010000-23fff017fff : i40e

        23fff018000-23fff01ffff : 0000:5a:00.0

          23fff018000-23fff01ffff : i40e

24000000000-24fffffffff : PCI Bus 0000:80

  24ffff00000-24ffff03fff : 0000:80:04.7

  24ffff04000-24ffff07fff : 0000:80:04.6

  24ffff08000-24ffff0bfff : 0000:80:04.5

  24ffff0c000-24ffff0ffff : 0000:80:04.4

  24ffff10000-24ffff13fff : 0000:80:04.3

  24ffff14000-24ffff17fff : 0000:80:04.2

  24ffff18000-24ffff1bfff : 0000:80:04.1

  24ffff1c000-24ffff1ffff : 0000:80:04.0

25000000000-25fffffffff : PCI Bus 0000:85

  25fffc00000-25fffcfffff : PCI Bus 0000:89

  25fffd00000-25fffdfffff : PCI Bus 0000:88

  25fffe00000-25fffefffff : PCI Bus 0000:87

  25ffff00000-25fffffffff : PCI Bus 0000:86

26000000000-26fffffffff : PCI Bus 0000:ae

27000000000-27fffffffff : PCI Bus 0000:d7

/proc/ioports

The output of /proc/ioports provides a list of currently registered port regions used for input or output communication with a device. This file can be quite long.

/proc/kmsg

This file is used to hold messages generated by the kernel. These messages are then picked up by other programs, such as /sbin/klogd or /bin/dmesg.

/proc/loadavg

This file provides a look at the load average in regard to both the CPU and IO over time, as well as additional data used by uptime and other commands. A sample /proc/loadavg file looks similar to the following:

# cat /proc/loadavg

0.00 0.01 0.05 1/934 408931

[root@localhost ~]# uptime

20:39:36 up 10 days,  4:12,  1 user,  load average: 0.00, 0.01, 0.05

The first three columns measure CPU and IO utilization of the last one, five, and 15 minute periods. The fourth column shows the number of currently running processes and the total number of processes. The last column displays the last process ID used.

In addition, load average also refers to the number of processes ready to run (i.e. in the run queue, waiting for a CPU share).
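
A one-line awk sketch labels the fields described above (output matching the sample shown earlier):

# awk '{ printf "1min %s  5min %s  15min %s  running/total %s  last PID %s\n", $1, $2, $3, $4, $5 }' /proc/loadavg
1min 0.00  5min 0.01  15min 0.05  running/total 1/934  last PID 408931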

/proc/mdstat

This file contains the current information for multiple-disk, RAID configurations. If the system does not contain such a configuration, then /proc/mdstat looks similar to the following:

Personalities :
read_ahead not set
unused devices: <none>

This file remains in the same state as seen above unless a software RAID or md device is present. In that case, view /proc/mdstat to find the current status of mdX RAID devices.

The /proc/mdstat file below shows a system with its md0 configured as a RAID 1 device, while it is currently re-syncing the disks:

Personalities : [linear] [raid1] read_ahead 1024 sectors

md0: active raid1 sda2[1] sdb2[0] 9940 blocks [2/2] [UU] resync=1% finish=12.3min algorithm 2 [3/3] [UUU]

unused devices: <none>

/proc/meminfo

This is one of the more commonly used files in the /proc/ directory, as it reports a large amount of valuable information about the system's RAM usage.

The following sample /proc/meminfo virtual file is from a system with 2 GB of RAM and 1 GB of swap space:

While the file shows kilobytes (kB; 1 kB equals 1000 B), it is actually kibibytes (KiB; 1 KiB equals 1024 B). This imprecision in /proc/meminfo is known, but is not corrected due to legacy concerns - programs rely on /proc/meminfo to specify size with the "kB" string.

Much of the information in /proc/meminfo is used by the free, top, and ps commands. In fact, the output of the free command is similar in appearance to the contents and structure of /proc/meminfo. However, /proc/meminfo itself has more details:

MemTotal — Total amount of usable RAM, in kibibytes, which is physical RAM minus a number of reserved bits and the kernel binary code.

MemFree — The amount of physical RAM, in kibibytes, left unused by the system.

Buffers — The amount, in kibibytes, of temporary storage for raw disk blocks.

Cached — The amount of physical RAM, in kibibytes, used as cache memory.

SwapCached — The amount of memory, in kibibytes, that has once been moved into swap, then back into the main memory, but still also remains in the swapfile. This saves I/O, because the memory does not need to be moved into swap again.

Active — The amount of memory, in kibibytes, that has been used more recently and is usually not reclaimed unless absolutely necessary.

Inactive — The amount of memory, in kibibytes, that has been used less recently and is more eligible to be reclaimed for other purposes.

Active(anon) — The amount of anonymous and tmpfs/shmem memory, in kibibytes, that is in active use, or was in active use since the last time the system moved something to swap.

Inactive(anon) — The amount of anonymous and tmpfs/shmem memory, in kibibytes, that is a candidate for eviction.

Active(file) — The amount of file cache memory, in kibibytes, that is in active use, or was in active use since the last time the system reclaimed memory.

Inactive(file) — The amount of file cache memory, in kibibytes, that is newly loaded from the disk, or is a candidate for reclaiming.

Unevictable — The amount of memory, in kibibytes, discovered by the pageout code, that is not evictable because it is locked into memory by user programs.

Mlocked — The total amount of memory, in kibibytes, that is not evictable because it is locked into memory by user programs.

SwapTotal — The total amount of swap available, in kibibytes.

SwapFree — The total amount of swap free, in kibibytes.

Dirty — The total amount of memory, in kibibytes, waiting to be written back to the disk.

Writeback — The total amount of memory, in kibibytes, actively being written back to the disk.

AnonPages — The total amount of memory, in kibibytes, used by pages that are not backed by files and are mapped into userspace page tables.

Mapped — The memory, in kibibytes, used for files that have been mmaped, such as libraries.

Shmem — The total amount of memory, in kibibytes, used by shared memory (shmem) and tmpfs.

Slab — The total amount of memory, in kibibytes, used by the kernel to cache data structures for its own use.

SReclaimable — The part of Slab that can be reclaimed, such as caches.

SUnreclaim — The part of Slab that cannot be reclaimed even when lacking memory.

KernelStack — The amount of memory, in kibibytes, used by the kernel stack allocations done for each task in the system.

PageTables — The total amount of memory, in kibibytes, dedicated to the lowest page table level.

NFS_Unstable — The amount, in kibibytes, of NFS pages sent to the server but not yet committed to the stable storage.

Bounce — The amount of memory, in kibibytes, used for the block device "bounce buffers".

WritebackTmp — The amount of memory, in kibibytes, used by FUSE for temporary writeback buffers.

CommitLimit — The total amount of memory currently available to be allocated on the system based on the overcommit ratio (vm.overcommit_ratio). This limit is only adhered to if strict overcommit accounting is enabled (mode 2 in vm.overcommit_memory). CommitLimit is calculated with the following formula:

CommitLimit = ([total RAM pages] - [total huge TLB pages]) * overcommit_ratio / 100 + [total swap pages]

For example, on a system with 1 GB of physical RAM and 7 GB of swap and a vm.overcommit_ratio of 30, this yields a CommitLimit of 7.3 GB (a scripted check of the formula appears after this field list).

Committed_AS — The total amount of memory, in kibibytes, estimated to complete the workload. This value represents the worst case scenario value, and also includes swap memory.

VMallocTotal — The total amount of memory, in kibibytes, of total allocated virtual address space.

VMallocUsed — The total amount of memory, in kibibytes, of used virtual address space.

VMallocChunk — The largest contiguous block of memory, in kibibytes, of available virtual address space.

HardwareCorrupted — The amount of memory, in kibibytes, with physical memory corruption problems, identified by the hardware and set aside by the kernel so it does not get used.

AnonHugePages — The total amount of memory, in kibibytes, used by huge pages that are not backed by files and are mapped into userspace page tables.

HugePages_Total — The total number of hugepages for the system, i.e. the memory set aside for huge pages divided by Hugepagesize. This statistic only appears on the x86, Itanium, and AMD64 architectures.

HugePages_Free — The total number of hugepages available for the system. This statistic only appears on the x86, Itanium, and AMD64 architectures.

HugePages_Rsvd — The number of unused huge pages reserved for hugetlbfs.

HugePages_Surp — The number of surplus huge pages.

Hugepagesize — The size for each hugepages unit in kibibytes. By default, the value is 4096 KB on uniprocessor kernels for 32 bit architectures. For SMP, hugemem kernels, and AMD64, the default is 2048 KB. For Itanium architectures, the default is 262144 KB. This statistic only appears on the x86, Itanium, and AMD64 architectures.

DirectMap4k — The amount of memory, in kibibytes, mapped into kernel address space with 4 kB page mappings.

DirectMap2M — The amount of memory, in kibibytes, mapped into kernel address space with 2 MB page mappings.
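
As a quick sanity check of the CommitLimit formula given above, the value reported in /proc/meminfo can be compared with one computed from MemTotal, SwapTotal and vm.overcommit_ratio. This sketch ignores the huge-page term, so the two numbers will differ slightly on systems with huge pages reserved:

#!/bin/bash
# Compare the reported CommitLimit with the value the formula predicts (huge pages ignored)
ratio=$(cat /proc/sys/vm/overcommit_ratio)
awk -v r="$ratio" '
    /^MemTotal:/    { mem  = $2 }
    /^SwapTotal:/   { swap = $2 }
    /^CommitLimit:/ { lim  = $2 }
    END { printf "computed : %.0f kB\nreported : %d kB\n", mem * r / 100 + swap, lim }
' /proc/meminfo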

/proc/modules

This file displays a list of all modules loaded into the kernel. Its contents vary based on the configuration and use of your system, but it should be organized in a similar manner to this sample /proc/modules file output:

# cat /proc/modules |grep -iE "sas|i40e|scsi"

iscsi_target_mod 291661 1 ib_isert, Live 0xffffffffc06c2000

libiscsi 57233 1 ib_iser, Live 0xffffffffc056c000

scsi_transport_iscsi 99909 2 ib_iser,libiscsi, Live 0xffffffffc05d9000

target_core_mod 340729 3 ib_isert,iscsi_target_mod,ib_srpt, Live 0xffffffffc0584000

scsi_transport_srp 20993 1 ib_srp, Live 0xffffffffc0488000

scsi_tgt 20027 1 scsi_transport_srp, Live 0xffffffffc047d000

mpt3sas 357803 3 - Live 0xffffffffc02e0000 (OE)

i40e 464708 0 - Live 0xffffffffc0114000 (OE)

ptp 19231 1 i40e, Live 0xffffffffc00b9000

raid_class 13554 1 mpt3sas, Live 0xffffffffc005c000

scsi_transport_sas 41224 2 ses,mpt3sas, Live 0xffffffffc0073000

The first column contains the name of the module.

The second column refers to the memory size of the module, in bytes.

The third column lists how many instances of the module are currently loaded. A value of zero represents an unloaded module.

The fourth column states if the module depends upon another module to be present in order to function, and lists those other modules.

The fifth column lists what load state the module is in: Live, Loading, or Unloading are the only possible values.

The sixth column lists the current kernel memory offset for the loaded module. This information can be useful for debugging purposes, or for profiling tools such as oprofile.
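
For instance, a short awk sketch pulls out the name, size and reference count from those columns and lists the five largest loaded modules:

# awk '{ printf "%-24s %10d bytes  refs %s\n", $1, $2, $3 }' /proc/modules | sort -k2 -nr | head -5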

/proc/mounts

This file provides a list of all mounts in use by the system:

# cat /proc/mounts

rootfs / rootfs rw 0 0

sysfs /sys sysfs rw,seclabel,nosuid,nodev,noexec,relatime 0 0

proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0

devtmpfs /dev devtmpfs rw,seclabel,nosuid,size=49206472k,nr_inodes=12301618,mode=755 0 0

securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0

tmpfs /dev/shm tmpfs rw,seclabel,nosuid,nodev 0 0

devpts /dev/pts devpts rw,seclabel,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0

tmpfs /run tmpfs rw,seclabel,nosuid,nodev,mode=755 0 0

tmpfs /sys/fs/cgroup tmpfs ro,seclabel,nosuid,nodev,noexec,mode=755 0 0

cgroup /sys/fs/cgroup/systemd cgroup rw,seclabel,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0

pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0

efivarfs /sys/firmware/efi/efivars efivarfs rw,nosuid,nodev,noexec,relatime 0 0

cgroup /sys/fs/cgroup/devices cgroup rw,seclabel,nosuid,nodev,noexec,relatime,devices 0 0

cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,seclabel,nosuid,nodev,noexec,relatime,net_prio,net_cls 0 0

cgroup /sys/fs/cgroup/blkio cgroup rw,seclabel,nosuid,nodev,noexec,relatime,blkio 0 0

cgroup /sys/fs/cgroup/hugetlb cgroup rw,seclabel,nosuid,nodev,noexec,relatime,hugetlb 0 0

cgroup /sys/fs/cgroup/memory cgroup rw,seclabel,nosuid,nodev,noexec,relatime,memory 0 0

cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,seclabel,nosuid,nodev,noexec,relatime,cpuacct,cpu 0 0

cgroup /sys/fs/cgroup/pids cgroup rw,seclabel,nosuid,nodev,noexec,relatime,pids 0 0

cgroup /sys/fs/cgroup/cpuset cgroup rw,seclabel,nosuid,nodev,noexec,relatime,cpuset 0 0

cgroup /sys/fs/cgroup/perf_event cgroup rw,seclabel,nosuid,nodev,noexec,relatime,perf_event 0 0

cgroup /sys/fs/cgroup/freezer cgroup rw,seclabel,nosuid,nodev,noexec,relatime,freezer 0 0

configfs /sys/kernel/config configfs rw,relatime 0 0

/dev/mapper/rhel00-root / xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0

selinuxfs /sys/fs/selinux selinuxfs rw,relatime 0 0

systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=33,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=16627 0 0

debugfs /sys/kernel/debug debugfs rw,relatime 0 0

mqueue /dev/mqueue mqueue rw,seclabel,relatime 0 0

hugetlbfs /dev/hugepages hugetlbfs rw,seclabel,relatime 0 0

/dev/sdd2 /boot xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0

/dev/sdd1 /boot/efi vfat rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro 0 0

/dev/mapper/rhel00-home /home xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0

sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0

tmpfs /run/user/42 tmpfs rw,seclabel,nosuid,nodev,relatime,size=9844752k,mode=700,uid=42,gid=42 0 0

binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0

tmpfs /run/user/0 tmpfs rw,seclabel,nosuid,nodev,relatime,size=9844752k,mode=700 0 0

/proc/partitions

This file contains partition block allocation information. A sampling of this file from a basic system looks similar to the following:

# cat /proc/partitions

major minor  #blocks  name

  8        0  586061784 sda

  8        1  586059776 sda1

  8      16  586061784 sdb

  8      17  586059776 sdb1

  8      32  586061784 sdc

  8      33    102400 sdc1

  8      37    4193280 sdc5

  8      38    4193280 sdc6

  8      39  125724672 sdc7

  8      40  451843015 sdc8

  8      48  586061784 sdd

  8      49    204800 sdd1

  8      50    1048576 sdd2

  8      51  584806400 sdd3

  8      64  586061784 sde

  8      65  586059776 sde1

  8      80  586061784 sdf

253        0  52428800 dm-0

253        1    4194304 dm-1

253        2  528179200 dm-2

Most of the information here is of little importance to the user, except for the following columns:

major — The major number of the device with this partition. The major number maps to a device type in /proc/devices; in the sample above, major number 8 corresponds to SCSI/SATA (sd) block devices.

minor — The minor number of the device with this partition. This serves to separate the partitions into different physical devices and relates to the number at the end of the name of the partition.

#blocks — Lists the number of physical disk blocks contained in a particular partition.

name — The name of the partition.
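
On current kernels the #blocks column is in 1 KiB units, so a quick awk sketch (skipping the header lines) can print each entry with an approximate size in GiB:

# awk 'NR > 2 { printf "%-6s %10.1f GiB\n", $4, $3 / 1048576 }' /proc/partitions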

/proc/scsi

The primary file in this directory is /proc/scsi/scsi, which contains a list of every recognized SCSI device. From this listing, the type of device, as well as the model name, vendor, SCSI channel and ID data is available.

/proc/swaps

This file measures swap space and its utilization. For a system with only one swap partition, the output of /proc/swaps may look similar to the following:

[root@localhost scsi]# free -k

              total        used        free      shared  buff/cache  available

Mem:      98447508    1872008    92336912      19912    4238588    95778248

Swap:      4194300          0    4194300

[root@localhost scsi]# cat /proc/swaps

Filename                                Type            Size    Used    Priority

/dev/dm-1                              partition      4194300 0      -1

While some of this information can be found in other files in the /proc/ directory, /proc/swaps provides a snapshot of every swap file name, the type of swap space, the total size, and the amount of space in use (in kilobytes). The priority column is useful when multiple swap areas are in use: areas with a higher priority are used first.

/proc/sys

This is a rather special directory: not only can you read system information here, you can also enable or disable certain kernel features on the fly by writing to its files.

[root@localhost scsi]# ls -l /proc/sys

total 0

dr-xr-xr-x. 1 root root 0 Aug 26 16:27 abi

dr-xr-xr-x. 1 root root 0 Aug 26 16:27 crypto

dr-xr-xr-x. 1 root root 0 Aug 26 16:27 debug

dr-xr-xr-x. 1 root root 0 Aug 26 16:27 dev

dr-xr-xr-x. 1 root root 0 Aug 26 16:27 fs

dr-xr-xr-x. 1 root root 0 Aug 26 16:27 kernel

dr-xr-xr-x. 1 root root 0 Aug 26 16:27 net

dr-xr-xr-x. 1 root root 0 Aug 26 16:27 sunrpc

dr-xr-xr-x. 1 root root 0 Aug 26 16:27 user

dr-xr-xr-x. 1 root root 0 Aug 26 16:27 vm
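
For example, IP forwarding can be read and toggled at runtime through /proc/sys. This is a temporary change that does not survive a reboot (persist it via /etc/sysctl.conf if needed), and the initial value of 0 below is only an assumption:

[root@localhost ~]# cat /proc/sys/net/ipv4/ip_forward
0
[root@localhost ~]# echo 1 > /proc/sys/net/ipv4/ip_forward      # enable forwarding immediately
[root@localhost ~]# sysctl net.ipv4.ip_forward                  # the same parameter, via the sysctl command
net.ipv4.ip_forward = 1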

/proc/uptime

This file contains information detailing how long the system has been on since its last restart. The output of /proc/uptime is quite minimal:

[root@localhost ~]# cat /proc/uptime

942390.01 60310373.54

[root@localhost ~]# uptime

14:14:39 up 10 days, 21:47,  2 users,  load average: 0.00, 0.01, 0.05

The first value represents the total number of seconds the system has been up. The second value is the sum of how much time each core has spent idle, in seconds. Consequently, the second value may be greater than the overall system uptime on systems with multiple cores.
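
Dividing the second value by uptime × CPU count gives the average idle fraction since boot; for the 64-CPU sample above, 60310373.54 / (942390.01 × 64) is roughly 1.0, i.e. an almost completely idle machine. A one-line awk sketch:

# awk -v cpus=$(nproc) '{ printf "average idle since boot: %.1f%%\n", $2 * 100 / ($1 * cpus) }' /proc/uptime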

/proc/version

This file specifies the version of the Linux kernel, the version of gcc used to compile the kernel, and the time of kernel compilation. It also contains the kernel compiler's user name (in parentheses).

[root@localhost ~]# cat /proc/version

Linux version 3.10.0-862.el7.x86_64 (mockbuild@x86-034.build.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC) ) #1 SMP Wed Mar 21 18:14:51 EDT 2018

SCSI/SATA Devices

[root@9-112 ~]# cat /proc/scsi/scsi

Attached devices:

Host: scsi0 Channel: 00 Id: 00 Lun: 00    (note the Host/Channel/Id/Lun fields)

  Vendor: LENOVO  Model: AL15SEB060N      Rev: TB54

  Type:  Direct-Access                    ANSI  SCSI revision: 06

Host: scsi0 Channel: 00 Id: 01 Lun: 00

  Vendor: LENOVO  Model: AL15SEB060N      Rev: TB54

  Type:  Direct-Access                    ANSI  SCSI revision: 06

Host: scsi0 Channel: 00 Id: 02 Lun: 00

  Vendor: LENOVO  Model: AL15SEB060N      Rev: TB54

  Type:  Direct-Access                    ANSI  SCSI revision: 06

Host: scsi0 Channel: 00 Id: 03 Lun: 00

  Vendor: LENOVO  Model: AL15SEB060N      Rev: TB54

  Type:  Direct-Access                    ANSI  SCSI revision: 06

Host: scsi0 Channel: 00 Id: 04 Lun: 00

  Vendor: LENOVO  Model: AL15SEB060N      Rev: TB54

  Type:  Direct-Access                    ANSI  SCSI revision: 06

Host: scsi0 Channel: 00 Id: 05 Lun: 00

  Vendor: LENOVO  Model: AL15SEB060N      Rev: TB54

  Type:  Direct-Access                    ANSI  SCSI revision: 06

Host: scsi0 Channel: 00 Id: 06 Lun: 00

  Vendor: BROADCOM Model: VirtualSES      Rev: 03 

  Type:  Enclosure                        ANSI  SCSI revision: 07

[root@9-112 ~]# lsscsi

[0:0:0:0]    disk    LENOVO  AL15SEB060N      TB54  /dev/sda

[0:0:1:0]    disk    LENOVO  AL15SEB060N      TB54  /dev/sdb

[0:0:2:0]    disk    LENOVO  AL15SEB060N      TB54  /dev/sdc

[0:0:3:0]    disk    LENOVO  AL15SEB060N      TB54  /dev/sdd

[0:0:4:0]    disk    LENOVO  AL15SEB060N      TB54  /dev/sde

[0:0:5:0]    disk    LENOVO  AL15SEB060N      TB54  /dev/sdf

[0:0:6:0]    enclosu BROADCOM VirtualSES      03    -       

[root@9-112 ~]#
