io_fs_04

The life of a block IO: from page cache to bio to request

      task_struct->file->inode->i_mapping->address_space        filesystem      per-process plug queue -> IO scheduler (elevator) internal queue -> dispatch queue
app -------------------------------------------------> page cache --------> bio -----------------------------------------------------------------> request --> block device driver

However, not every process's disk reads/writes go through the page cache:

  • A regular app's reads/writes are buffered in the page cache and only hit the disk at some later time

  • An O_SYNC app's reads/writes still go through the page cache, but write() does not return until the data hits the disk

  • An O_DIRECT app's reads/writes bypass the page cache and go to the disk directly

IO scheduling algorithms

There are three classic IO scheduling algorithms:

  • noop : the simplest scheduler; it only merges adjacent bios

  • deadline : guarantees read priority while ensuring writes do not starve

  • cfq : Completely Fair Queuing; distributes IO bandwidth fairly across processes

Query which IO scheduler is currently in use (the bracketed entry is the active one):

$ cat /sys/block/sda/queue/scheduler
noop deadline [cfq]

Set the IO scheduler and the IO nice value:

$ echo cfq >  /sys/block/sda/queue/scheduler
$ ionice -c 2 -n 0 dd if=/dev/sda of=/dev/null &
$ ionice -c 2 -n 7 dd if=/dev/sda of=/dev/null &

$ iotop

cgroup and IO

  • cgroup v1: weight and throttle

$ cd /sys/fs/cgroup/blkio/
$ mkdir A B

$ cgexec -g blkio:A dd if=/dev/sda of=/dev/null & ## run dd inside cgroup A
$ cgexec -g blkio:B dd if=/dev/sda of=/dev/null & ## run dd inside cgroup B
$ echo 50 > B/blkio.weight                        ## set cgroup B's weight to 50
$ iotop

$ ls -l /dev/sda
brw-rw---- 1 root disk 8, 0 Nov 18 21:32 /dev/sda

$ cgexec -g blkio:A dd if=/dev/sda of=/dev/null &
$ echo "8:0 1048576" > A/blkio.throttle.read_bps_device ## cap cgroup A's read bandwidth at 1MB/s
$ iotop

$ cgexec -g blkio:A dd if=/dev/zero of=/mnt/a oflag=direct bs=1M count=300 & ## note: oflag=direct
$ echo "8:0 1048576" > A/blkio.throttle.write_bps_device ## cap cgroup A's write bandwidth at 1MB/s
$ iotop
  • cgroup v2: writeback throttle

In cgroup v1, blkio write throttling only works for the DIRECT_IO case: for buffered writes the writeback thread is not the thread that called write(), so the limit effectively becomes system-wide rather than group-wide.

cgroup v2 connects the memory cgroup and the blkio (io) cgroup, so the kernel knows each group's dirty-page situation and can throttle writeback per group.
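The cgroup v2 equivalents of the v1 knobs above can be sketched as follows (a sketch, assuming cgroup2 is mounted at /sys/fs/cgroup, the io controller is available, and the commands run as root; "8:0" is /dev/sda as in the v1 example, and io.weight only takes effect with a scheduler or cost model that supports it, e.g. bfq):

```shell
# Enable the io controller for child groups, then create one:
echo "+io" > /sys/fs/cgroup/cgroup.subtree_control
mkdir /sys/fs/cgroup/A

# Proportional weight (replaces blkio.weight; range 1-10000, default 100):
echo 50 > /sys/fs/cgroup/A/io.weight

# Absolute limits (replace blkio.throttle.*): cap reads and writes on 8:0
# at 1MB/s; unlike v1, wbps also throttles writeback of buffered writes.
echo "8:0 rbps=1048576 wbps=1048576" > /sys/fs/cgroup/A/io.max

# Move the current shell into the group and run the workload:
echo $$ > /sys/fs/cgroup/A/cgroup.procs
dd if=/dev/sda of=/dev/null &
iotop
```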


Last updated 4 years ago
