fork(): Which Runs First, the Parent or the Child?
This article aims to answer two questions:
After fork() creates a child process, which one actually runs first, and why did Linux design it that way?
Contents
1 Background
2 Terminology
3 Findings
4 How fork() works
4.1 fork() -> copy_process()
5 2.6.9 to 2.6.23: the child runs first
5.1 Why running the child first is a win
6 2.6.23 to 2.6.32: the child runs first
6.1 The new wake_up_new_task
6.2 wake_up_new_task -> activate_task()
6.3 wake_up_new_task -> task_new()
6.4 sysctl_sched_child_runs_first
7 2.6.32 onward: the parent runs first
8 Conclusion
9 Appendix and references
1 Background
The Linux fork() function creates a new process that is an almost exact copy of the calling one.
#include <unistd.h>
pid_t fork(void);
fork(2) creates a new process whose context is copied from the current process. In the child, fork() returns 0; in the parent, it returns the pid of the newly created child; on failure it returns -1.
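To make those return values concrete, here is a minimal sketch of the usual calling pattern (not tied to any particular kernel version; which of the two printf lines appears first is exactly the question this article is about):
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid < 0) {              /* fork failed */
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {      /* child: fork() returned 0 */
        printf("child:  pid=%d ppid=%d\n", getpid(), getppid());
        _exit(EXIT_SUCCESS);
    } else {                    /* parent: fork() returned the child's pid */
        printf("parent: pid=%d child=%d\n", getpid(), pid);
        wait(NULL);             /* reap the child */
    }
    return 0;
}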
The question of which process runs first after fork() came up in a chat with colleagues today, and we could not settle it on the spot, so I went back and dug into it.
The question has also come up on the Linux mailing lists, for example here: http://www.spinics.net/lists/linux-newbie/msg00678.html :
I had read that the operating systems that use copy-on-write mechanism for fork(), it is better if they deliberately allow the CHILD to run first. This would be better because in 99% of the cases child will call exec() and the new address space will be allocated. Instead if the parent is executes first, an unnecessary copy of the pages is made (if parents writes) and later on when child executes, a fresh address space is executed.
So in linux, is a child run first or the parent? Can we rely on this information?
TIA
2 Terminology
- COW
Copy-on-write; not covered in detail here.
- Completely Fair Scheduler (CFS)
The process scheduler introduced with Linux 2.6.23. The new features include a modular scheduler framework, the Completely Fair Scheduler (CFS) itself, and CFS group scheduling. For details see http://www.ibm.com/developerworks/linux/library/l-cfs/.
A Chinese translation is available at http://www.ibm.com/developerworks/cn/linux/l-cfs/.
(The scheduler note from the Linux 2.6.8-to-2.6.9 changelog that relates to this topic is quoted in section 3 below.)
3 调研结论
- fork后父子进程先后执行关系跟不同linux系统版本有关。
- 先后顺序并非完全随机不可预料。
- Kernel 2.6.9 到2.6.23(不包括)子进程优先运行;
- Kernel 2.6.23到2.6.32(不包括)时子进程优先,kernel.sched_child_runs_first变量为1;
- Kernel2.6.32以上版本截止到写此文时父进程优先运行,kernel.sched_child_runs_first变量为0。
All of this assumes the CLONE_VM flag is not set. From 2.6.23 onward the default can be changed through /proc/sys/kernel/sched_child_runs_first (a minimal sketch for reading it follows); the version that introduced the knob is documented at http://kerneltrap.org/node/8059 .
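As a quick way to check a given machine, here is a minimal sketch that reads the knob from /proc (assuming a kernel new enough to expose it, i.e. 2.6.23 or later):
#include <stdio.h>

int main(void)
{
    /* Present on kernels >= 2.6.23; on older kernels the file simply does not exist. */
    FILE *f = fopen("/proc/sys/kernel/sched_child_runs_first", "r");
    int val;

    if (!f) {
        perror("sched_child_runs_first");
        return 1;
    }
    if (fscanf(f, "%d", &val) == 1)
        printf("sched_child_runs_first = %d (%s is preferred to run first)\n",
               val, val ? "child" : "parent");
    fclose(f);
    return 0;
}
Writing 0 or 1 back to the same file as root (or setting kernel.sched_child_runs_first with sysctl) flips the preference at runtime.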
One more aside: CLONE_VM used to be ignored for this decision, at least up to 2.6.8. It was only taken into account after an OpenMP benchmark regression; the fix is recorded in the changelog at http://www.kernel.org/pub/linux/kernel/v2.6/ChangeLog-2.6.9. The main reason given in the commit is reproduced below:
Don't do child-runs-first for CLONE_VM processes, as there is obviously no COW benifit to be had. This is a big one, it enables Andi's workload to run well without clone balancing, because the OpenMP child threads can get balanced off to other nodes *before* they start running and allocating memory.
4 How fork() works
First, a quick recap of how fork() works internally. Straight to the code (do_fork() from a kernel of this era):
long do_fork(unsigned long clone_flags,
unsigned long stack_start,
struct pt_regs *regs,
unsigned long stack_size,
int __user *parent_tidptr,
int __user *child_tidptr)
{
struct task_struct *p;
int trace = 0;
long pid = alloc_pidmap(); /* allocate a new pid for the child */
·······
/* copy_process() does nearly all of the work: it prepares the new task_struct and everything around it. */
p = copy_process(clone_flags, stack_start, regs, stack_size, parent_tidptr, child_tidptr, pid);
/*
* Do this prior waking up the new thread - the thread pointer
* might get invalid after that point, if the thread exits quickly.
*/
if (!IS_ERR(p)) {
struct completion vfork;
if (clone_flags & CLONE_VFORK) { /* CLONE_VFORK means vfork() was used: the child must run first, and the parent is only woken once the child calls exec (or exits). */
p->vfork_done = &vfork;
init_completion(&vfork); /* initialize the completion the parent will block on below */
}
if ((p->ptrace & PT_PTRACED) || (clone_flags & CLONE_STOPPED)) {
/*
* We'll start up with an immediate SIGSTOP.
*/
sigaddset(&p->pending.signal, SIGSTOP);
set_tsk_thread_flag(p, TIF_SIGPENDING); /* mark the child as having a pending signal so it stops right away */
}
/* If CLONE_STOPPED is not set, wake_up_new_task() wakes the new task. Whether parent or child runs first is decided inside it, and the answer differs between kernel versions; the following sections walk through them. */
if (!(clone_flags & CLONE_STOPPED))
wake_up_new_task(p, clone_flags);
else
p->state = TASK_STOPPED;
if (clone_flags & CLONE_VFORK) {
wait_for_completion(&vfork);
if (unlikely (current->ptrace & PT_TRACE_VFORK_DONE))
ptrace_notify ((PTRACE_EVENT_VFORK_DONE << 8) | SIGTRAP);
}
} else {
free_pidmap(pid);
pid = PTR_ERR(p);
}
return pid;
}
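To see the CLONE_VFORK branch above from the caller's side: with vfork() the child is guaranteed to run first, because the parent blocks in wait_for_completion(&vfork) until the child calls exec or exits. A minimal userspace sketch (the echo command is just an arbitrary placeholder):
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = vfork();         /* parent is suspended until the child execs or exits */

    if (pid < 0) {
        perror("vfork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {
        /* The child runs first by construction; after vfork it must only exec or _exit. */
        execlp("echo", "echo", "child ran first and exec'd", (char *)NULL);
        _exit(127);              /* reached only if exec fails */
    }
    /* The parent resumes only after the child's exec (or _exit). */
    printf("parent resumed after the child's exec\n");
    wait(NULL);
    return 0;
}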
The implementation of copy_process() follows.
4.1 fork() -> copy_process()
copy_process() does the bulk of fork()'s work; see the annotated code:
/*
* This creates a new process as a copy of the old one,
* but does not actually start it yet.
* It copies the registers, and all the appropriate
* parts of the process environment (as per the clone
* flags). The actual kick-off is left to the caller.
*/
static task_t *copy_process(unsigned long clone_flags,
unsigned long stack_start,
struct pt_regs *regs,
unsigned long stack_size,
int __user *parent_tidptr,
int __user *child_tidptr,
int pid)
{
int retval;
struct task_struct *p = NULL;
·····
p = dup_task_struct(current);
copy_flags(clone_flags, p);
p->pid = pid; /* record the child's pid */
if ((retval = copy_files(clone_flags, p)))
goto bad_fork_cleanup_semundo;
if ((retval = copy_fs(clone_flags, p)))
goto bad_fork_cleanup_files;
if ((retval = copy_signal(clone_flags, p)))
goto bad_fork_cleanup_sighand;
if ((retval = copy_mm(clone_flags, p)))
goto bad_fork_cleanup_signal;
retval = copy_thread(0, clone_flags, stack_start, stack_size, p, regs);
if (retval)
goto bad_fork_cleanup_namespace;
/* Perform scheduler related setup */
sched_fork(p); /* set up the scheduler state for the child, but do not wake it up yet */
/*
* Ok, make it visible to the rest of the system.
* We dont wake it up yet.
*/
p->real_parent = current;
p->parent = p->real_parent;
nr_threads++;
total_forks++;
retval = 0;
·····
}
5 2.6.9 to 2.6.23: the child runs first
In general, whether the parent or the child runs first after fork() cannot be relied upon.
On Linux, to avoid the unnecessary COW work that letting the parent run first would cause (since in most cases the child will immediately exec), the kernel at one point deliberately tried to run the child before the parent (see the section "3.4.1.1. The do_fork( ) function" in Understanding the Linux Kernel):
- If the CLONE_STOPPED flag is not set, it invokes the wake_up_new_task( )function, which performs the following operations:
- Adjusts the scheduling parameters of both the parent and the child (see "The Scheduling Algorithm" in Chapter 7).
- If the child will run on the same CPU as the parent,[*] and parent and child do not share the same set of page tables (CLONE_VM flag cleared), it then forces the child to run before the parent by inserting it into the parent's runqueue right before the parent. This simple step yields better performance if the child flushes its address space and executes a new program right after the forking. If we let the parent run first, the Copy On Write mechanism would give rise to a series of unnecessary page duplications.
[*] The parent process might be moved on to another CPU while the kernel forks the new process.
- Otherwise, if the child will not be run on the same CPU as the parent, or if parent and child share the same set of page tables (CLONE_VM flag set), it inserts the child in the last position of the parent's runqueue.
Looking at the process-creation code in Linux 2.6.9 up to (but not including) 2.6.23, the decision about who runs first is made in wake_up_new_task() in kernel/sched.c, which do_fork() calls after copy_process(); the comments in that function spell out the intent.
fork() -> wake_up_new_task():
/*
* wake_up_new_task - wake up a newly created task for the first time.
* This function will do some initial scheduler statistics housekeeping
* that must be done for every newly created context, then puts the task
* on the runqueue and wakes it.
*/
void fastcall wake_up_new_task(task_t * p, unsigned long clone_flags)
{
unsigned long flags;
int this_cpu, cpu;
runqueue_t *rq, *this_rq;
rq = task_rq_lock(p, &flags); /* lock the child's runqueue for mutual exclusion */
cpu = task_cpu(p); /* the CPU the new task will run on */
this_cpu = smp_processor_id(); /* the CPU the parent (current) is running on */
······
if (likely(cpu == this_cpu)) { /* the child will run on the same CPU as the parent */
if (!(clone_flags & CLONE_VM)) {
/* The address space is not shared, so the child will most likely exec right away; run the child first to avoid pointless COW copying. */
/*
* The VM isn't cloned, so we're in a good position to
* do child-runs-first in anticipation of an exec. This
* usually avoids a lot of COW overhead.
*/
if (unlikely(!current->array))
__activate_task(p, rq);
else {
p->prio = current->prio;
list_add_tail(&p->run_list, &current->run_list);
/* Insert the child immediately before the parent in the parent's run list, i.e. the child is queued ahead of the parent, so the child tends to get to run first. This is a preference, not an absolute guarantee. */
p->array = current->array;
p->array->nr_active++;
rq->nr_running++;
}
set_need_resched(); /* request a reschedule, giving the child a better chance to run first */
} else
/* Run child last */
__activate_task(p, rq); /* otherwise queue the child normally at the end, so the parent tends to run first */
/*
* We skip the following code due to cpu == this_cpu
*
* task_rq_unlock(rq, &flags);
* this_rq = task_rq_lock(current, &flags);
*/
this_rq = rq;
} else {
this_rq = cpu_rq(this_cpu);
······
__activate_task(p, rq);
······
task_rq_unlock(this_rq, &flags); /* unlock */
}
5.1 Why running the child first is a win
There has been a great deal of mailing-list discussion about which process should run first. The earliest proposal I could find for running the child first is this mail:
From: "Adam J. Richter"
To: torvalds@transmeta.com, linux-kernel@vger.kernel.org
Subject: PATCH(?): linux-2.4.4-pre2: fork should run child first
Date: Thu, 12 Apr 2001 01:55:16 -0700
The mail is archived at http://lwn.net/2001/0419/a/children-first.php3 . Its main argument:
most of the time, the child process from a fork will do just a few things and then do an exec(), releasing its copy-on-write references to the parent's pages.
Linux-2.4.3's fork() does not run the child first.
I have attached the patch below. I have also adjusted the
comment describing the code.
In other words, the author proposed running the child first in order to optimize copy-on-write: the child of a fork() will almost always exec a new program, so there is no point in doing extra virtual-memory work first. If the parent ran first and wrote to its pages, the writes would force copy-on-write duplication for nothing. A generic sketch of that fork-then-exec pattern follows.
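A minimal sketch of the pattern (the ls command is just an arbitrary placeholder for whatever the child execs):
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {
        /* Typical child: replace the address space almost immediately.
         * If the child is scheduled first, the COW pages shared with the
         * parent are simply dropped here instead of being duplicated. */
        char *const argv[] = { "ls", "-l", NULL };
        execvp("ls", argv);
        perror("execvp");        /* reached only if exec fails */
        _exit(127);
    }
    /* If the parent ran first and wrote to shared pages before the child's
     * exec, those pages would have been copied for nothing. */
    waitpid(pid, NULL, 0);
    return 0;
}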
5.1.1 Other links
- A child-runs-first patch for older kernels:
http://blog.csdn.net/dog250/article/details/5302865
- The 2.6.9 wake_up_new_task() code:
http://lxr.linux.no/linux-bk+v2.6.9/kernel/sched.c#L1342
6 2.6.23 to 2.6.32: the child runs first
Starting with 2.6.23 the Linux scheduler changed substantially: the old priority-array runqueues were replaced by the Completely Fair Scheduler (CFS), which tries to give every process a fair share of the CPU. CFS itself is not the focus here; for details see:
English: http://www.ibm.com/developerworks/linux/library/l-cfs/
Chinese translation: http://www.ibm.com/developerworks/cn/linux/l-cfs/
http://doc.opensuse.org/documentation/html/openSUSE/opensuse-tuning/cha.tuning.taskscheduler.html#sec.tuning.taskscheduler.cfs
The release notes that introduced CFS in 2.6.23 give a good overview of how it works: http://kernelnewbies.org/Linux_2_6_23#head-f3a847a5aace97932f838027c93121321a6499e7
6.1 The new wake_up_new_task
The analysis here is based on the kernel code of this era; wake_up_new_task() can be found at http://lxr.linux.no/linux+v2.6.24/kernel/sched.c#L1752 (the linked listing is the 2.6.24 tree).
The code:
/*
* wake_up_new_task - wake up a newly created task for the first time.
*
* This function will do some initial scheduler statistics housekeeping
* that must be done for every newly created context, then puts the task
* on the runqueue and wakes it.
*/
void fastcall wake_up_new_task(struct task_struct *p, unsigned long clone_flags)
{
unsigned long flags;
struct rq *rq;
rq = task_rq_lock(p, &flags);
BUG_ON(p->state != TASK_RUNNING);
update_rq_clock(rq);
p->prio = effective_prio(p);
if (!p->sched_class->task_new || !current->se.on_rq) {
activate_task(rq, p, 0); /* no task_new hook, or the parent is not on the runqueue: just activate the child normally */
} else {
/*
* Let the scheduling class do new task startup
* management (if any):
*/
// Otherwise let the scheduling class (fair_sched_class for CFS) handle new-task startup via its
// task_new hook. task_new is a per-class hook, which illustrates how modular the new scheduler is.
// For CFS it points to task_new_fair(), which is where sysctl_sched_child_runs_first
// (1 = child runs first) is actually evaluated; see below. activate_task() is covered in the next section.
p->sched_class->task_new(rq, p);
inc_nr_running(p, rq);
}
check_preempt_curr(rq, p);
task_rq_unlock(rq, &flags);
}
6.2 wake_up_new_task -> activate_task()
wake_up_new_task() above takes the activate_task(rq, p, 0) branch in either of these cases:
- the scheduling class provides no task_new hook; or
- the parent (current) is not on the runqueue (!current->se.on_rq).
In those cases there is no child-runs-first handling; the child is simply activated with activate_task(), shown below:
http://lxr.linux.no/linux+v2.6.23/kernel/sched.c#L944
static void activate_task(struct rq *rq, struct task_struct *p, int wakeup)
{
if (p->state == TASK_UNINTERRUPTIBLE)
rq->nr_uninterruptible--;
enqueue_task(rq, p, wakeup); /* hand the task to its scheduling class, which for CFS puts it on the red-black tree */
inc_nr_running(p, rq);
}
6.3 wake_up_new_task -> task_new()
task_new is a scheduling-class hook. For the idle class it is empty; for CFS it is set to task_new_fair in linux+v2.6.23/kernel/sched_fair.c, see http://lxr.linux.no/linux+v2.6.23/kernel/sched_fair.c#L1232:
struct sched_class fair_sched_class __read_mostly = {
.enqueue_task = enqueue_task_fair,
.dequeue_task = dequeue_task_fair,
.yield_task = yield_task_fair,
.check_preempt_curr = check_preempt_curr_fair,
.pick_next_task = pick_next_task_fair,
.put_prev_task = put_prev_task_fair,
.load_balance = load_balance_fair,
.set_curr_task = set_curr_task_fair,
.task_tick = task_tick_fair,
.task_new = task_new_fair, /* hook called when a new task is created */
};
task_new_fair() lives at http://lxr.linux.no/linux+v2.6.24/kernel/sched_fair.c#L1058. The code:
/*
* Share the fairness runtime between parent and child, thus the
* total amount of pressure for CPU stays equal - new tasks
* get a chance to run but frequent forkers are not allowed to
* monopolize the CPU. Note: the parent runqueue is locked,
* the child is not running yet.
*/
static void task_new_fair(struct rq *rq, struct task_struct *p)
{
struct cfs_rq *cfs_rq = task_cfs_rq(p);
struct sched_entity *se = &p->se, *curr = cfs_rq->curr;
int this_cpu = smp_processor_id();
sched_info_queued(p);
update_curr(cfs_rq); /* bring the current task's vruntime and runqueue statistics up to date */
place_entity(cfs_rq, se, 1); /* give the new task's scheduling entity its initial vruntime */
/* 'curr' will be NULL if the child belongs to a different group */
if (sysctl_sched_child_runs_first && this_cpu == task_cpu(p) &&
curr && curr->vruntime < se->vruntime) {
/*
* Upon rescheduling, sched_class::put_prev_task() will place
* 'current' within the tree based on its new key value.
*/
swap(curr->vruntime, se->vruntime);
}
enqueue_task_fair(rq, p, 0); /* put the child on the red-black tree; its vruntime decides its position in the tree */
resched_task(rq->curr);
}
The swap(curr->vruntime, se->vruntime) above is the crucial step. It happens only when all of the following hold:
- sysctl_sched_child_runs_first is 1, i.e. the child is supposed to run first;
- the new task's CPU is the parent's (current) CPU, otherwise the swap would be pointless;
- parent and child belong to the same group (curr is not NULL);
- the parent's (current's) vruntime is smaller than the new child's vruntime.
Why swap the two vruntime values? Because vruntime determines a task's position in the scheduling red-black tree: the smaller the vruntime, the further left the entity sits and the sooner it gets scheduled. So when sysctl_sched_child_runs_first is 1 and the other conditions are met, the swap biases the scheduler toward running the child first. A toy illustration of the effect follows.
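Here is a toy illustration of what the swap achieves, with made-up vruntime numbers (this is not kernel code, just the comparison in isolation):
#include <stdio.h>

/* Toy model: a smaller vruntime means further left in the red-black tree, i.e. scheduled sooner. */
struct ent { const char *name; long long vruntime; };

int main(void)
{
    struct ent parent = { "parent", 1000 };  /* hypothetical value */
    struct ent child  = { "child",  1040 };  /* place_entity() typically hands the child a larger key */
    int sched_child_runs_first = 1;

    /* The same condition task_new_fair() checks before swapping. */
    if (sched_child_runs_first && parent.vruntime < child.vruntime) {
        long long tmp = parent.vruntime;
        parent.vruntime = child.vruntime;
        child.vruntime = tmp;
    }

    printf("%s now holds the smaller vruntime and sits further left in the tree\n",
           child.vruntime < parent.vruntime ? child.name : parent.name);
    return 0;
}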
The enqueue_task_fair(rq, p, 0) call that follows then inserts the child into the red-black tree at the position determined by its vruntime, which is what ultimately decides who runs first. The call chain is (source: http://lxr.linux.no/linux+v2.6.24/kernel/sched_fair.c):
- do_fork()
- -> wake_up_new_task()  wake the newly created child
- -> task_new_fair()  the CFS new-task hook
- -> enqueue_task_fair()  put the task on the red-black tree
- -> enqueue_entity()  update statistics for the current task, then call __enqueue_entity
- -> __enqueue_entity()  insert the node at the position given by its vruntime
Here is __enqueue_entity(), which shows where the new entity ends up in the tree:
http://lxr.linux.no/linux+v2.6.24/kernel/sched_fair.c#L143
/*
* Enqueue an entity into the rb-tree:
*/
static void __enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
struct rb_node **link = &cfs_rq->tasks_timeline.rb_node;
struct rb_node *parent = NULL;
struct sched_entity *entry;
s64 key = entity_key(cfs_rq, se);
/* key is just the entity's vruntime relative to the runqueue minimum: se->vruntime - cfs_rq->min_vruntime */
int leftmost = 1;
/*
* Find the right place in the rbtree:
*/
while (*link) {
parent = *link;
entry = rb_entry(parent, struct sched_entity, run_node);
/*
* We dont care about collisions. Nodes with
* the same key stay together.
*/
if (key < entity_key(cfs_rq, entry)) {
link = &parent->rb_left; /* new key is smaller than this node's key: descend to the left */
} else {
link = &parent->rb_right;
/* Otherwise descend to the right. So if two entities have the same vruntime, the newer one
 * ends up to the right of the older one and is scheduled later. One open question: if parent
 * and child end up with equal vruntime, the child lands to the right and is scheduled later;
 * with sysctl_sched_child_runs_first set, the curr->vruntime < se->vruntime test in
 * task_new_fair() is false, so no swap takes place and the child does not actually run first,
 * which seems to contradict what the sysctl promises. Possibly a bug - corrections welcome. */
leftmost = 0;
}
}
/*
* Maintain a cache of leftmost tree entries (it is frequently
* used):
*/
if (leftmost) /* the new entity has the smallest key: cache it as the leftmost node */
cfs_rq->rb_leftmost = &se->run_node;
rb_link_node(&se->run_node, parent, link); /* link the node at the position we found */
rb_insert_color(&se->run_node, &cfs_rq->tasks_timeline); /* rebalance and recolor the tree */
}
6.4 sysctl_sched_child_runs_first
Having walked through the mechanics, the code above shows that who runs first depends on:
- whether parent and child run on the same CPU;
- whether parent and child share the address space (the CLONE_VM flag);
- whether sysctl_sched_child_runs_first is 1.
For the ordinary case we are discussing, the deciding factor is sysctl_sched_child_runs_first. So what is its default value, and how can it be changed?
The answer is in linux+v2.6.23/kernel/sched.c#L1663:
/*
* After fork, child runs first. (default) If set to 0 then
* parent will (try to) run first.
*/
const_debug unsigned int sysctl_sched_child_runs_first = 1;
So for 2.6.23 up to (but not including) 2.6.32, sysctl_sched_child_runs_first defaults to 1, i.e. the child is preferred to run first. Note that in 2.6.23 this lives in sched.c; from 2.6.24 on it moved to linux+v2.6.24/kernel/sched_fair.c#L48. The kernel code really does move fast: stay away for half a year and most of it looks unfamiliar.
7 2.6.32 onward: the parent runs first
By the time 2.6.32 appeared it was 2009 and multi-core machines were everywhere; times had changed, and the default changed with them.
Everything else works much as in 2.6.23-2.6.31; only one thing differs. Yes, it is sysctl_sched_child_runs_first: it now defaults to 0, i.e. the parent is preferred to run first.
See http://lxr.linux.no/linux+v2.6.32/kernel/sched_fair.c#L50 :
/*
* After fork, child runs first. If set to 0 (default) then
* parent will (try to) run first.
*/
unsigned int sysctl_sched_child_runs_first __read_mostly;
Hence the conclusion: from 2.6.32 up to the time of writing, the parent runs first. Whether it changes again after 2.6.32 I cannot promise to keep tracking.
A curious reader will ask: why switch back to running the parent first? What was the reason?
Digging through the 2009 mailing-list archives for 2.6.32 turns up this thread: http://kerneltrap.org/mailarchive/linux-kernel/2009/9/11/4486240.
Ingo Molnar sent the scheduler pull request for 2.6.32; the relevant part of the mail is quoted below:
From: Ingo Molnar
Subject: [GIT PULL] sched/core for v2.6.32
Date: Friday, September 11, 2009 - 12:25 pm
Linus,
Please pull the sched-core-for-linus git tree from:
git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip.git sched-core-for-linus
Highlights:
- Child-runs-first is now off - i.e. we run parent first.
[ Warning: this might trigger races in user-space. ]
·····
Thanks,
Ingo
So the September 2009 pull for 2.6.32 set sysctl_sched_child_runs_first to 0, making the parent run first. Someone then asked:
Full mail: http://kerneltrap.org/mailarchive/linux-kernel/2009/9/11/4486338
From: Jesper Juhl
Subject: Re: [GIT PULL] sched/core for v2.6.32
Date: Friday, September 11, 2009 - 3:40 pm
[...]
Ouch. Do we dare do that?
Linus replied right away with the reasoning; the original mail is here:
http://kerneltrap.org/mailarchive/linux-kernel/2009/9/11/4486341 。
In it Linus explains why the default was switched to parent-runs-first, and also how fork and vfork differ in this respect.
From: Linus Torvalds
Subject: Re: [GIT PULL] sched/core for v2.6.32
Date: Friday, September 11, 2009 - 3:58 pm
We would want to at least try.
There are various reasons why we'd like to run the child first, ranging from just pure latency (quite often, the child is the one that is critical) to getting rid of page sharing for COW early thanks to execve etc.
But similarly, there are various reasons to run the parent first, like just the fact that we already have the state active in the TLB's and caches.
Finally, we've never made any guarantees, because the timeslice for the parent might be just about to end, so child-first vs parent-first is never a guarantee, it's always just a preference.
[ And we _have_ had that preference expose user-level bugs. Long long ago we hit some problem with child-runs-first and 'bash' being unhappy about a really low-cost and quick child process exiting even _before_ bash itself had had time to fill in the process tables, and then when the SIGCHLD handler ran bash said "I got a SIGCHLD for something I don't even know about".
That was very much a bash bug, but it was a bash bug that forced us to [...] ]
vfork() has always run the child first, since the parent won't even be runnable. The parent will get stuck in
wait_for_completion(&vfork);
so the "child-runs-first" is just an issue for regular fork or clone, not [...]
It really hasn't been that way in Linux. We've done it both ways.
Linus
That completes the picture: starting with 2.6.32 the parent is preferred to run first, largely because its state is already warm in the TLBs and caches, and, as Linus points out, this is only ever a preference, never a guarantee.
8 Conclusion
As the analysis shows, the parent-first versus child-first question is a trade-off between the copy-on-write mechanism, the TLB (translation lookaside buffer) and CPU caches, and which of the two processes matters more. This article only sets out to explain the reasons in detail; it is not meant to give anyone grounds for writing code that depends on the parent or the child running first. Code should never rely on that ordering: enforce it with a proper inter-process synchronization mechanism (a minimal pipe-based sketch follows the closing quote), or use vfork() where it fits. To close, Linus's words once more:
Finally, we've never made any guarantees, because the timeslice for the parent might be just about to end, so child-first vs parent-first is never a guarantee, it's always just a preference.
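If a program genuinely needs a specific ordering, it should enforce it itself rather than rely on the scheduler's preference. A minimal sketch using a pipe so the parent does not proceed until the child signals that it has done its early work (purely illustrative):
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    char buf;

    if (pipe(fds) < 0) {
        perror("pipe");
        exit(EXIT_FAILURE);
    }
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {
        close(fds[0]);
        printf("child: doing its early setup first\n");
        write(fds[1], "x", 1);   /* tell the parent it may continue */
        close(fds[1]);
        _exit(EXIT_SUCCESS);
    }
    close(fds[1]);
    read(fds[0], &buf, 1);       /* blocks until the child has written */
    close(fds[0]);
    printf("parent: continues only after the child's signal\n");
    wait(NULL);
    return 0;
}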
9 Appendix and references
- CFS introduction (Chinese translation): http://www.ibm.com/developerworks/cn/linux/l-cfs/
- The sched_child_runs_first knob: http://kerneltrap.org/node/8059
- openSUSE System Analysis and Tuning Guide: http://doc.opensuse.org/documentation/html/openSUSE/opensuse-tuning/cha.tuning.taskscheduler.html#sec.tuning.taskscheduler.cfs
- Various scheduler-related topics: http://lwn.net/Articles/352863/
- Linux 2.6.9 ChangeLog: http://www.kernel.org/pub/linux/kernel/v2.6/ChangeLog-2.6.9
- Kernel mailing list thread: http://kerneltrap.org/mailarchive/linux-kernel/2009/9/11/4486345/thread
- Linux release notes: http://kernelnewbies.org/Linux_2_6_9 , http://kernelnewbies.org/Linux_2_6_23 , http://kernelnewbies.org/Linux_2_6_32
- Linux source: http://www.kernel.org/pub/linux/kernel/v2.6/
- Handy source cross-reference: http://lxr.linux.no/linux+v2.6.32