Table of Contents
linux
linux tips http://www.patoche.org/LTT/index.html
windows
General questions
SIGBUS vs SIGSEGV
SIGBUS
Invalid address alignment
The program has attempted to read or write data that does not fit the CPU's memory-alignment rules.
Non-existent physical address
This is equivalent to a segmentation fault, but for a physical address rather than a virtual address.
Object-specific hardware error
This is far less common, but it is present in Solaris when virtual memory pages have disappeared (e.g. accessing an mmapped file which has been truncated) [1].
http://en.wikipedia.org/wiki/SIGBUS
Thread vs Process
Each thread has: a private stack and its own registers.
All threads of a process share the code and heap; objects to be shared across multiple threads should be allocated on the heap.
Although the model is that each thread has a private stack, threads actually share the process address space
Each thread is described by a thread-control block (TCB)
A TCB typically contains:
Thread ID
Space for saving registers
Pointer to thread-specific data not on stack
User Level vs. Kernel Level Threads
- User level: use a user-level thread package; totally transparent to the OS
  – Light-weight
  – If a thread blocks, all threads in the process block
- Kernel level: threads are scheduled by the OS
  – A thread blocking won't affect other threads in the same process
  – Can take advantage of multi-processors
  – Still requires a context switch, but cheaper than a process context switch
big endian vs little endian
int num = 1;
if (*(char *)&num == 1) {
    printf("\nLittle-Endian\n");
} else {
    printf("Big-Endian\n");
}
And here is some code to convert from one endianness to the other:

int myreversefunc(int num)
{
    int byte0 = (num & 0x000000FF) >> 0;
    int byte1 = (num & 0x0000FF00) >> 8;
    int byte2 = (num & 0x00FF0000) >> 16;
    int byte3 = ((unsigned int)num & 0xFF000000) >> 24; /* unsigned shift avoids sign-extension */
    return (byte0 << 24) | (byte1 << 16) | (byte2 << 8) | (byte3 << 0);
}
http://vijayinterviewquestions.blogspot.com/2007/07/what-little-endian-and-big-endian-how.html
process address space
http://www.tenouk.com/ModuleW.html
call stack convention
A typical stack
Calling a __cdecl function
The best way to understand stack organization is to walk through each step of calling a function with the __cdecl convention. These steps are taken automatically by the compiler, and though not all of them apply in every case (sometimes there are no parameters, no local variables, or no saved registers), this shows the overall mechanism employed.
The sample program (figure 5):
int function(int a, int b, int c)
{
    char buffer[14];
    int sum;

    sum = a + b + c;
    return sum;
}

void main()
{
    int i;
    i = function(1,2,3);
}
The compiled assembly code (figure 6):
 1      .file   "example1.c"
 2      .version        "01.01"
 3 gcc2_compiled.:
 4 .text
 5      .align 4
 6 .globl function
 7      .type   function,@function
 8 function:
 9      pushl %ebp
10      movl %esp,%ebp
11      subl $20,%esp
12      movl 8(%ebp),%eax
13      addl 12(%ebp),%eax
14      movl 16(%ebp),%edx
15      addl %eax,%edx
16      movl %edx,-20(%ebp)
17      movl -20(%ebp),%eax
18      jmp .L1
19      .align 4
20 .L1:
21      leave
22      ret
23 .Lfe1:
24      .size   function,.Lfe1-function
25      .align 4
26 .globl main
27      .type   main,@function
28 main:
29      pushl %ebp
30      movl %esp,%ebp
31      subl $4,%esp
32      pushl $3
33      pushl $2
34      pushl $1
35      call function
36      addl $12,%esp
37      movl %eax,%eax
38      movl %eax,-4(%ebp)
39 .L2:
40      leave
41      ret
42 .Lfe2:
43      .size   main,.Lfe2-main
44      .ident  "GCC: (GNU) 2.7.2.3"
Push parameters onto the stack, from right to left
Parameters are pushed onto the stack, one at a time, from right to left. Whether the parameters are evaluated from right to left is a different matter, and in any case this is unspecified by the language and code should never rely on this. The calling code must keep track of how many bytes of parameters have been pushed onto the stack so it can clean it up later.
In figure 6, lines 32 through 34 push the three arguments onto the stack from right to left.
Call the function
Here, the processor pushes the contents of %EIP (the instruction pointer) onto the stack, so that it points to the first byte after the CALL instruction. After this finishes, the caller has lost control, and the callee is in charge. This step does not change the %ebp register.
The call instruction on line 35 pushes the address of the next instruction after the call (the addl, which is function's return address) onto the stack, then transfers control to function.
Save and update the %ebp
Now that we're in the new function, we need a new local stack frame pointed to by %ebp, so this is done by saving the current %ebp (which belongs to the previous function's frame) and making it point to the top of the stack.
Lines 9 and 10:
pushl %ebp
movl %esp,%ebp   # esp -> ebp: the base of our new stack frame is now set up
Once %ebp has been changed, it can now refer directly to the function's arguments as 8(%ebp), 12(%ebp). Note that 0(%ebp) is the old base pointer and 4(%ebp) is the old instruction pointer.
Allocate local variables
This function may choose to use local stack-based variables, and they are allocated here simply by decrementing the stack pointer by the amount of space required. This is always done in four-byte chunks.
Now, the local variables are located on the stack between the %ebp and %esp registers, and though it would be possible to refer to them as offsets from either one, by convention the %ebp register is used. This means that -4(%ebp) refers to the first local variable.
Save CPU registers used for temporaries (normally this step is not needed)
[__cdecl stack frame] If this function will use any CPU registers, it has to save the old values first lest it walk on data used by the calling functions. Each register to be used is pushed onto the stack one at a time, and the compiler must remember what it did so it can unwind it later.
At this point, all the parameters and locals are offsets from the %ebp register:
- 16(%ebp) - third function parameter
- 12(%ebp) - second function parameter
- 8(%ebp) - first function parameter
- 4(%ebp) - old %EIP (the function's "return address")
- 0(%ebp) - old %EBP (previous function's base pointer)
- -4(%ebp) - first local variable
- -8(%ebp) - second local variable
- -12(%ebp) - third local variable
The function is free to use any of the registers that had been saved onto the stack upon entry, but it must not change the stack pointer or all Hell will break loose upon function return.
Restore the old base pointer
The first thing this function did upon entry was save the caller's %ebp base pointer, and by restoring it now (popping the top item from the stack), we effectively discard the entire local stack frame and put the caller's frame back in play.
The leave instruction on line 21 of figure 6:
1) copies the frame pointer ebp into esp, releasing the stack space allocated for the local variables buffer[14] and sum;
2) pops one machine word off the stack into ebp, restoring ebp as main's stack-frame pointer.
Return from the function
This is the last step of the called function, and the RET instruction pops the old %EIP from the stack and jumps to that location. This gives control back to the calling function. Only the stack pointer and instruction pointers are modified by a subroutine return.
The ret instruction on line 22 pops another machine word off the stack into the instruction pointer eip, returning control to the addl instruction on line 36 of main.
Clean up pushed parameters
In the __cdecl convention, the caller must clean up the parameters pushed onto the stack, and this is done either by popping the stack into don't-care registers (for a few parameters) or by adding the parameter-block size to the stack pointer directly.
The addl instruction on line 36 adds 12 to the stack pointer esp, releasing the stack space occupied by the three arguments pushed before the call to function. At this point, function's stack frame has been completely destroyed.
detailed description of the above procedure
Here we focus on how the stack frame for function is built and torn down. As figure 5 shows, function is called from main with the three arguments 1, 2, and 3. Because C pushes arguments in reverse order, lines 32 through 34 of figure 6 push the three arguments from right to left. The call instruction on line 35 then, besides transferring control to function, pushes the address of the next instruction (the addl), i.e. function's return address, onto the stack. Inside function, line 9 saves main's frame pointer ebp on the stack, line 10 copies the current stack pointer esp into the frame pointer ebp, and finally line 11 allocates stack space for function's local variables buffer[14] and sum. At this point function's stack frame is complete; its structure is shown in figure 7.
The reader may want to compare this with figure 4. A few points are worth noting. First, under the Intel i386 architecture the frame-pointer role is played by ebp and the stack-pointer role by esp. Second, function's local variable buffer[14] consists of 14 characters, so its size should be 14 bytes, yet 16 bytes are allocated for it in the stack frame. This is a trade-off between time efficiency and space efficiency: the Intel i386 is a 32-bit processor whose memory accesses must be 4-byte aligned, and the 4 bytes sharing the same upper 30 address bits form one machine word. If sum were placed across two different machine words just to fill the two bytes left over by buffer[14], every access to sum would require two memory operations, which is clearly unacceptable. Finally, as noted in the introduction, if you are using a newer version of gcc, the stack frame you see for function may differ from figure 7. As described above, space for buffer[14] and sum is allocated by the subtraction from esp on line 11 of figure 6, and the 20 in that sub instruction is exactly the storage the two local variables need. In newer versions of gcc the number in the sub instruction may be larger: to make effective use of modern optimizing compilation techniques, these compilers typically reserve some extra space in each function's stack frame.
Now consider how function assigns the sum of a, b, and c to sum. As mentioned earlier, arguments and local variables are accessed as offsets from the frame pointer, which on i386 is ebp; for clarity, figure 7 marks the offset of every component of the stack frame relative to ebp. The computation on lines 12 through 16 of figure 6 is then obvious: 8(%ebp), 12(%ebp), 16(%ebp), and -20(%ebp) are the addresses of the arguments a, b, c and the local variable sum, and after a few simple add and mov instructions sum holds the total of the three. Also, in gcc-generated assembly a function's return value is passed in eax, which is why line 17 of figure 6 copies the value of sum into eax.
Finally, let's see how function's stack frame is popped off the stack when the function finishes. The leave instruction on line 21 of figure 6 copies the frame pointer ebp into esp, releasing the stack space allocated for the local variables buffer[14] and sum; in addition, leave pops one machine word off the stack into ebp, restoring ebp as main's frame pointer. The ret instruction on line 22 pops another machine word into the instruction pointer eip, returning control to the addl instruction on line 36 of main. The addl instruction adds 12 to the stack pointer esp, releasing the stack space occupied by the three arguments pushed before the call to function. At this point function's stack frame has been completely destroyed. As just mentioned, gcc-generated assembly passes the function's return value in eax, so line 38 of figure 6 stores function's result into main's local variable i.
ref:
http://www.ibm.com/developerworks/cn/linux/l-overflow/index.html
http://unixwiz.net/techtips/win32-callconv-asm.html
turnaround time vs response time
Turnaround time is the interval between the submission of a job and its completion.
Response time is the interval between submission of a request, and the first response to that request.
vfork
The vfork() system call can be used to create new processes without fully copying the address space of the old process, which is horrendously inefficient in a paged environment. It is useful when the purpose of fork(2) would have been to create a new system context for an execve(2). The vfork() system call differs from fork(2) in that the child borrows the parent's memory and thread of control until a call to execve(2) or an exit (either by a call to _exit(2) or abnormally). The parent process is suspended while the child is using its resources.

The vfork() system call returns 0 in the child's context and (later) the pid of the child in the parent's context.

The vfork() system call can normally be used just like fork(2). It does not work, however, to return while running in the child's context from the procedure that called vfork(), since the eventual return from vfork() would then return to a no-longer-existent stack frame. Be careful, also, to call _exit(2) rather than exit(3) if you cannot execve(2), since exit(3) will flush and close standard I/O channels, and thereby mess up the parent process's standard I/O data structures. (Even with fork(2) it is wrong to call exit(3), since buffered data would then be flushed twice.)
zombie vs orphan process
A zombie process is not the same as an orphan process. A zombie process, or defunct process, is a process that has completed execution but still has an entry in the process table.
An orphan process is a process that is still executing, but whose parent has died. They don't become zombie processes; instead, they are adopted by init (process ID 1), which waits on its children.
Naturally, when people see a zombie process, the first thing they try to do is to kill the zombie, using kill or (horrors!) kill -9. This won't work, however: you can't kill a zombie, it's already dead.
When a process has already terminated ("died") by receiving a signal to do so, it can stick around for a bit to finish up a few last tasks. These include closing open files and shutting down any allocated resources (memory, swap space, that sort of thing). These "housekeeping" tasks are supposed to happen very quickly. Once they're completed, the final thing that a process has to do before dying is to report its exit status to its parent. This is generally where things go wrong.
Each process is assigned a unique Process ID (PID). Each process also has an associated parent process ID (PPID), which identifies the process that spawned it (or a PPID of 1, meaning that the process has been inherited by the init process, if the parent has already terminated). While the parent is still running, it can remember the PIDs of all the children it has spawned. These PIDs cannot be re-used by other (new) processes until the parent knows that the child process is done.
When a child terminates and has completed its housekeeping tasks, it sends a one-byte status code to its parent. If this status code is never collected by the parent, the process-table entry is kept alive (in "zombie" status) in order to reserve the PID … the parent is waiting for the status code, and until it gets it, it doesn't want any new processes to try and reuse that PID number for themselves.
To get rid of a zombie, you can try killing its parent, which will temporarily orphan the zombie. The init process will inherit the zombie, and this might allow the process to finish terminating, since the init process is always in a wait() state (ready to receive exit-status reports from its children).
Generally, though, zombies clean themselves up. Whatever the process was waiting for eventually occurs and the process can report its exit status to its parent and all is well.
If a zombie is already owned by init, though, and it's still sticking around (like zombies are wont to do), then the process is almost certainly stuck in a device driver close routine, and will likely remain that way forever. You can reboot to clear out the zombies, but fixing the device driver is the only permanent solution. Killing the parent (init in this case) is highly unrecommended, since init is an extremely important process to keeping your system running.
http://www.losurs.org/docs/zombies
A zombie process doesn't react to signals because it's not really a process at all: it's just what's left over after it died. What's supposed to happen is that its parent process issues a wait() to collect the information about its exit. If the parent doesn't (a programming error, or just bad programming), you get a zombie. The zombie will go away if its parent dies: it will be "adopted" by init, which will do the wait(). So if you see one hanging about, check its parent; if it is init, it will be gone soon; if not, the only recourse is to kill the parent, which you may or may not want to do. Finally, a process that is being traced (by a debugger, for example) won't react to KILL either.
Linux boot process
1. Computer gets powered on, BIOS runs whatever it finds in the Master Boot Record (MBR), usually lilo
2. lilo, in turn, starts up the Linux kernel
3. The Linux kernel starts up the primal process, init. Since init is always started first, it always has a PID of 1.
4. init then runs your boot scripts, also known as "rc files". These are similar in concept to DOS's autoexec.bat and config.sys, if those had been developed to a fine art. These rc files, which are generally shell scripts, spawn all the processes that make up a running Unix system.
* Once the Linux kernel has been loaded by lilo, it looks in "all the usual places" for init and runs the first copy it finds
* In turn, init runs the shell script found at /etc/rc.d/rc.sysinit
* Next, rc.sysinit does a bunch of necessary things to make System V rc files possible
* init then runs all the scripts for the default runlevel
o It knows the default run level by examining /etc/inittab
o Symbolic links to the real scripts (in /etc/rc.d/init.d) are kept in each of the run level directories (/etc/rc.d/rc1.d through rc6.d)
* Lastly, init runs whatever it finds in /etc/rc.d/rc.local (regardless of run level). rc.local is rather special in that it is executed every time that you change run levels.
page faults
The main functions of paging are performed when a program tries to access pages that do not currently reside in RAM, a situation called a page fault. The operating system then:
1. Handles the page fault, in a manner invisible to the faulting program, and takes control.
2. Determines the location of the data in auxiliary storage.
3. Determines the page frame in RAM to use as a container for the data.
4. If a page currently residing in the chosen frame has been modified since loading (if it is dirty), writes the page back to auxiliary storage.
5. Loads the requested data into the available page frame.
6. Returns control to the program, transparently retrying the instruction that caused the page fault.
In step 3, when a page has to be loaded and all existing pages in RAM are currently in use, one of the existing pages must be swapped with the requested new page. The paging system must determine the page to swap by choosing one that is least likely to be needed within a short time. There are various page replacement algorithms that try to address this issue.
Most operating systems use the least recently used (LRU) page replacement algorithm. The theory behind LRU is that the least recently used page is the most likely one not to be needed shortly; when a new page is needed, the least recently used page is discarded. This algorithm is most often correct but not always: e.g. a sequential process moves forward through memory and never again accesses the most recently used page.
A Translation Lookaside Buffer (TLB) is a CPU cache that is used by memory management hardware to improve the speed of virtual address translation. All current desktop and server processors (such as x86) use a TLB. A TLB has a fixed number of slots containing page table entries, which map virtual addresses onto physical addresses. It is typically a content-addressable memory (CAM), in which the search key is the virtual address and the search result is a physical address. If the requested address is present in the TLB, the CAM search yields a match very quickly, after which the physical address can be used to access memory. If the requested address is not in the TLB, the translation proceeds using the page table, which is slower to access. Furthermore, the translation takes significantly longer if the translation tables are swapped out into secondary storage, which a few systems allow.
In computer storage, Belady's anomaly states that it is possible to have more page faults when increasing the number of page frames while using the FIFO method of frame management. Laszlo Belady demonstrated this in 1969. Previously, it was believed that an increase in the number of page frames would always provide the same number or fewer page faults.
When the page that was selected for replacement and paged out is referenced again it has to be paged in (read in from disk), and this involves waiting for I/O completion. This determines the quality of the page replacement algorithm: the less time waiting for page-ins, the better the algorithm. A page replacement algorithm looks at the limited information about accesses to the pages provided by hardware, and tries to guess which pages should be replaced to minimize the total number of page misses, while balancing this with the costs (primary storage and processor time) of the algorithm itself.
Replacement algorithms can be local or global.
When a process incurs a page fault, a local page replacement algorithm selects for replacement some page that belongs to that same process (or a group of processes sharing a memory partition). A global replacement algorithm is free to select any page in memory.
Local page replacement assumes some form of memory partitioning that determines how many pages are to be assigned to a given process or a group of processes. Most popular forms of partitioning are fixed partitioning and balanced set algorithms based on the working set model. The advantage of local page replacement is its scalability: each process can handle its page faults independently without contending for some shared global data structure.
Monitor vs Semaphore
A monitor is an approach to synchronize two or more computer tasks that use a shared resource, usually a hardware device or a set of variables. With monitor-based concurrency, the compiler or interpreter transparently inserts locking and unlocking code to appropriately designated procedures, instead of the programmer having to access concurrency primitives explicitly.
disk access
Disk delays are dominated by three factors: seek time, rotational latency, and transfer time. During the seek time, an actuator moves the disk heads to the disk cylinder being accessed. Rotational latency allows a specific disk block to spin under the disk head, and transfer time allows data to be accessed as it passes under the disk head.
redirect out and err both to a file
cmd >log.txt 2>&1
redirect out and error both to a file and console
cmd 2>&1 | tee log.txt
saves stdout and stderr to the files "out.txt" and "err.txt", respectively.
[root@server /root]# ./cmd 1>out.txt 2>err.txt
n>&m
File descriptor n is made a copy of file descriptor m, which is open for output (e.g. 2>&1 makes stderr a copy of stdout)
n<&m
File descriptor n is made a copy of file descriptor m, which is open for input
exec 3<&0 #save the current value of standard input
exec 0<$TMPFILE # open file and assign it to 0 . stdin
while read line
do
processLine $line
done
exec 0<&3 #restore value of file descriptor 0, stdin
n>&-
Close the output from file descriptor n
n<&-
Close the input from file descriptor n
&>file
Directs standard output and standard error to file
n<> file
Use file as both input and output for file descriptor n
echo 1234567890 > File # Write string to "File".
exec 3<> File # Open "File" and assign fd 3 to it.
read -n 4 <&3 # Read only 4 characters.
echo -n . >&3 # Write a decimal point there.
exec 3>&- # Close fd 3.
hard-link vs soft link
Soft and Hard Links
Soft links
Pointers to programs, files, or directories located elsewhere (just like Windows shortcuts)
If the original program, file, or directory is renamed, moved, or deleted, the soft link is broken.
If you type ls -F you can see which files are soft links because they end with @
To create a soft link called myfilelink.txt that points to a file called myfile.txt, use this: ln -s myfile.txt myfilelink.txt
Hard links
Pointers to programs and files, but NOT directories
If the original program or file is renamed, moved, or deleted, the hard link is NOT broken
Hard links cannot span filesystems, so you CANNOT have a hard link on /dev/hdb that refers to a program or file on /dev/hda
To create a hard link called myhardlink.txt that points to a file called myfile.txt, use this: ln myfile.txt myhardlink.txt
Tip: on a Linux filesystem there are two kinds of links, hard links and soft links (the latter also called symbolic links). Both are links, so what is the difference?
Every file stored on disk must first obtain an i-node (discussed later), and the inode tells the system where on disk the file's data lives. With a hard link, the new name uses the same inode, pointing at the same disk location as the original file, and the inode's link count is updated: the more hard links, the higher the count. In this situation all hard links are equal. To free the disk space a file occupies, every hard link associated with it must be removed (the link count must reach 0). In other words, if I create a hard link B for file A and then delete A, the disk space still exists, because B is still associated with it; if B is the last hard link and I then delete B, the disk space is released. We can regard every file name as a hard link: different files usually hard-link to different disk space, but different names are also allowed to hard-link to the same disk space.
http://blog.donews.com/zhy2111314/archive/2005/02/20/282643.aspx
sticky bit
The most common use of the sticky bit today is on directories, where, when set, items inside the directory can be renamed or deleted only by the item's owner.
Frequently this is set on the /tmp directory to prevent ordinary users from deleting or moving other users' files.
The sticky bit can be set with the chmod command, either by its octal mode (1000) or by its symbol t (s is already used by the setuid bit). For example, to add the bit on the directory /usr/local/tmp, one would type chmod +t /usr/local/tmp. Or, to make sure that directory has standard tmp permissions, one could also type chmod 1777 /usr/local/tmp.
The set-group-ID (SGID) or set-user-ID (SUID) bit (s) makes a program run with the permissions of the file's owner (or group) rather than those of the user running it. Say user Zelda has a program named Ick. In order to run, Ick needs access to some data in the file Belch. This file is owned by Zelda and has no world permissions. Another user, Bufford, wants to execute Ick. Bufford and Zelda don't belong to any of the same groups. Bufford can't run Ick usefully, because he doesn't have permission to access Belch, and Zelda doesn't want to make Belch world-readable. If the set-user-ID bit is turned on for Ick, then Ick runs with Zelda's permissions, which means Bufford can run Ick, because Ick will have access to the data in Belch that it needs. Bufford, however, cannot read or write Belch directly; he has access to it only through Ick.
When the sticky bit (t) is turned on for a directory, users can have read and/or write permissions for that directory, but they can only remove or rename files that they own. Historically, the sticky bit on a file told the operating system that the file would be executed frequently; such files were kept in swap space even when they weren't being executed. Although this took up swap space, it greatly reduced the time needed to start the program, and programs such as vi had the sticky bit turned on by default.
A capital X sets the execute bit only on directories and on files that already have execute permission for some user; unlike x, it leaves ordinary non-executable files alone. This makes it useful for recursive changes such as chmod -R go+rX dir, which makes directories searchable without marking every data file executable.
http://www.uwsg.iu.edu/usail/tasks/fileper/fileper.html
well explained here.
set SUID and SGID: numeric values 4 and 2 respectively, e.g. chmod 6xxx drop_box or chmod u+s,g+s drop_box
set sticky bit: chmod 1xxx public or chmod +t public
http://www.dba-oracle.com/linux/sticky_bit.htm
man
For instance, the passwd command has a man page in section 1 and another in section 5. By default, the man page with the lowest number is shown. If you want to see another section than the default, specify it after the man command:
man 5 passwd
If you want to see all man pages about a command, one after the other, use the -a to man:
man -a passwd
apropos lists all man pages whose descriptions contain a keyword
umask
With the umask command you can specify a mask that the system uses to set access permissions when a file is created. In order to understand umask you need to know that the access permission at file creation is application-dependent: each command or application requests a file permission in its open call. The system then "subtracts" any user-defined mask, resulting in the final access permission for the file. You can set a umask with this command:
% umask [ooo]
where ooo stands for three octal digits. The user-specified "mask", ooo, has the same positional structure as described above for chmod, but specifies permissions that should be removed (disallowed).
For example, a mask of 022 removes no permissions from owner, and removes write permission from group and others. Thus a file normally created with 777 would become 755 (this would appear as rwxr-xr-x in the format put out by the command ls -l). The following command could be put in your .cshrc or .profile.
% umask 022
The meaning of permissions applied to directories is described in Section 6.6.2.
6.6.2 Directory Permissions
See section 7.6 for AFS systems.
You can grant or deny permission for directories as well as files, and protection assigned to a directory file takes precedence over the permissions of individual files in the directory.
* Read permission for a directory allows you to read the names of the files contained in that directory with the ls command, but not to use them.
* Write permission for a directory allows you to create files in that directory or to delete any file in the directory, regardless of the file protection on the files themselves. It does not allow you to see the files or use them without r and x directory permission. In other words, write permission to a directory allows you to alter the contents of the directory itself, but not to alter, except to remove, files in the directory (which is controlled by the file's permissions).
* Execute permission for a directory allows you to access the files in it (cd into the directory and open files by name), but not to list its contents.
syslogd and klogd are the system loggers. klogd handles kernel logging, but is often bundled in with syslogd and configured with it. The loggers themselves are useless for after the fact debugging — but can be configured to log more data for the next crash.
Use /etc/syslog.conf to determine where the system log files are, and to see where the kernel log files are, if /proc/kmsg doesn't exist.
CTRL-C interrupt a job, CTRL-Z suspend a job
If you start a long-running task and forget to add the ampersand, you can still swap that task into the background. Instead of pressing ctrl-C (to terminate the foreground task) and then restarting it in the background, just press ctrl-Z after the command starts, type bg, and press enter. You'll get your prompt back and be able to continue with other work. Use the fg command to bring a background task to the foreground.
You might wonder why you'd ever want to swap programs between the foreground and background, but this is quite useful if, for example, you're doing a long-running compile and you need to issue a quick command at the shell prompt. While the compilation is running, you could press ctrl-Z and then enter the bg command to put the compiler in the background. Then do your thing at the shell prompt and enter the fg command to return the compiler task to the foreground. The ctrl-Z trick also works with the Emacs text editor and the Pine email program. You can suspend either program and then return to your work in progress with the fg command.
what happens when you type 'ls'
http://linuxgazette.net/111/ramankutty.html
The following is taken from http://www.tldp.org/HOWTO/Unix-and-Internet-Fundamentals-HOWTO/running-programs.html
The shell is just a user process, and not a particularly special one. It waits on your keystrokes, listening (through the kernel) to the keyboard I/O port. As the kernel sees them, it echoes them to your screen. When the kernel sees an ‘Enter’ it passes your line of text to the shell. The shell tries to interpret those keystrokes as commands.
Let's say you type ‘ls’ and Enter to invoke the Unix directory lister. The shell applies its built-in rules to figure out that you want to run the executable command in the file /bin/ls. It makes a system call asking the kernel to start /bin/ls as a new child process and give it access to the screen and keyboard through the kernel. Then the shell goes to sleep, waiting for ls to finish.
When /bin/ls is done, it tells the kernel it's finished by issuing an exit system call. The kernel then wakes up the shell and tells it it can continue running. The shell issues another prompt and waits for another line of input.
Other things may be going on while your ‘ls’ is executing, however (we'll have to suppose that you're listing a very long directory). You might switch to another virtual console, log in there, and start a game of Quake, for example. Or, suppose you're hooked up to the Internet. Your machine might be sending or receiving mail while /bin/ls runs.
develop kernel in C
A lot of good tutorials
re-entrancy issues
Linux device driver
sample device driver
In particular, you should not call printf() from within an ISR without careful consideration. If stdout is mapped to a device driver that uses interrupts for proper operation, the printf() call can deadlock the system, waiting for an interrupt that never occurs because interrupts are disabled. You can use printf() from within ISRs safely, but only if the device driver does not use interrupts. http://forum.niosforum.com/forum/lofiversion/index.php/t502.html