Should a process be penalized for generating excessive page faults?

What will be an ideal response?

This is an interesting question, since excessive page faults are sometimes generated because the operating system is doing a poor job of maintaining a process's working set in primary memory. How do you define "excessive page faults" when there is no accepted standard for page fault rates? The answer depends on the characteristics of each individual process. A high page fault rate could indicate a process that is poorly written and therefore does not exhibit locality of reference. Or it could be a well-written process whose application naturally requires a widely dispersed reference pattern. A process could also simply be transitioning between major phases of execution, in which case it will temporarily have a high page fault rate as it pages in its new working set; this is perfectly normal behavior that the system should not penalize.
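The effect of locality on fault rate can be illustrated with a toy LRU page-replacement simulator (a sketch for illustration only, not part of the original answer): the same number of frames yields wildly different fault counts depending on the reference pattern, which is why no single fault-rate threshold fits all processes.

```python
from collections import OrderedDict

def count_page_faults(refs, frames):
    """Simulate LRU replacement with a fixed number of frames
    and count the page faults a reference string generates."""
    resident = OrderedDict()  # resident pages, in least-recently-used order
    faults = 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)  # hit: refresh LRU position
        else:
            faults += 1
            if len(resident) >= frames:
                resident.popitem(last=False)  # evict least recently used
            resident[page] = None
    return faults

# Strong locality: the process touches a small set of pages repeatedly.
local_refs = [p for _ in range(100) for p in (0, 1, 2)]
# Dispersed pattern: the process cycles through many pages sequentially.
dispersed_refs = [p % 50 for p in range(300)]

print(count_page_faults(local_refs, frames=4))      # 3 faults (cold misses only)
print(count_page_faults(dispersed_refs, frames=4))  # 300 faults (every reference)
```

Both traces are the same length and run with the same memory allotment, yet the fault rates differ by two orders of magnitude, and neither process is necessarily "misbehaving."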

Computer Science & Information Technology
