A search engine is a web server that responds to client requests to search in its stored indexes and (concurrently) runs several web crawler tasks to build and update the indexes. What are the requirements for synchronization between these concurrent activities?
What will be an ideal response?
The crawler tasks can build partial indexes for newly fetched pages incrementally and then merge them with the active index (which includes deleting references that have become invalid). This merge can be performed on an off-line copy of the index, so client requests are never served from a half-merged structure. Once the merge is complete, the environment for processing client requests is switched over to the new index. That switch may need some concurrency control, but in principle it amounts to updating a single reference to the index, which should be an atomic operation: each concurrent search then sees either the old index or the new one in its entirety, never a mixture.
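The scheme above can be sketched in a few lines of Python. This is a hypothetical illustration, not code from any real search engine: the names `SearchEngine`, `search`, and `publish` are inventions for this sketch. Readers snapshot the index reference and only ever read from an index object that is never mutated after publication; the crawler builds the merged index off-line and publishes it with a single reference assignment, which serves as the atomic switch (in CPython, rebinding one attribute is atomic).

```python
class SearchEngine:
    """Sketch: queries read the current index through one reference;
    crawlers publish a new index by replacing that single reference."""

    def __init__(self):
        self._index = {}  # active index: term -> set of page URLs

    def search(self, term):
        # Snapshot the reference; the index object is immutable once
        # published, so concurrent reads need no lock.
        index = self._index
        return index.get(term, set())

    def publish(self, new_index):
        # Single atomic reference assignment: concurrent searches see
        # either the old index or the new one, never a mixture.
        self._index = new_index


def crawl_and_merge(engine, new_pages):
    # Build the merged index on an off-line copy of the active index.
    merged = {term: set(urls) for term, urls in engine._index.items()}
    for url, terms in new_pages.items():
        for term in terms:
            merged.setdefault(term, set()).add(url)
    # Invalid references would be deleted here before publication.
    engine.publish(merged)
```

Because readers never hold the reference across the swap and published indexes are never mutated, no further locking is required on the query path; only concurrent publishers would need coordination.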