This patent had a [log in to view] grant time compared to others in this category.
Patent grant time can be influenced by many factors. Activities within the USPTO that are beyond the control of patent attorneys can affect grant time, but short grant times can also indicate well-written patents and prompt, well-argued responses to USPTO office actions. Shorter grant times are preferable, so the scores for this section are inverse measures: higher scores are better.
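The inverse scoring described above can be sketched as a percentile rank in which lower raw values (such as shorter grant times) map to higher scores. The function name, the 0–100 scale, and the sample cohort are assumptions for illustration, not the report's actual formula.

```python
def inverse_percentile_score(value, cohort):
    """Score a patent's metric against its category cohort.

    Inverse measure: lower raw values (e.g. shorter grant times)
    yield higher scores on a hypothetical 0-100 scale.
    """
    # Fraction of the cohort whose value is at least as large as ours;
    # a short grant time beats most of the cohort and scores near 100.
    beaten = sum(1 for v in cohort if v >= value)
    return 100.0 * beaten / len(cohort)

# Hypothetical grant times in months for a category cohort
cohort = [18, 24, 30, 36, 48, 60]
print(inverse_percentile_score(18, cohort))  # shortest grant time -> 100.0
```

The same inverse logic applies to the "citations to other patents" metric below, where a lower citation count likewise earns a higher score.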
This patent has [log in to view] claims compared to others in this category.
The number of claims in a patent is correlated with its strength. Because more claims increase the cost of a patent, a high claim count can indicate the importance an applicant assigns to the patent. Note, however, that some applicants elect to spread claims across multiple patents. A higher score in this metric indicates more claims relative to others in this category.
This patent has received [log in to view] citations from other patents compared to others in this category.
Citations from other patents are an important measure of a patent's significance: more citations indicate that other technologies build on it. Higher scores in this metric are better and indicate more citations from other patents.
This patent referenced [log in to view] citations to other patents compared to others in this category.
A higher number of citations to other patents can be a sign of diminished patent strength, since more citations indicate dependence on more other technologies. Higher scores in this metric are better and indicate fewer citations to other patents.
This patent has [log in to view] proximity to basic research compared to others in this category.
Proximity to basic research is measured by comparing the number of citations to non-patent literature across a cohort of patents. Because most non-patent citations are to primary research papers, a higher count indicates greater proximity to basic research.
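The direct-scored metrics (claim count, citations from other patents, and non-patent-literature citations) run in the opposite direction: higher raw counts earn higher scores. A minimal sketch, again assuming a 0–100 percentile scale and a hypothetical cohort rather than the report's actual method:

```python
def percentile_score(value, cohort):
    """Direct measure: higher raw values (e.g. more forward citations,
    more non-patent-literature citations) yield higher 0-100 scores."""
    # Fraction of the cohort whose value is at most ours.
    beaten = sum(1 for v in cohort if v <= value)
    return 100.0 * beaten / len(cohort)

# Hypothetical counts of non-patent-literature citations in a cohort
npl_counts = [0, 1, 1, 2, 5, 12]
print(percentile_score(12, npl_counts))  # most NPL citations -> 100.0
```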
| Patent Number | Title |
| --- | --- |
| 6,732,235 | Cache memory system and method for a digital signal processor |
| 6,651,245 | System and method for insertion of prefetch instructions by a compiler |
| 6,578,131 | Scaleable hash table for shared-memory multiprocessor system |
| 6,549,995 | Compressor system memory organization and method for low latency access to uncompressed memory regions |
| 6,282,706 | Cache optimization for programming loops |
| 6,237,073 | Method for providing virtual memory to physical memory page mapping in a computer operating system that randomly samples state information |
| 6,047,363 | Prefetching data using profile of cache misses from earlier code executions |
| 5,742,804 | Instruction prefetch mechanism utilizing a branch predict instruction |
| Patent Number | Title |
| --- | --- |
| 9,047,116 | Context switch data prefetching in multithreaded computer |
| 9,015,720 | Efficient state transition among multiple programs on multi-threaded processors by executing cache priming program |
| 8,996,724 | Context switched route look up key engine |
| 8,589,943 | Multi-threaded processing with reduced context switching |
| 8,341,352 | Checkpointed tag prefetcher |
| 8,141,098 | Context switch data prefetching in multithreaded computer |
| 8,099,515 | Context switched route look up key engine |
| 8,041,929 | Techniques for hardware-assisted multi-threaded processing |
| 8,010,966 | Multi-threaded processing using path locks |
| 7,873,816 | Pre-loading context states by inactive hardware thread in advance of context switch |
| 7,856,510 | Context switched route lookup key engine |
| 7,818,747 | Cache-aware scheduling for a chip multithreading processor |
| 7,739,478 | Multiple address sequence cache pre-fetching |
| 7,617,499 | Context switch instruction prefetching in multithreaded computer |
| 7,606,363 | System and method for context switching of a cryptographic engine |
| 7,493,621 | Context switch data prefetching in multithreaded computer |
| 7,383,402 | Method and system for generating prefetch information for multi-block indirect memory access chains |
| 7,383,401 | Method and system for identifying multi-block indirect memory access chains |
| 7,324,106 | Translation of register-combiner state into shader microcode |
| 7,103,724 | Method and apparatus to generate cache data |