As a leader in patent analysis, Landon IP has expert searchers who answer complex questions on a daily basis to produce the highest quality results. If you work with patents, you know the obstacles we face: data collections are huge and unwieldy; errors in the data are rampant; and in short, nothing involving patent data is ever easy. Today, I’m going to share some of our most basic strategies for producing high-quality datasets that lead to reliable results. Read on for an insight into these best practices.
Unless you live under a rock, you’ve probably heard of the America Invents Act. The Senate passed its version of the bill in March, and the House of Representatives passed its own version on June 23, 2011 with a 301-117 vote. According to a CNN article by David Goldman titled “Patent reform is finally on its way,” key issues addressed in both bills include “a transition of the U.S. patent law from a ‘first to invent’ to a ‘first to file’ system,” “provisions that attempt to keep patent battles out of the courts,” and provisions to “allow the U.S. Patent and Trademark Office to set and potentially keep its own fees.” Over the past week, my Google newsfeed has been inundated with a flurry of editorials and blog posts labeling the bill as either a travesty or a godsend. Am I going to add to this growing mountain of diverse opinion pieces? No.
Instead, I’ll show you how to search for these editorials in a useful subscription-based search system for locating business intelligence and non-patent literature. Read on to view a case study of searching for patent reform commentary in Factiva!
I’m going to take a break today from visiting obscure search systems (and writing long 2-part posts) to share with you a delightful patent application that I hold very close to my heart. I usually don’t spend my spare time reading the image file wrappers of US patent applications in PAIR, but I will openly admit that I spent a solid two hours one Saturday morning reading the entire file for US Application No. 11/161,345.
After the jump, discover the secrets of 11/161,345, a.k.a. “Godly Powers!”
Every once in a while, I’m working on a project and come across what I assume will be a simple task in my patent search software, only to find out that it just can’t be done.
I want to start highlighting and cataloging these little quirks, and I don’t mean to pick on any particular system. There are bound to be dozens of instances of this kind of thing in most products, since designers can’t anticipate every possible way searchers will want to use these systems – but when a big workflow impediment comes up, it benefits us to document it and let our vendors know.
Update: This post has been edited to reflect that PatBase is jointly operated by Minesoft and RWS Group.
Experienced patent searchers know that searching patent databases by company name is hard – and I mean really hard. The company that owns a patent is called the patent “assignee” in the US. Take a look at our assignee best practices wiki article over on the main Intellogist site for an overview of some of the obstacles that can trip you up during this kind of search.
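As a rough illustration of the name-variant problem, here is a minimal Python sketch of the kind of normalization needed before company names can be meaningfully compared. The assignee strings and the suffix list are invented for illustration; real harmonization lists are far longer.

```python
import re

# Hypothetical raw assignee strings as they might appear on patent faces.
RAW_ASSIGNEES = [
    "Acme Corp.",
    "ACME CORPORATION",
    "Acme, Inc.",
]

# A few common legal-entity suffixes; an abbreviated list for illustration.
SUFFIXES = r"\b(corp(oration)?|inc(orporated)?|ltd|llc|gmbh|ag|sa)\b\.?"

def normalize(name: str) -> str:
    """Collapse an assignee name to a crude canonical form."""
    name = name.lower()
    name = re.sub(SUFFIXES, "", name)        # drop legal-entity suffixes
    name = re.sub(r"[^a-z0-9 ]", " ", name)  # drop punctuation
    return " ".join(name.split())            # collapse whitespace

print({normalize(n) for n in RAW_ASSIGNEES})  # {'acme'}
```

Note what this sketch does not catch: subsidiaries, misspellings, translations, and name changes after mergers – exactly the obstacles that make assignee searching so hard in practice.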
One thing that makes patent owner searching so difficult is simply that patents change hands, and when they do, the information published on the patent face is no longer correct. Another difficulty is that these transactions are not always on record at the USPTO. However, the USPTO does keep a US patent assignment database of all the transactions that it *has* been notified about, and fortunately, patent search vendors can update their electronic databases with the new assignment information. (By the way, as far as I know, US reassignment data is the only reassignment data that gets collected and added to commercial patent search products on a regular basis.)
Here is a quick summary of what some major commercial providers do with US reassignment data:
Patent assignee data is a goldmine for conducting competitive intelligence. Want to know who the big players in a technology sector are? Follow the breadcrumb trail of patent assignee data. Researching and presenting company data is an important facet of patent analysis projects and may help interested parties understand where licensing opportunities exist. Today we’ll show you how to use different analysis methods to find relevant companies in our chosen technology area.
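As a toy example of the tallying involved in this kind of company analysis, here is a short Python sketch. The records and company names are invented for illustration only:

```python
from collections import Counter

# Hypothetical exported search records: (family ID, assignee name).
records = [
    ("F1", "SunPower"), ("F2", "SunPower"), ("F3", "First Solar"),
    ("F4", "SunPower"), ("F5", "Sharp"), ("F6", "First Solar"),
]

# Tally families per assignee to surface the big players in the sector.
top_assignees = Counter(assignee for _, assignee in records).most_common(3)
print(top_assignees)  # [('SunPower', 3), ('First Solar', 2), ('Sharp', 1)]
```

In a real project the tally would follow assignee-name harmonization, since un-normalized names would split one company’s count across many variants.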
Last time, in Learn Practical Patent Analysis: A Case Study (Installment 3), we discussed how classification analysis can inform a patent analysis project by narrowing the focus of the analyst and eliminating extraneous results. We ended up with a set of 1,786 patent families focused on solar panels in the Mechanical Engineering-specific IPC classes.
Patent classifications are the original patent searching tools and remain among the most useful ways to categorize and index related prior art. Classifications, by definition, are groups of patent documents of similar subject matter. Several major patent classification systems exist, including US, IPC, ECLA, Japanese F-Index and F-Term, and DEKLA. In previous posts, we have discussed how patent searching by classification works and why it’s worthwhile. To check out those posts, see parts one and two of the “Classified Information” series.
Patent classifications can be vitally important in conducting a patent analysis project, and today we’ll show you how to utilize classification systems to pare down your pool of patent documents under review in addition to potentially finding new documents of interest.
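The paring-down step can be sketched in a few lines of Python. The publication numbers, the IPC codes, and the choice of “F24J” as a class of interest are assumptions for illustration only:

```python
# Hypothetical records: (publication number, list of IPC codes).
families = [
    ("US1", ["H01L 31/042", "F24J 2/00"]),
    ("US2", ["G06F 17/30"]),
    ("US3", ["F24J 2/04"]),
]

# Keep only families carrying at least one code in the classes of interest.
WANTED_PREFIXES = ("F24J",)

kept = [
    pub for pub, codes in families
    if any(code.startswith(WANTED_PREFIXES) for code in codes)
]
print(kept)  # ['US1', 'US3']
```

The same prefix test, run against a broader class list, can also surface new documents of interest that the keyword queries missed.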
Last time, in Learn Practical Patent Analysis: A Case Study (Installment 2), we discussed how manual de-duplication can reduce the redundant patent documents returned by our first full-text search query. We also discussed how patent families can be used as a de-duplication tool to keep closely related patent applications from overwhelming the project. Removing family duplicates from the data set is desirable because it reduces the time needed for both manual and statistical analysis, and it focuses the analysis on inventive concepts rather than on individual patent documents. To keep this case study concise and to the point, I’ve chosen an automatic family de-duplication process that represents each patent family as one object. The analysis that follows is based on this method, but different patent analysis studies require different methods – a point not lost on our commenters last week.
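At its core, automatic family de-duplication amounts to keeping one representative document per family ID. A minimal Python sketch, with invented publication and family numbers:

```python
# Hypothetical search export: each hit carries a family ID; we keep the
# first document seen from each family as its representative.
hits = [
    {"pub": "US7000001", "family": "1001"},
    {"pub": "EP1234567", "family": "1001"},   # same family as above
    {"pub": "US7000002", "family": "1002"},
    {"pub": "WO2011000001", "family": "1002"},
]

seen = set()
representatives = []
for hit in hits:
    if hit["family"] not in seen:
        seen.add(hit["family"])
        representatives.append(hit["pub"])

print(representatives)  # ['US7000001', 'US7000002']
```

Which member to keep as the representative (first published, granted, English-language, and so on) is itself a methodological choice, and different studies will pick differently.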
Patent data is almost universally messy. To the uninitiated, it can be unclear which patents are even related, let alone important. When dealing with large numbers of patent documents, it helps to know how to thin the herd and focus on what matters most to you or your client. In this post I’ll show you how to clean the data by removing family duplicates in preparation for determining the most relevant classifications in a patent analysis project.
You may have heard that the US Patent and Trademark Office recently issued two classification orders that affect US classes 210, Liquid Purification or Separation, and 707, Data Processing: Database File Management or Data Structures. (Michael White over at the Patent Librarian’s Notebook recently had a write-up of the event with some interesting statistics.) However, searchers beware: although the classification systems may have been re-designed, the re-classification effort may not yet be complete for these classes. Further complicating matters, your patent search vendor may not yet have received or loaded the new class designations for the affected patents.
Today, based on some suggestions from readers, I want to introduce a case study that I hope to discuss over several blog posts. What’s more, I hope to get you, the reader, involved in the process by providing my method for you to independently audit. The goal is to use the case study to see what conclusions we can reach using a variety of search tools. I also hope readers will post questions they would like answered, based on the described method.