
Global nanotech review by U.S. Office of Naval Research

Ronald Kostoff of the Office of Naval Research brings our attention to some new ONR publications, listed on this page: “Two recent reports (1, 2) contain a text mining survey and analysis of the global nanotechnology literature, and should be of use to nanotechnology research performers, managers, planners, sponsors, evaluators, vendors, and implementers/users…

“Some highlights of these documents include:
* The Far Eastern countries have expanded nanotechnology publication output dramatically in the past decade.
* The People's Republic of China ranks second to the USA (2004 results) in nanotechnology papers published in the Science Citation Index (SCI), and has increased its nanotechnology publication output by a factor of 21 in a decade.
* Of the six most prolific (publications) nanotechnology countries, the three from the Western group (USA, Germany, France) have about eight percent more nanotechnology publications (for 2004) than the three from the Far Eastern group (China, Japan, South Korea). 
* While most of the high nanotechnology publication-producing countries are also high nanotechnology patent producers in the US Patent Office (as of 2003), China is a major exception.  China ranks 20th as a nanotechnology patent-producing country in the US Patent Office. 
* China has minor representation in the most highly cited nanotechnology documents”

China’s relative lack of patents *may* just show that they aren’t spending their time on paperwork and their funds on attorney fees. Their minor representation in the most highly cited documents could possibly be due to Western disinterest. Or perhaps these are real indicators that China is farther behind than many are claiming. Even in that case, however, it would be unwise to assume this will continue.—Christine

6 Responses to “Global nanotech review by U.S. Office of Naval Research”

  1. John Novak Says:

    The debate about China’s success or lack thereof in nanotechnology and nanoscience, it seems to me, hangs on the question of metrics. Those claiming that China is the upcoming tiger of nanotechnology usually point to number of publications or number of patents or number of science and engineering graduates as the metric.

    Critics, on the other hand, argue that quantity is not the same as quality, and so these statistics could be misleading.

    It occurs to me that a better metric here would not just be “number of publications” but “number of influential publications.” As a first cut at quantifying “influential” in this context, I claim we should be using citations as an indicator of merit and influence: something along the lines of “number of papers on the most often cited list.”
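    John's “most often cited list” metric is easy to make concrete. Below is a toy sketch (the records and numbers are invented, not taken from the ONR report) of counting how many of the top-cited papers belong to each country:

```python
from collections import Counter

def top_cited_breakdown(papers, top_n=10):
    """Count how many of the top_n most-cited papers each country has.

    papers: iterable of (country, citation_count) pairs -- a stand-in
    for whatever bibliographic records the real study used.
    """
    # Rank by citation count, keep the top_n, then tally countries.
    ranked = sorted(papers, key=lambda p: p[1], reverse=True)[:top_n]
    return Counter(country for country, _ in ranked)

# Invented toy records, purely for illustration:
records = [
    ("USA", 310), ("USA", 250), ("China", 40),
    ("Japan", 180), ("USA", 120), ("China", 95),
]
print(top_cited_breakdown(records, top_n=3))
# → Counter({'USA': 2, 'Japan': 1})
```

    Country breakdowns like those reported in the study would then fall out of a single pass over the bibliographic data.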

    With that in mind, I haven’t read the whole report (because it’s 72 pages long and I’m enjoying my vacation) but I did search on “citation” to see what shook loose. Here are the highlights of that exercise:

    One finding is a list of the twenty Most Cited First Authors; the breakdown is:

    United States: 8
    China: 5
    Japan: 4
    Australia: 1
    Netherlands: 1
    France: 1

    There is also a list of the ten most cited articles. The breakdown there is:

    United States: 7
    Japan: 1
    Netherlands: 1
    Switzerland: 1

    There is a list of the most cited journals. They aren’t broken down by nation, but the text of the article takes pains to point out that none of the journals are Chinese publications. (It’s not clear if the majority are United States publications, but that’s the implication.)

    There is a further section of “interpretations” (5.7, to be exact) which raises some obvious questions about the methodology that produced the results above, such as the composition of the several tens of thousands of articles used as the basis for the study, whether it’s English-language biased, etc. (It’s my opinion that this section bends over backwards to raise questions that, again, make China look scary, rather than drawing the obvious conclusion that Chinese research in this area is producing quantity, but not quality. Your mileage may differ.)

    An issue *not* raised, unfortunately, was the notion of national self-selection. It would be fascinating to me to see if we could tell if there was a single, global body of research (indicated by many cross-citations) or if there was a Chinese research pool, a Western research pool, and a Japanese research pool (for instance) which might be indicated by a lack of cross-citations.
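    One way to quantify that cross-citation question is to build a matrix of what fraction of each country's outgoing references point to papers from each other country. A sketch, with hypothetical paper IDs and a made-up country lookup:

```python
from collections import defaultdict

def cross_citation_matrix(citations, country_of):
    """Fraction of each country's outgoing references that point
    to papers from each country (including itself).

    citations: iterable of (citing_paper_id, cited_paper_id) pairs
    country_of: dict mapping paper id -> country (hypothetical lookup)
    """
    counts = defaultdict(lambda: defaultdict(int))
    for citing, cited in citations:
        counts[country_of[citing]][country_of[cited]] += 1
    # Normalize each row to fractions of that country's total references.
    matrix = {}
    for src, row in counts.items():
        total = sum(row.values())
        matrix[src] = {dst: n / total for dst, n in row.items()}
    return matrix

# Toy example with invented IDs: Chinese papers citing mostly each
# other would show up as a high CN->CN fraction.
cites = [("cn1", "cn2"), ("cn2", "cn1"), ("cn1", "us1"), ("us1", "us2")]
nation = {"cn1": "CN", "cn2": "CN", "us1": "US", "us2": "US"}
print(cross_citation_matrix(cites, nation))
```

    A country whose row is dominated by its own column would look like a largely self-contained research pool; heavy off-diagonal mass would suggest a single global body of research.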

  2. Christine Peterson Says:

    Regarding the notion of national self-selection: Does anyone know what percentage of Chinese journals are in English? How about Japan’s? Is that covered in the ONR work, which I haven’t waded through yet?—Christine

  3. Richard Jones Says:

    On the question of whether there is a distinct Chinese research pool published in Chinese journals, my distinct impression is that there isn’t. I visited China a couple of years ago, and from the limited sample of labs I visited and people I saw, it seemed very clear that the aspirations of Chinese scientists were exactly the same as the aspirations of scientists in the West: to publish in the international journals with the highest impact (Nature, Science, Physical Review Letters, Applied Physics Letters, Advanced Materials, etc.).

    It’s true that journals do have an original national home, but they have become highly internationalised. Physical Review Letters, for example, which is the most prestigious US-based physics journal, went through the landmark of having more than 50% of its papers from outside the USA a few years ago, while Nature, nominally based in the UK, has flourishing branch offices in the US and the Far East, and only a minority of what it publishes comes from Europe, let alone the UK.

    As to who cites whom, it’s obvious to everyone that US scientists disproportionately cite other US scientists, not for any particularly sinister reasons but simply because everyone tends to cite the work that one has some personal connection with, from knowing the people or having heard the scientists talk at a conference (this of course is why European scientists like me, who still want to make some impact in the USA, spend so much time on transatlantic aircraft).

    Despite these distortions, I agree that looking at citations is the best measure we have of originality, impact and importance. It’s good that the ONR study did take a look at this, but it still suffers from the unduly restrictive, arbitrary and conservative definitions of nanotechnology that I discussed on Soft Machines in the context of the Lux report.

    All the best for the holiday to our cousins from the USA!

  4. Christine Peterson Says:

    Thanks for that useful info on China, Richard. Now how about Japan?—Christine

  5. John Novak Says:


    “As to who cites whom, it’s obvious to everyone that US scientists disproportionately cite other US scientists, not for any particularly sinister reasons but simply because everyone tends to cite the work that one has some personal connection with, from knowing the people or having heard the scientists talk at a conference (this of course is why European scientists like me, who still want to make some impact in the USA, spend so much time on transatlantic aircraft).”

    I did not mean to imply any sinister reasoning (not on the part of the individual scientists, anyway). You list one good reason for a potential segregation of research efforts, namely the collaboration and familiarity issues. Another good reason would be simple language issues. And there’s also the potential that the governments in question are acting cynically, both with regard to publication and to the import of information. In the United States, neither of those is particularly an issue. In China, I think we all know about the “Great Wall of China” philosophy being applied to the Internet; I admit freely that I have absolutely no idea how that would be applied (or how successfully, or to what effect) to the Chinese research community. It is, after all, 2005, not 1955.

    I raise the governmental issue mostly for completeness; I’d expect the geographic concerns to be the larger issues.



  6. Ronald Kostoff Says:

    I have read the four comments on our two nanotechnology reports, and would like to respond to some of the comments.

    1. The Overview, introducing the documents

    Five highlights were selected from the cover letter and presented in the overview. The remaining highlights from the cover letter place the results in larger context, and are as follows:

    “Finally, to place these nanotechnology results in a larger perspective, an ongoing energetic materials (explosives, propellants) study shows China to be second to the USA in numbers of energetic materials SCI articles published for the first half of 2005, and even more competitive with the USA when energetic materials articles from the more applied Engineering Compendex are compared. In 1998, China had one-fifth the number of SCI research articles as the then second place holder, Russia, and by 2004, China was tied with Russia.

    Thus, in these two critical technologies studied in detail, nanotechnology and energetic materials, China is second to the USA in absolute numbers of SCI publications, despite the fact that China’s aggregate SCI paper production is less than 25 percent that of the USA. An almost-completed text mining study of China’s overall S&T provides other examples of China’s global technology competitiveness today, and shows specific technology sub-areas (including nanotechnology sub-areas) in which China is actually leading the USA in articles published. As the above studies have shown, aggregate country publication productivity results can be somewhat misleading. Publications in critical technologies and sub-technologies are most important, and should serve as the basis for publication comparison.”

    Additionally, the overview addresses China’s relative lack of patents. That conclusion was for the USPTO database only.

    Finally, the overview also addresses China’s minor representation in the most highly cited documents. The reasons for this minor representation are not clear, and require further study. Documents may not be represented highly in this top tier because: 1) they are of poor quality, and/or 2) they are not widely available, and/or 3) they are very applied (on average, the more fundamental documents are cited more highly, although exceptions exist), and/or 4) they are not focused on ‘hot’ or trendy topics that are being addressed by large numbers of researchers, or 5) a variety of other reasons. There were documents written by Chinese authors that received respectable numbers of citations, but not all that many made it to the highest tier. I would strongly recommend a follow-on study in which experts read a sample of Chinese-authored documents and compare them with documents from the USA or other advanced technology countries for relative quality. That would go a long way toward resolving this oft-mentioned issue of relative quality.

    2. John Novak’s comments

    John focuses on the quantity/quality issue, and references the section in the report that addresses the topic of China’s minor representation in the top citation tier. It would be useful to display this section in its entirety, so the reader can judge for him/herself how we treated this issue.

    “5.7. Interpretations and Context Analysis

    For nanotechnology in particular, further analyses are required to interpret the meaning of the country and document citation results. In the country output results, the USA was listed as first, and China was listed as second. However, this analysis is based solely on the SCI results. Some analysts believe that there is an English language bias in the SCI [Winkmann et al, 2002]. An examination of Chinese language journals not in the SCI would have to be conducted for nanotechnology content, compared to similar US journals not in the SCI, and the results combined with the SCI results to get a more comprehensive picture of the relative country outputs.

    In the paper citation results, only a very few of the most cited papers have Chinese authors (~two percent). What is more important than the actual numbers is the interpretation of the numbers.

    Is the low representation of highly cited papers by Chinese authors due to poor quality? Is the low representation of highly cited papers due to unawareness of the rest of the nanotechnology community of Chinese-authored research? Is the low representation of highly cited papers due to the content being more applied? Previous studies by the first author on this topic have shown that the more fundamental papers, and the more fundamental journals, tend to receive higher citations.

    If the latter (more applied content) is a significant factor in reduced citations, what are its implications? Perhaps the dynamic is that developed countries like the USA, Japan, and Germany are producing the fundamental research advances in nanotechnology, and China is exploiting these advances to produce products of defense and commercial importance.

    The research sponsoring community has long assumed a classical model for the respective roles of industry and government in the research enterprise. Government would fund the high-risk potentially high-payoff research that industry would be unwilling to fund, and industry would fund the more developed technology when some of the front-end risk had been removed. However, what if China has decided, at least in its present stage of development, to operate in the industrial mode? Its front end very fundamental research would be provided by the (presently) advanced countries, and China would be free to use its scarce research funds to focus more closely on applications.

    To answer these questions concerning relatively low citation rates of Chinese-authored articles, more citation mining types of bibliometrics analyses are required. The nature of the Chinese-authored papers in both the SCI and non-SCI journals needs to be explored. Some types of qualitative analyses are required to understand the quality and category of development of published papers.

    First, the Chinese-authored papers would need to be identified in the SCI-accessed journals, in the non-SCI English-language journals, and in the non-SCI non-English language journals (mainly, but not exclusively, Chinese). Then, a sampling would have to be read, and qualitative metrics assigned to each article. Such metrics are discussed in more detail in Kostoff, 2005b. It is strongly recommended that future text mining studies on nanotechnology include these qualitative analyses, especially for papers in the non-English literature.”

    Further, there is a measure of quality in the SCI publications. The SCI accesses the premier research journals globally. These journals, for the most part, are peer-reviewed, and therefore apply some measure of quality filtering to what they publish. So the papers accessed by our query from the SCI can be viewed as meeting, on average, some reasonable threshold of quality.

    John also raises the issue of national self-selection, my interpretation of which relates to country self-citations. We have found from past text mining/bibliometric studies that there tends to be a country self-citation bias. I had hoped to use this information to identify key Chinese journals not in the SCI that publish nanotechnology papers. That would have provided some indication of what we missed by not going beyond the SCI. I examined about fifty Chinese-authored papers manually, going through each reference. Almost none were to papers in Chinese non-SCI journals. Because of the limited information the SCI provides about each reference, I could not easily determine whether Chinese national authors were referencing other Chinese national authors in the English-language journals. Based on the little data that I found, it did not seem to me that the Chinese national authors of SCI nanotechnology papers were referencing their fellow country-people unduly. What their practices are in the non-SCI Chinese-language papers, I cannot answer.
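    That self-citation comparison can be made concrete by checking each country's observed self-citation rate against its share of the literature, i.e. the rate expected if references were drawn at random. The numbers below are invented, not taken from the fifty-paper sample described above:

```python
def self_citation_bias(refs_by_country, country_share):
    """Compare each country's observed self-citation rate with the
    rate expected under random citation (its publication share).

    refs_by_country: {country: (self_refs, total_refs)} from a manual
    sample of papers (toy numbers here, not real data).
    country_share: {country: fraction of all papers it publishes}.
    """
    report = {}
    for country, (self_refs, total) in refs_by_country.items():
        observed = self_refs / total
        expected = country_share[country]
        report[country] = {
            "observed": observed,
            "expected": expected,
            # >1.0 means the country cites itself more than chance.
            "bias_ratio": observed / expected,
        }
    return report

# Hypothetical sample: 120 of 400 references in Chinese-authored
# papers cite Chinese authors, while China publishes ~15% of the field.
print(self_citation_bias({"CN": (120, 400)}, {"CN": 0.15}))
```

    A bias ratio well above 1.0 would indicate undue self-citation; a ratio near 1.0 would support the impression that the SCI papers sit in one global research pool.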

    3. Christine Peterson’s comments

    I didn’t address the issue of percentage of Chinese or Japanese journals in English.

    4. Richard Jones’ comments

    I agree completely with all of Richard’s comments except the last two sentences, and with those only partially or less. Citations are a good starting point, and on average the best work probably receives high citations, but interpretation can be complex. Supporting measures and analyses are needed for definitive conclusions; the China nanotechnology results are a good example.

    Our nanotechnology definitions are described as unduly restrictive, arbitrary, and conservative. Our operational definition was the 92-term query that we generated using an iterative relevance feedback technique. I thought it was pretty comprehensive, but more can always be added. It would have helped had Richard identified how he would have modified the query to make it more inclusive. This would give us a more tangible basis for further discussion.
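    For readers unfamiliar with relevance feedback, here is a drastically simplified single round of query expansion; the real 92-term query was constructed far more carefully, and the documents and terms below are invented:

```python
from collections import Counter
import re

def expand_query(query_terms, relevant_docs, k=5):
    """One toy round of relevance feedback: add the k terms that
    occur most often in documents judged relevant but are not yet
    in the query. Iterating this (retrieve, judge, expand) is the
    general idea behind building a comprehensive term query.
    """
    counts = Counter()
    for doc in relevant_docs:
        counts.update(re.findall(r"[a-z]+", doc.lower()))
    # Skip terms already in the query and very short stopword-like terms.
    new_terms = [t for t, _ in counts.most_common()
                 if t not in query_terms and len(t) > 3][:k]
    return list(query_terms) + new_terms

# Invented documents, purely for illustration:
docs = [
    "nanotube synthesis and nanowire growth",
    "quantum dot and nanowire devices",
]
print(expand_query(["nanotube"], docs, k=2))
# → ['nanotube', 'nanowire', 'synthesis']
```

    Each round retrieves more relevant documents, which in turn suggest more terms, until the query stabilizes.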
