Competitive Analysis¶
Last updated: 2026-04-03
Executive Summary¶
Litseer occupies an underserved niche: CLI-first, local-first, open-source literature search with citation snowballing, deduplication, and quality scoring. Most tools in this space are either visualization-first (Litmaps, Connected Papers), screening-first (ASReview), or behind institutional paywalls (Clarivate, Elsevier).
An honest limitation: litseer and most competitors draw from the same open data providers (OpenAlex, Semantic Scholar, CrossRef). Litseer's additional sources are aerospace-specific (NASA NTRS, SKYbrary, IEEE, AIAA/SAE). Paywalled indexes (Scopus, Web of Science) are structurally inaccessible to open-source tools — commercial companies can license them, but that access cannot be redistributed.
The landscape is fragmented across 235+ tools (per the Systematic Review Toolbox), most of which are abandoned PhD projects. The tools that survive either do one step well (ASReview for screening, VOSviewer for visualization) or are behind paywalls (Clarivate, Elsevier). Nobody owns the whole pipeline.
Core framing: Litseer is a search engine, not an analyst. It solves the hard problem of systematic, reproducible, multi-source paper discovery and produces structured output that any analysis tool can consume — LLMs (Claude, GPT), R packages (bibliometrix), screening tools (ASReview), visualization (VOSviewer), or the researcher's own judgment. The analysis layer is deliberately external and interchangeable.
Capability Comparison¶
| Capability | Litseer | Paperfetcher | Connected Papers | Elicit | ASReview | Litmaps | ResearchRabbit |
|---|---|---|---|---|---|---|---|
| Multi-source search | 8 sources¹ | 2 | No | Partial | No | 3² | No |
| Snowballing | Yes | Yes | Visual only | No | No | Partial | Partial |
| Dedup | Yes | No | No | No | Yes | No | No |
| Quality scoring | Yes | No | No | No | No | No | No |
| CLI/automation | Yes | Partial | No | API only | Partial | No | No |
| Local-first | Yes | No | No | No | Yes | No | No |
| Open source | Yes | Yes | No | No | Yes | No | No |
| Aerospace-specific | Yes | No | No | No | No | No | No |
| Citation graph | DuckDB + planned 3D/WebXR | No | Visual (2D) | No | No | Visual (2D timeline) | No |
| Structured LLM output | Yes | No | No | N/A | No | No | No |
¹ Litseer and Litmaps share the same three open providers (OpenAlex, Semantic Scholar, CrossRef) as their core academic backbone. Litseer's additional sources are aerospace-specific (NASA NTRS, SKYbrary, IEEE Xplore, AIAA/SAE via CrossRef member filtering). Neither tool has access to paywalled indexes (Scopus, Web of Science) — a structural limitation of any open-source or small-budget tool. Commercial companies can license these indexes; open source cannot redistribute that access.
² Litmaps uses the same three open providers but presents them as a single unified index rather than federated search.
Category Analysis¶
1. Open-Source Literature Search Tools¶
ASReview (Utrecht University) — AI-aided screening using active learning. Strong PRISMA integration. But it's not a search tool — you import results from elsewhere. Complementary to litseer (litseer searches, ASReview screens).
Paperfetcher — Python package for handsearch and citation snowballing. Closest open-source comparison, but limited to 2 sources (CrossRef, PubMed), no dedup, no quality scoring.
Snowballing (JoaoFelipe) — Python + Jupyter for Wohlin snowballing methodology. Requires significant manual work. Litseer automates what this makes semi-manual.
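The Wohlin snowball loop that litseer automates can be sketched in a few lines. This is a minimal illustration, not litseer's actual code; `fetch_refs` and `fetch_citers` are stand-ins for whatever API lookups a real implementation would use:

```python
def snowball(seeds, fetch_refs, fetch_citers, max_iters=2):
    """One Wohlin-style snowball: expand a seed set backward
    (references) and forward (citing papers) until no new IDs
    appear or max_iters is reached."""
    corpus = set(seeds)
    frontier = set(seeds)
    for _ in range(max_iters):
        found = set()
        for paper_id in frontier:
            found |= set(fetch_refs(paper_id))    # backward snowball
            found |= set(fetch_citers(paper_id))  # forward snowball
        frontier = found - corpus   # keep only genuinely new papers
        if not frontier:
            break
        corpus |= frontier
    return corpus
```

The semi-manual Jupyter workflow above is essentially this loop with a human executing each `fetch_*` step by hand.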
Papis — CLI-first bibliography manager. Shares litseer's "local-first, CLI-first" philosophy but is a reference manager, not a search engine.
PyAlex — Thin Python wrapper for the OpenAlex API. A building block, not a complete tool. Note: OpenAlex has moved to metered pricing, with only about $1/day of free usage and paid tiers beyond that, which validates litseer's caching decision.
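Given metered API pricing, a local response cache is the obvious mitigation. A minimal sketch of the idea (not litseer's implementation) using stdlib `sqlite3`, keyed by request URL:

```python
import json
import sqlite3

class ResponseCache:
    """Tiny on-disk cache for API responses, keyed by request URL.
    Illustrative only; a real cache would also handle expiry."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS cache (url TEXT PRIMARY KEY, body TEXT)")

    def get_or_fetch(self, url, fetch):
        row = self.db.execute(
            "SELECT body FROM cache WHERE url = ?", (url,)).fetchone()
        if row:                      # cache hit: no API credit spent
            return json.loads(row[0])
        body = fetch(url)            # cache miss: hit the API once
        self.db.execute(
            "INSERT INTO cache (url, body) VALUES (?, ?)",
            (url, json.dumps(body)))
        self.db.commit()
        return body
```

Every repeated query, reproduced search, or re-run pipeline then costs zero API credits after the first execution.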
Litsearchr (R package) — Semi-automatic search strategy development using keyword co-occurrence networks. Could be upstream of litseer (develop keywords, then feed to litseer for execution).
2. Citation Graph / Bibliometric Tools¶
VOSviewer (Leiden University) — Industry standard for bibliometric visualization. Free Java desktop app. But not a search tool — requires pre-exported data. Litseer's export could feed VOSviewer.
CiteSpace (Drexel) — Temporal citation network analysis, burst detection. Java desktop app. Complements litseer's graph but doesn't replace it.
Bibliometrix/Biblioshiny (R) — Comprehensive bibliometric analysis. R-only. Analysis layer that could consume litseer's output.
Connected Papers — Beautiful visual maps from a single seed. But single-seed only, no batch, no API, 5 free graphs/month. Litseer's snowball is the automated equivalent.
ResearchRabbit — "Spotify for papers." Acquired by Litmaps (May 2025), now powered by Litmaps technology. Relaunched October 2025 with citation maps and premium features. Still GUI-first, no API, no structured export.
Litmaps — 2D timeline visualization, living review alerts. Freemium: Education $10/mo, Commercial $40/mo; free tier limited to 2 maps/month and 100 articles per map. Better visualization than litseer, but less automation and narrower multi-source coverage. Acquired ResearchRabbit (May 2025), combined user base 2M+; now the dominant player in visual citation discovery.
Inciteful — Multi-seed exploration, "bridge paper" detection. Web-only, no API.
arXiv Bibliographic Explorer (arXivLabs) — Open-source browser overlay that displays citation trees directly on arXiv paper pages. Enables discovery through citation navigation without leaving arXiv. Lightweight single-source approach vs litseer's multi-source pipeline. Source: https://github.com/mattbierbaum/arxiv-bib-overlay
Influence Flower (CMU) — Visualizes citation influence flows between papers, authors, institutions, and topics as a radial "flower" diagram. Different visualization paradigm from force-directed graphs (litseer) or coupling maps (Connected Papers). Interesting for showing directional influence rather than similarity clustering. Source: https://influencemap.cmlab.dev/
3. Systematic Review Automation¶
Covidence — Gold standard for SR management. But no search capability, expensive ($240/review). Project management layer, not search.
Rayyan — Fast screening with AI assistance. Complementary to litseer.
EPPI-Reviewer — Most feature-complete SR tool. But complex, per-project pricing.
otto-SR — LLM-powered end-to-end SR (96.7% screening sensitivity vs 81.7% human). Research prototype. The "throw LLMs at everything" approach vs litseer's deterministic philosophy.
4. Commercial Research Intelligence¶
Web of Science / InCites (Clarivate) — Curated citation index back to 1900. Institutional pricing (tens of thousands/year). OpenAlex was built as the open replacement.
Scopus / SciVal (Elsevier) — Largest curated database. Now has Scopus AI with LLM natural-language queries. Institutional subscription required.
Dimensions.ai — Links publications, grants, patents, clinical trials, policy documents. Free tier covers 97M+ publications. More affordable than WoS/Scopus. Could be a future litseer source adapter.
Lens.org — Uniquely combines patent + scholarly literature search. 291M+ records. CC-BY licensed data. Free for individuals. Patent-paper citation linking is unique and valuable for aerospace/defense. Strong candidate for a future litseer adapter.
Semantic Scholar — 225M+ papers, 2.8B+ citation edges. Already a litseer source. SPECTER2 embeddings and Recommendations API could enhance snowballing.
5. AI-Powered Research Tools¶
Elicit — Literature matrix extraction using LLMs. Good for structured data extraction from papers. But not reproducible, depends on LLM quality.
Consensus — "Academic search engine" for finding scientific consensus. Different use case (answers questions vs building corpora).
Scite.ai — Smart Citations showing support/contradict/mention context. Novel but requires full-text analysis at scale. Also an arXivLabs partner, displaying citation context directly on arXiv pages.
Perplexity (Academic mode) — Fast exploration with citations. Not reproducible, no structured output. Fundamentally different philosophy from litseer.
6. Domain-Specific / Aerospace¶
NASA NTRS — 500K+ aerospace citations. Already a litseer source adapter (a differentiator — very few tools integrate NTRS).
DTIC — DoD research repository. Access restricted but high-value. Potential future adapter for credentialed users.
SKYbrary — Aviation safety knowledge base. Already a litseer source adapter.
Bauhaus Luftfahrt Trend Monitor — AI-powered strategic foresight for aviation. Uses LLM multi-agent systems for STEEP trend identification and scenario matrices. Top-down foresight approach (macro trends applied to aviation) rather than bottom-up domain analysis. Complementary to litseer: Trend Monitor identifies what to look for, litseer does the looking in the actual literature.
7. Teaching Tools¶
The gap is real. No dedicated tool exists for teaching students how to do systematic literature reviews. Universities rely on static LibGuides and mentorship. This is a greenfield opportunity.
Top Competitor Deep Dive¶
Connected Papers¶
What they do well:
- Gorgeous force-directed graph from a single seed paper using bibliographic coupling
- Instant visual intuition — you see the field structure in seconds
- Uses Semantic Scholar data (225M+ papers, 2.8B+ citation edges)
- Prior/derivative works view separates foundational vs follow-up papers
- Zero setup, zero learning curve — paste a DOI, get a graph

What they don't do well:
- Single-seed only — can't combine multiple starting papers
- Limited free graphs/month, paid tiers for heavier use
- No API, no CLI, no batch processing
- No export beyond PDF/PNG of the graph itself (no structured citation data)
- No deduplication, no quality scoring, no snowballing methodology
- Similarity metric (bibliographic coupling) is opaque — no user control
- Can't build a cumulative knowledge base across sessions
Litseer advantage: Multi-seed, multi-source, automated snowballing with structured export. Connected Papers is great for a quick visual scout; litseer is for the systematic campaign.
Litmaps¶
What they do well:
- Interactive 2D citation timeline maps (date × citation count) — high-impact recent papers cluster top-right, giving instant visual intuition
- Living review monitoring with daily alerts for new papers (Pro tier)
- Multi-seed maps from multiple papers (unlike Connected Papers)
- Export: BibTeX, RIS, CSV — covers major reference manager formats
- Collaboration features (share maps, team tier for institutional use)
- 2M+ researchers after acquiring ResearchRabbit (May 2025)
- Three data providers: Semantic Scholar + OpenAlex + CrossRef (270M+ articles)
- Zotero two-way sync (Pro tier) — strongest reference manager integration in this class of tool
- "Discover" feature provides personalized paper recommendations
- Semantic similarity search (AI text matching) added in redesign
- LibKey integration for institutional full-text PDF access
- Founded 2016 (Wellington, NZ), launched November 2020 by Kyle Webster (molecular biology PhD) and Axton Pitt (neuroscience BSc, now CEO)
- ~$2.5M NZD total funding; $1M USD ARR as of 2025

What they don't do well:
- No multi-database federation — three providers aggregated into a single index, not direct queries to domain-specific databases
- No API — zero programmatic access, no automation possible
- No CLI — GUI-only, cannot integrate into pipelines
- No local-first option — all data on their servers, vendor lock-in
- No deduplication tooling (single unified index reduces but doesn't eliminate dupes)
- No quality scoring or venue classification
- No structured output for LLM pipelines (BibTeX/RIS/CSV only)
- No domain-specific sources (no NTRS, DTIC, SKYbrary, IEEE direct)
- Free tier very restrictive: 2 maps/month, 100 articles/map, no date filtering
- Date filtering paywalled — a basic research function locked behind Pro ($10–40/mo)
- Results displayed in batches of 10 — tedious for large reviews
- Small team (~$2.5M total funding) now running two brands (Litmaps + ResearchRabbit)
- Mendeley/EndNote integration is BibTeX-only (no direct sync like Zotero)
Honest comparison: Litmaps and litseer use the same three open data providers (OpenAlex, Semantic Scholar, CrossRef). Litseer's additional sources are aerospace-specific (NTRS, SKYbrary, IEEE, AIAA/SAE) — meaningful for that niche but not a general-purpose advantage. Neither tool can access paywalled indexes like Scopus or Web of Science; a commercial company with licensing agreements could, and an open-source project structurally cannot redistribute that access.
Where litseer genuinely differs: CLI automation, local-first data ownership, structured LLM-ready output, cross-source deduplication, quality scoring, and snowballing as a first-class operation. These are architectural choices, not data advantages.
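Cross-source deduplication in this space is typically DOI-first with a normalized-title fallback. A minimal illustration of that general approach (an assumption about the technique, not litseer's actual algorithm):

```python
import re

def norm_doi(doi):
    """Lowercase a DOI and strip common URL prefixes."""
    if not doi:
        return None
    doi = doi.strip().lower()
    return re.sub(r"^https?://(dx\.)?doi\.org/", "", doi)

def norm_title(title):
    """Collapse case, punctuation, and whitespace so near-identical
    titles from different providers compare equal."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def dedupe(records):
    """Keep the first record seen per normalized DOI,
    falling back to normalized title when the DOI is missing."""
    seen, unique = set(), []
    for rec in records:
        key = norm_doi(rec.get("doi")) or norm_title(rec.get("title", ""))
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```

The hard cases (preprint vs published version, title variants across languages) need fuzzier matching, but DOI normalization alone catches most cross-provider duplicates.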
Visualization: different ambitions, not absent. Litmaps does 2D timeline scatter plots (date × citations). Litseer's planned visualization (ADR-009) is architecturally different: a 3D/WebXR spatial graph where researchers can walk the citation network, open a referenced paper directly and see the key fact highlighted at the citation site, verify citation appropriateness in context, and visually detect missing clusters (sparse regions rendered as visible gaps rather than invisible absence). The metaphor is spatial exploration of a citation landscape — closer to "Minority Report for citations" than to a scatter plot. This is future work (not shipped), but it targets a fundamentally different interaction model from anything Litmaps or Connected Papers offer: navigating through the literature rather than looking at it from above.
What litseer should learn from Litmaps: Living review monitoring (daily alerts for new papers matching criteria) is a strong feature worth adopting. Zotero sync is table-stakes for researcher adoption.
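Living review monitoring reduces to a diff against previously seen IDs. A minimal sketch under stated assumptions (the state-file format and record shape are hypothetical, not an existing litseer feature):

```python
import json
from pathlib import Path

def new_since_last_run(results, state_file="seen_ids.json"):
    """Compare current search results against the IDs recorded on the
    previous run; return only unseen papers and update the state file."""
    path = Path(state_file)
    seen = set(json.loads(path.read_text())) if path.exists() else set()
    fresh = [r for r in results if r["id"] not in seen]
    path.write_text(json.dumps(sorted(seen | {r["id"] for r in results})))
    return fresh
```

Run this from cron or CI against a saved query and the "daily alert" feature is mostly an email template away.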
Elicit¶
What they do well:
- Natural-language queries return structured literature matrices
- LLM-powered data extraction from papers (methods, findings, sample sizes)
- Good for rapid evidence synthesis — "what do studies say about X?"
- API available for programmatic access
- Well-funded (spun out of Ought, a nonprofit AI research lab), active development

What they don't do well:
- LLM-dependent — results are non-deterministic and non-reproducible
- Black box: can't audit which databases were searched or how
- No citation snowballing methodology
- No multi-source federation (unclear exactly which sources)
- No local-first option — all processing on their cloud
- Expensive for heavy use (credits-based pricing)
- Fundamentally different philosophy: AI answers vs evidence collection
Litseer advantage: Deterministic reproducibility, auditable search provenance, no LLM dependency. Elicit conflates two separate problems: (1) finding the right papers (search/discovery) and (2) extracting structured data from papers (comprehension). Litseer solves #1 deterministically. For #2, researchers can use Claude, GPT, or any LLM on the corpus litseer collected — and get better results because the corpus is complete and auditable.
Elicit's original moat — "easy button for structured paper analysis" — has eroded as general-purpose LLMs with PDF upload (Claude, ChatGPT) now do the same extraction with more flexibility and transparency. The piece that remains genuinely hard is systematic, reproducible, multi-source search. That's litseer's lane.
ASReview¶
What they do well:
- State-of-the-art active learning for screening prioritization
- Published in Nature Machine Intelligence — strong academic credibility
- Open source (Apache 2.0), well-maintained, Utrecht University backing
- PRISMA flow diagram generation
- Extensible ML model framework (plug in custom classifiers)
- Simulation mode for benchmarking screening algorithms
- Supports RIS, CSV, TSV, Excel import

What they don't do well:
- Not a search tool at all — you must import results from elsewhere
- No multi-source federation, no snowballing, no deduplication
- No citation graph or bibliometric analysis
- Desktop app (Python/Electron) — no CLI pipeline integration
- Single-project model — no portfolio or accumulated knowledge base
Litseer advantage: Litseer does what ASReview can't (search + collect) and ASReview does what litseer won't (ML screening). These are the most naturally complementary tools in the space. Pipeline: litseer → BibTeX → ASReview.
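Since ASReview accepts RIS import, the handoff needs nothing more than a flat export. A hypothetical sketch of such an exporter (standard RIS field tags; the record shape is assumed, and this is not litseer's actual export code):

```python
def to_ris(records):
    """Serialize paper dicts to minimal RIS records
    (TY/TI/AU/PY/DO tags, ER terminator) for screening-tool import."""
    lines = []
    for rec in records:
        lines.append("TY  - JOUR")                 # record start / type
        lines.append(f"TI  - {rec['title']}")      # title
        for author in rec.get("authors", []):
            lines.append(f"AU  - {author}")        # one tag per author
        if rec.get("year"):
            lines.append(f"PY  - {rec['year']}")   # publication year
        if rec.get("doi"):
            lines.append(f"DO  - {rec['doi']}")    # DOI
        lines.append("ER  - ")                     # record terminator
    return "\n".join(lines) + "\n"
```

The same flat export also feeds Rayyan, Covidence, and any reference manager, which is the point of keeping the analysis layer external.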
Scite.ai¶
What they do well:
- Smart Citations: classifies each citation as supporting, contrasting, or mentioning
- Citation context extraction — shows the exact sentence where a paper is cited
- Large-scale full-text citation statement analysis
- arXivLabs integration — citation context visible directly on arXiv
- Reference check tool — paste your manuscript, get citation quality report
- API available

What they don't do well:
- Not a search/discovery tool — analyzes existing citations, doesn't find new papers
- Expensive ($20/month individual, institutional pricing)
- Proprietary — no open-source component
- Full-text dependent — can't analyze papers it doesn't have full text for
- No snowballing, no multi-source federation, no CLI
Litseer advantage: Different layer entirely. Scite's citation context classification is a feature litseer could consume (via API) to enrich its citation graph with support/contradict edges. Worth considering as a future enrichment source rather than a competitor.
CORE¶
What they do well:
- 284M+ open access articles from 14K+ global data providers
- Unique coverage of institutional repositories and grey literature
- Free API with bulk dataset access (FastSync for updates)
- Discovery plugin finds OA versions of paywalled papers
- Duplicate/version detection across repositories
- Not-for-profit, open scholarship aligned

What they don't do well:
- Search interface is basic — no advanced query syntax comparable to Scopus/WoS
- No citation graph or bibliometric analysis
- No snowballing, no screening tools
- Metadata quality varies (depends on source repositories)
- No CLI tool — API-only programmatic access
- Commercial API pricing not transparent
Litseer advantage: CORE is a strong future source adapter candidate for litseer. Its institutional repository coverage fills a gap that OpenAlex and Semantic Scholar miss. The dedup/version detection could inform litseer's own deduplication logic.
Inciteful¶
What they do well:
- Multi-seed paper discovery — start from 2+ papers, find the field between them
- "Literature Connector" finds bridge papers between two research areas
- Graph-based exploration with similarity scoring
- Free (no paywall)

What they don't do well:
- Domain now redirects to incitefulmed.com — possibly pivoting to a medical focus
- Web-only, JavaScript-heavy SPA, no API
- No structured export, no CLI, no automation
- No multi-source federation
- No snowballing methodology, no dedup, no quality scoring
Litseer advantage: Inciteful's "bridge paper" concept is interesting for litseer's graph analysis — finding papers that connect otherwise separate citation clusters. The redirect to a medical domain suggests possible abandonment of the general tool.
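The bridge-paper idea can be approximated without heavy graph machinery: any paper whose citation neighbors span two otherwise separate clusters is a bridge candidate. A stdlib-only sketch of that heuristic (illustrative; production tools typically use betweenness centrality or similar measures):

```python
def bridge_papers(edges, cluster_a, cluster_b):
    """Return papers outside both clusters whose citation neighbors
    touch BOTH clusters. `edges` is an iterable of (paper, neighbor)
    citation pairs, treated as undirected."""
    neighbors = {}
    for src, dst in edges:
        neighbors.setdefault(src, set()).add(dst)
        neighbors.setdefault(dst, set()).add(src)
    bridges = []
    for paper, nbrs in neighbors.items():
        if paper in cluster_a or paper in cluster_b:
            continue                       # only consider outsiders
        if nbrs & cluster_a and nbrs & cluster_b:
            bridges.append(paper)          # connects both clusters
    return sorted(bridges)
```

On a citation graph this surfaces exactly the papers that would render between two clusters in a spatial layout, which ties into the gap-detection goals discussed under ADR-009.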
Visualization Landscape: What Works and What Doesn't¶
Every citation visualization tool in the current landscape makes a tradeoff between legibility and truthfulness. Understanding these tradeoffs is essential for designing litseer's spatial visualization (ADR-009).
What existing tools actually show you¶
Litmaps chose temporal legibility over relational structure. The 2D scatter (date × citation count) is instantly readable — researchers intuitively grasp "top-right = high-impact recent." But node position tells you nothing about how papers relate to each other. Two thematically unrelated papers published in the same year with similar citation counts appear adjacent, which is misleading. Citation edges become visual noise past ~50 papers.
Connected Papers chose relational structure over temporal legibility. Bibliographic coupling determines node proximity — papers that share many references cluster together. This gives the strongest "aha moment" of any tool: you see field structure in seconds. But the positions are suggestive, not metrically reliable — "proximity implies shared neighborhood density, not specific similarity" (Venturini, Jacomy & Jensen, 2021). Single-seed only, no cumulative exploration.
VOSviewer chose mathematical rigor over usability. The VOS algorithm produces metrically meaningful node positions (minimizes weighted sum of squared Euclidean distances). The overlay visualization (coloring nodes by year or citation count on top of relational layout) is the best 2D compromise. But it's an analysis endpoint, not a workflow tool — import pre-exported data, generate a static image, done. Java desktop app, no live interaction with an ongoing search.
CiteSpace chose comprehensiveness over clarity. Temporal burst detection (identifying when references experience citation spikes) is unique and valuable. But encoding network structure, time, burst intensity, and cluster membership simultaneously exceeds working memory. Experts can decode it; everyone else sees a wall of color and lines.
Four failure modes all 2D tools share¶
1. The hairball problem. As graph density increases, citation edges overlap into visual noise. A corpus of 500 papers with 5,000+ citation edges produces an illegible tangle in any 2D layout. Mitigation (edge bundling, threshold filtering, hierarchical aggregation) works but loses information to gain legibility. As one researcher put it: "Hairballs are the junk food of network visualization — they look complex but communicate nothing."
2. Meaningless node positions. In force-directed layouts, two papers equidistant in the citation graph can end up at very different visual distances due to global optimization. Researchers routinely misinterpret proximity as specific similarity. Venturini et al. (2021) documented this rigorously — there is poor correlation between Euclidean distance in the layout and actual network distance (shortest path, commuting time).
3. The absence problem. The most important failure for literature review: you cannot see what is not there. A gap in the literature — an unstudied intersection of two fields — appears as empty space indistinguishable from layout algorithm spacing. No current tool makes absence visible. This is the single most underserved design opportunity in the space.
4. Temporal flattening. Most graph layouts collapse time. A seminal 1990 paper and a recent 2024 paper appear adjacent if they share citations, with no visual indication of the 34-year gap. Litmaps avoids this by using date as the x-axis, but at the cost of losing relational positioning. No tool handles the time-structure tradeoff well.
What the research says about 3D/spatial approaches¶
The visualization literature supports specific advantages of spatial/3D for network data, with honest caveats:
- Path tracing is empirically easier in 3D (Ware & Mitchell, 2008). Following citation chains through multiple generations — the core snowball task — benefits directly from the added dimension.
- VR improves long-term recall by 8.8% (Krokos et al., 2018, "Virtual Memory Palaces"). Spatial encoding in the hippocampus means researchers remember where things were in the graph. "The film cooling paper was in the lower-left cluster, near the heat transfer junction" — this spatial memory aids retrieval during writing in a way flat lists cannot.
- 3D preserves more relational structure. Citation similarity spaces are high-dimensional. Projecting to 3D loses less information than projecting to 2D, allowing better cluster separation.
- Immersive environments resolve the viewport constraint (Marriott et al., 2018, "Immersive Analytics"). A 2D screen is a fixed window; an immersive environment surrounds the user. A 2,000-paper graph that is illegible on screen can be distributed through a volume the researcher physically navigates, using level-of-detail rendering (clusters collapse at distance, expand on approach).
Honest limitations of 3D/spatial:
- Occlusion — nodes hide behind other nodes. Head-tracked stereo helps (lean to look around) but doesn't eliminate it. This is 2D's primary retained advantage.
- Precision vs. recall tradeoff — a 2024 comparative study found lower interpretation accuracy but improved long-term recall in VR. For tasks like "which paper has more citations?" a 2D table wins. For "where was that paper about thermal barrier coatings?" the spatial layout wins.
- Text legibility — reading abstracts in VR (Quest 3: ~25 PPD) is harder than on a 2D screen. ADR-009's three-level navigation (constellation → card → detail) is the right design response — text appears only at appropriate zoom levels.
The Minority Report gap is now hardware, not software¶
The gestural interface from the 2002 film (designed with MIT's John Underkoffler) required $50K+ custom installations. As of 2025–2026:
| Film element | Current status | Hardware |
|---|---|---|
| Mid-air hand gesture control | Production-ready | Quest 3, visionOS |
| Gaze-directed selection | Production-ready | visionOS (primary input), Quest 3 |
| Large transparent displays | Shipping product | Vision Pro, Quest 3 passthrough |
| Voice + gesture integration | Shipping | visionOS unified input system |
| Collaborative multi-user | Early production | visionOS shared spaces, Horizon |
The gap is no longer hardware capability. It is interaction design — nobody has built the right software to make gestural exploration of complex data feel natural and productive. The hardware is $500 (Quest 3), not $50,000. What's missing is domain-specific software that understands citation networks, reference verification, and gap detection.
Where litseer's spatial visualization aims to differ¶
Litseer's ADR-009 targets capabilities no existing tool offers:
- Navigate through the literature, not look at it from above. Open a referenced paper directly at the citation site and see the key fact highlighted — what exactly does paper A say about paper B?
- Make absence visible. Render sparse regions between clusters as visible voids (fog, grid lines, translucent markers). The researcher sees the gap and asks "why is nothing there?" rather than not noticing it.
- Citation verification in context. At the detail level, show whether a citation actually supports the claim it's attached to — a capability that connects to litseer's planned citation verification feature.
- Progressive enhancement. Same API serves 2D browser dashboard and 3D/WebXR immersive view. The precise comparison tasks stay on desktop; the exploration and spatial memory tasks go to VR.
This is future work, not shipped code. But it targets a fundamentally different design space from the 2D scatter plots and force-directed graphs that dominate the current landscape.
Why Academia Hasn't Built Better Tools¶
- PhD graveyard — tools get built as dissertations, published, abandoned
- Nobody owns the whole pipeline — search, screen, analyze are separate tools
- "Good enough" trap — manually screening 3,000 abstracts is normalized
- R/Python split — best bibliometric tools are R, excluding engineering/CS
- Commercial capture — Clarivate/Elsevier control data behind paywalls
- Funding misalignment — grants fund building, not maintaining
Market Gaps Litseer Can Fill¶
- CLI-first, automation-friendly, multi-source academic search — nobody does this
- Local-first with no vendor lock-in — rare in a SaaS-dominated space
- Aerospace-specific source federation — unique (NTRS + SKYbrary + IEEE + AIAA/SAE)
- Structured output for LLM consumption — most tools embed LLMs or produce human-only output; litseer produces clean data for external LLM pipelines
- Teaching systematic review methodology — greenfield
- Living/continuous systematic reviews — no good automation tooling exists (IArxiv does daily AI-sorted arXiv delivery but single-source, email-only)
- Patent + scholarly + citation graph in one open tool — Lens.org does parts but is not open source or automatable
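"Structured output for LLM consumption" in practice means something like JSON Lines with stable keys and search provenance. An illustrative shape only; the field names here are assumptions, not litseer's actual schema:

```python
import json

def to_jsonl(records):
    """Emit one JSON object per line: stable keys plus search
    provenance so downstream LLM prompts can cite their sources."""
    out = []
    for rec in records:
        out.append(json.dumps({
            "id": rec["id"],
            "title": rec["title"],
            "year": rec.get("year"),
            "abstract": rec.get("abstract", ""),
            "source": rec.get("source"),   # which provider returned it
            "query": rec.get("query"),     # which search produced it
        }, ensure_ascii=False))
    return "\n".join(out)
```

The provenance fields are what distinguish LLM-ready output from a plain bibliography: a downstream prompt can say "answer only from these records" and every claim remains traceable to a source and a query.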
Future Source Adapter Candidates¶
- Lens.org — patent + scholarly combined (free tier, CC-BY data)
- Dimensions.ai — grants + clinical trials + policy linkage (free tier)
- DTIC — defense technical reports (restricted access)
- Google Scholar (via Publish or Perish patterns) — broadest coverage including grey literature
- arXiv — preprint-specific, important for CS/physics crossover. arXivLabs ecosystem (Bib Explorer, Scite, Connected Papers, Litmaps) shows strong community demand for citation-aware tools on top of arXiv data
- CORE (core.ac.uk) — aggregates 284M+ open access papers from 14K+ global repositories. Unique coverage of institutional repositories and grey literature that other sources miss. Free API. arXivLabs partner
References¶
- ASReview: https://asreview.nl/
- Paperfetcher: https://paperfetcher.github.io/
- VOSviewer: https://www.vosviewer.com/
- Bibliometrix: https://www.bibliometrix.org/
- Connected Papers: https://www.connectedpapers.com/
- ResearchRabbit: https://www.researchrabbit.ai/
- Litmaps: https://www.litmaps.com/
- Inciteful: https://inciteful.xyz/
- Elicit: https://elicit.com/
- Consensus: https://consensus.app/
- Scite.ai: https://scite.ai/
- Lens.org: https://about.lens.org/
- Dimensions.ai: https://www.dimensions.ai/
- Systematic Review Toolbox (PMC): https://pmc.ncbi.nlm.nih.gov/articles/PMC9713957/
- OpenAlex Pricing: https://help.openalex.org/hc/en-us/articles/24397762024087-Pricing
- Venturini, Jacomy & Jensen (2021). "What do we see when we look at networks." Big Data & Society. doi:10.1177/20539517211018488
- Ware & Mitchell (2008). "Visualizing Graphs in Three Dimensions." ACM TAP 5(1). doi:10.1145/1279640.1279642
- Krokos, Plaisant & Varshney (2018). "Virtual memory palaces: immersion aids recall." Virtual Reality. doi:10.1007/s10055-018-0346-3
- Marriott et al. (2018). "Immersive Analytics." Springer. doi:10.1007/978-3-030-01388-2
- Graph2VR (2024). "Visualization and exploration of linked data using VR." Database. doi:10.1093/database/baae008
- Litmaps acquires ResearchRabbit (2025): https://business.scoop.co.nz/2025/05/08/nz-startup-litmaps-acquires-us-rival/
- ResearchRabbit 2025 revamp (Aaron Tay): https://aarontay.substack.com/p/researchrabbits-2025-revamp-iterative