AI patent drafting and prosecution platform serving 200+ IP teams, including Siemens and DLA Piper; $12M Series A led by 20VC and 25% MoM growth, competing with PatSnap in AI for IP law.
Solve Intelligence is an AI-powered patent drafting and prosecution platform that automates the most time-consuming workflows in intellectual property law: generating first-draft patent applications, responding to patent office actions, creating claim charts for infringement analysis, and assisting patent prosecutors with the research and document creation that consume significant attorney time. Founded in 2023 in the UK, Solve Intelligence has raised $15.5 million in total, including a $12 million Series A led by 20VC (Harry Stebbings' fund). It serves 200+ IP teams globally, including Siemens, Avery Dennison, and DLA Piper, with millions in ARR and 25% month-over-month revenue growth.

Solve Intelligence's AI analyzes invention disclosures and prior art to generate patent application drafts (claims, description, and drawings descriptions) that patent attorneys review and refine rather than writing from scratch, significantly reducing the hours required per filing. The office action response tool analyzes USPTO and EPO examiner rejections and generates the legal arguments and claim amendments most likely to overcome each rejection, drawing on the AI's knowledge of prosecution strategy and patent law. This is particularly valuable given the shortage of qualified patent attorneys relative to the volume of patent applications.

In 2025, Solve Intelligence competes in the legal AI market for intellectual property with PatSnap (patent analytics), Anaqua (IP management software), and general legal AI platforms such as Harvey AI for AI-powered IP workflows. Patent prosecution is an attractive AI application because it is document-intensive, rule-based (following USPTO/EPO procedural requirements), and highly repetitive (similar document types with different technical content), characteristics that suit current AI capabilities well. The 25% month-over-month growth signals strong market pull.
The 2025 strategy focuses on expanding beyond UK/European IP firms to US patent firms, deepening claim chart automation for patent licensing teams, and adding prosecution analytics that help firms track office action response success rates.
Serverless GPU cloud platform for AI/ML with Python-native deployment and per-second billing; a developer favorite with scale-from-zero, competing with Replicate and Beam for AI compute.
Modal is a serverless cloud computing platform purpose-built for AI and machine learning workloads. It provides on-demand GPU compute that scales instantly from zero, per-second billing, container management, distributed training support, and a Python-native developer experience that makes running ML workloads in the cloud feel as simple as running code locally. Founded in 2021 in New York City and backed by Redpoint Ventures and other investors, Modal has grown rapidly as AI development has accelerated demand for flexible, developer-friendly GPU infrastructure.

Modal's developer experience is its primary differentiator: engineers write Python functions decorated with @modal.function() and deploy them to the cloud with a single command, with Modal handling container builds, GPU provisioning, auto-scaling, and execution. The platform supports training jobs that need distributed compute across multiple GPUs, model-serving endpoints that scale to zero when unused (eliminating idle GPU costs), and batch inference jobs that process large datasets. The per-second billing model means developers pay only for actual compute time, not for provisioned instances.

In 2025, Modal competes in the AI infrastructure market for serverless GPU compute with Replicate, Beam, Banana, and the major cloud providers' managed ML services (AWS SageMaker, Google Vertex AI, Azure ML). The market for AI-specific cloud infrastructure has grown dramatically as more ML engineers deploy models to production: traditional cloud providers require significant DevOps expertise to use GPU instances effectively, while Modal's Python-native approach lowers the barrier to entry. Modal has attracted a strong developer following among AI researchers and ML engineers building production AI applications.
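To make the per-second billing point concrete, here is a back-of-the-envelope sketch in plain Python. The hourly GPU rate below is a hypothetical placeholder, not Modal's actual pricing; the point is only the gap between paying for busy seconds and paying for a reserved window.

```python
# Illustrative comparison of per-second vs. provisioned-hour GPU billing.
# HOURLY_RATE is a hypothetical placeholder, not any provider's real price.
HOURLY_RATE = 3.60                    # USD per GPU-hour (assumption)
PER_SECOND_RATE = HOURLY_RATE / 3600.0

def per_second_cost(busy_seconds: float) -> float:
    """Serverless model: pay only for seconds the function actually runs."""
    return busy_seconds * PER_SECOND_RATE

def provisioned_cost(hours_reserved: float) -> float:
    """Reserved-instance model: pay for the whole window, idle or not."""
    return hours_reserved * HOURLY_RATE

# A bursty workload: 90 seconds of inference spread across one hour.
busy = per_second_cost(90)        # roughly $0.09
reserved = provisioned_cost(1)    # $3.60 for the full hour
print(f"per-second: ${busy:.2f}, provisioned: ${reserved:.2f}")
```

Scale-to-zero serving makes this gap compound: an endpoint that sits idle most of the day accrues no cost at all under the per-second model, while a reserved instance bills continuously.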
The 2025 strategy focuses on growing the developer community, adding enterprise features (dedicated GPU capacity, private networking, compliance), and expanding the hardware options available (H100 GPUs, custom accelerators).