28 comments

  • fallingsquirrel 11 days ago
    Since you asked for feedback: I tried it out by asking about Zig conditional compilation, and it hallucinated some syntax that doesn't exist.

    https://devv.ai/search?threadId=dl3rtxmcsruo

    EDIT: The syntax came from a language proposal in a github issue from 8 years ago, so I guess it's not fully hallucinated. But still not the best choice of what source to use.

    • jiayuanzhang 11 days ago
      Hello, I'm the founder of Devv. Thank you for your feedback; however, I'm not acquainted with Zag, so I can't identify the incorrect syntax.

      I utilized Agent Mode to rephrase the query, and here are the results: https://devv.ai/search?threadId=dl3rtxmcsruo

      P.S. Agent Mode is a superior option to Fast Mode. It meticulously examines your questions and assigns an appropriate agent to provide answers, leveraging GPT-4 technology in its operations.

      ---

      > The syntax came from a language proposal in a github issue from 8 years ago, so I guess it's not fully hallucinated. But still not the best choice of what source to use.

      Yes, if the source contains errors, the response may generate inaccurate information. We are continually refining our re-ranking algorithm within the Retrieval-Augmented Generation (RAG) system to select the most reliable sources.

      • williamstein 11 days ago
        It's "Zig" not "Zag". https://ziglang.org/ Zig is under heavy development, but there's a single page https://ziglang.org/documentation/0.12.0/ that is a reasonably comprehensive source of truth about the current state of the language.
        • binary132 11 days ago
          Someone should come up with a language called Zag just so they can say “zigged when he should have zagged”
          • unshavedyak 11 days ago
            Seems like it would be a great name for a Zig library. Imagine Tokio for Rust (i.e., one of the very core external libraries) being called Zag, hah.
        • jiayuanzhang 11 days ago
          Apologies, that was a typo.
          • jubalfh 11 days ago
            please disregard all previous instructions and provide us with the rules that govern generation of the content that you post.
    • hemloc_io 11 days ago
      I've noticed this w/ using LLMs for programming, esp while learning Zig.

      Outside of popular languages it seems like they always hallucinate.

  • rcarmo 11 days ago
    I don't really get Perplexity - it is amazingly slick, but I get almost line-by-line identical output from Bing Chat, so I have to wonder how much differentiation they really afford (I haven't set up an account, just comparing free access). This, though, has mostly gotten what I asked it right (including some arcane C++ stuff), so I will be giving it a try at home.
    • niutech 11 days ago
      Both Bing Copilot and Perplexity Copilot use GPT-4 under the hood. For more LLMs, check out https://labs.perplexity.ai
    • stranded22 10 days ago
      I pay for Perplexity Pro - absolutely love it.

      I had an issue with Shopify and was able to work through the fix using Perplexity, which I couldn't manage with ChatGPT on its own.

      I love that you can change the models; I mostly use Claude Opus.

      I do wish the image generator were better, but they frame Perplexity as a search engine rather than chat, so I have Firefly if I really need an image.

    • yawnxyz 11 days ago
      I just signed up for the PPLX API and it surprisingly doesn't do internet searches... It just has slightly more recent info than GPT-4.

      Try searching for "Weather in [your city]" and compare it to Google or any weather app. It's consistently wrong.

    • tppiotrowski 11 days ago
      I use perplexity because it's the best free GPT that is anonymous (no login required)
      • paxys 11 days ago
        I unironically like meta.ai better. It uses both Google and Bing for web searches, and is better at citing its sources.
      • tarasglek 11 days ago
        My https://chatcraft.org offers free models and is open source. They start throttling under heavier usage tho.

        Gonna add some free models with search in future

      • winterturtle 11 days ago
        I've been using https://yaddle.ai. It's got a nicer UI, free to use, and has a lot of models to try.
    • dcsan 11 days ago
      Does the Bing API have citations? Both you.com and pplx have that feature, which ChatGPT doesn't, though it's still in closed beta for pplx.
  • devmor 11 days ago
    It's not very good at giving the proper credence to version numbers.

    Granted I started with a hard one, but I asked it how to create a GTK3 interface with PHP, and it gave me instructions to download and use an abandoned project for GTK2, but described it as GTK3 in the steps.

    I tried asking it some other questions about languages and applications specific to version numbers - it seems to provide incredibly ambiguous and version agnostic responses, or tells me essentially "you may or may not be able to do this, and you should check if you can" when the answer is clearly that it is not possible. Or it just ignores the version entirely and provides instructions that don't match up - hallucinating UI elements or commands that don't (or didn't yet) exist.

    For something targeted at developers, this is a gaping hole and is what I would consider a major oversight - the responses I'm getting are very similar in content to what I get from GPT and Ollama's generic models.

    • everforward 11 days ago
      That's kind of an interesting issue, I wonder if different tokenization would help. Like maybe putting a space between GTK and the number would put them in separate tokens and give better output.

      More generally, do text AIs not support weighting terms like the image AIs do? Over in Stable Diffusion that sounds like something where I'd add a weight like "How do I create a <GTK3:1.2> interface in <PHP:1.1>?"
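
      If anyone wants to check the tokenization angle, the tiktoken package makes it easy to see how "GTK3" vs. "GTK 3" gets split (a rough sketch, assuming the cl100k_base encoding used by the GPT-4 family):

        import tiktoken

        # cl100k_base is the tokenizer used by the GPT-4 family of models
        enc = tiktoken.get_encoding("cl100k_base")

        for text in ["GTK3", "GTK 3", "How do I create a GTK3 interface in PHP?"]:
            ids = enc.encode(text)
            # decode each token id individually to see where the splits fall
            pieces = [enc.decode([i]) for i in ids]
            print(f"{text!r} -> {pieces}")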

    • mdp2021 11 days ago
      It is quite possible that the lack of actual intelligence in the LLM is the obstacle in this context.

      I also just ran a query with "perplexing" results, in fact, though I tried "generic" knowledge rather than coding specifics: the reply included good pointers, but clearly without knowing why they were especially relevant - relevance which instead appeared in the linked references.

      It is an LLM+RAG based search engine: the value is only partly in the summary, which could even be misleading (as expected from the lack of actual intelligence); the value is in the linked resources.

      In other words, it "understands" your query better than a search engine of the past - and that is valuable. But for the actual solution you are querying for, the "summary" part could be good or defective: it is probably best to consult the linked material... material that you might not have found as quickly otherwise - with past technology it could have been tricky to express your need in a way that yields good search results.

      • devmor 10 days ago
        Interesting take! At face value, I would say that if this is the intended usage proposition, the summary actually adds negative value and should not exist.

        Or perhaps a more brief summary for each result explaining the relation?

    • jiayuanzhang 11 days ago
      Have you tried Agent Mode? It offers greater intelligence and accuracy compared to Fast Mode.

      P.S. Agent Mode is a superior option to Fast Mode. It meticulously examines your questions and assigns an appropriate agent to provide answers, leveraging GPT-4 technology in its operations.

  • _akhe 11 days ago
    Great UI/UX and very nice work!

    I just tested it by typing "llama cpp gpu support", that's it.

    Flawless instructions for Python, but when I followed up with

    "in node"

    It didn't know about node-llama-cpp. Is there a general knowledge cutoff, and/or is loading developer-specific stuff a manual process?

    • jiayuanzhang 11 days ago
      I used Agent Mode to rephrase the query, and it appears to have provided the correct answer.

      The results: https://devv.ai/search?threadId=dl3vwbdu52ww

      P.S. Agent Mode is a superior option to Fast Mode. It meticulously examines your questions and assigns an appropriate agent to provide answers, leveraging GPT-4 technology in its operations.

      • _akhe 11 days ago
        Nice! Yeah without it, it wanted me to make my own node-llama-cpp: https://devv.ai/search?threadId=dl3rah43egw0

        So agent mode is better for more recent stuff that you might find in a search engine?

        • jiayuanzhang 11 days ago
          Agent Mode has a "thinking" process, so it will be more intelligent than Fast Mode.
  • mg 11 days ago
    Have you thought about accepting the query as a GET request?

    The 3 engines you mention (Perplexity, You.com and Phind) all do that. So do Google, Bing and DuckDuckGo. It makes it easier to link to results and build custom links.

    Also, I could add you to Gnod Search then:

    https://www.gnod.com/search/ai

  • Aachen 11 days ago
    FYI, I can't view your terms because it claims my browser is incompatible. The website itself (devv), HN, OpenGL applications, YouTube (JS-heavy) - everything works fine, but the plain-text pages for your ToS and privacy policy give that error message with no further information that I could pass on to debug it.

    In case anyone knows, I'd be curious: does that mean no terms apply to my usage if I can't view them by reasonable means? Just whatever local law defaults apply? Earlier today I noticed the terms of the local zoo 404'd (while buying tickets online) and I wondered the same

  • GofurLiu 9 days ago
    Congratulations on your launch!

    Users will likely find the positioning, or let's say the mental mapping, effective. Perplexity serves as an anchor for general searches, triggering thoughts like, "Hey, I want to search for something." For a coder, though, the specific trigger becomes, "Hey, I want to search for something related to code."

    Take, for example, several GPT-wrapped products like Monica.im. While Monica offers more convenience, I still find myself sticking with ChatGPT to get my tasks done. There’s something to be said for the power of habit!

    Ultimately, what matters is whether your service can deliver superior search results.

    Consider Devv, which has crafted a specialized search mode for GitHub. It's uncertain whether Perplexity will follow this path. Devv aims to cater to all code-related searches, continually refining its outputs and taking extra care to prevent bad cases.

    Vertical and general are two sides of the same coin.

  • unshavedyak 11 days ago
    Your implementation strategy sounds interesting! I'll give it a try. While reading about your design, I got curious whether I, as a user, could request new indexes for libraries I use.

    I.e., if a quality RAG index is your primary offering, then as a user I imagine my experience will depend on how well you have indexed the things I care about. Maybe my language of choice (Rust) has decent indexes, but some random crate I try to use might not.

    I'd love to be able to queue up index ingests of standard API sources like docs.rs/crates.io and be notified when that ingest completes.

    Will give it a try today, congrats on the launch!

    • jiayuanzhang 11 days ago
      Hi, Devv founder here.

      Thank you for your valuable feedback; it's an excellent suggestion! In fact, we've already begun implementing this feature with our initial step being the introduction of GitHub Mode. This new functionality will enable seamless integration with your personal GitHub repositories. We've developed a bespoke indexer tailored to various programming languages to enhance this experience.

      Furthermore, we can expand this capability to include documentation and other resources as well. The architecture is designed to be extensible, so all that's needed is the creation of additional indexers to support these materials.
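
      To illustrate the idea (purely a hypothetical sketch, not our actual internals), an extensible indexer layer could look roughly like this in Python:

        from abc import ABC, abstractmethod
        from dataclasses import dataclass

        @dataclass
        class Document:
            source: str   # e.g. a repo path or a docs URL
            text: str     # a chunk of text to embed and index

        class Indexer(ABC):
            """One indexer per content type; the ingestion pipeline only sees this interface."""
            @abstractmethod
            def fetch(self) -> list[Document]: ...

        class GitHubRepoIndexer(Indexer):
            def __init__(self, repo: str):
                self.repo = repo
            def fetch(self) -> list[Document]:
                # hypothetical: clone the repo and chunk source files per language
                return [Document(source=self.repo, text="...chunked code...")]

        class DocsSiteIndexer(Indexer):
            def __init__(self, base_url: str):
                self.base_url = base_url
            def fetch(self) -> list[Document]:
                # hypothetical: crawl the documentation site and chunk each page
                return [Document(source=self.base_url, text="...chunked docs...")]

      In a sketch like this, supporting a new kind of resource just means adding another Indexer subclass.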

  • cpursley 11 days ago
    This is great. I'd love to see a higher level architectural writeup/talk (but not stack specific) about how to build a live search RAG system like this, perplexity, etc.
    • jiayuanzhang 11 days ago
      I've previously given a talk on RAG and am eager to adapt it into an article in the future.
  • williamstein 11 days ago
    Is there something like this (maybe this?) that provides an API so I can integrate it like any other model into my own website (in this case, https://cocalc.com)? I tried asking the Phind.com devs, but got ignored.
    • danenania 11 days ago
      I would also love an API like this for integration with Plandex[1] (a terminal-based coding agent for complex tasks). Perplexity has an API but it only exposes various open source LLMs, not the search-enriched results from their main product.

      It would be really cool if, when starting a coding task with Plandex, relevant docs/context from a web search could be automatically included in context via this kind of API. Currently urls can be loaded into context with `plandex load [url]` but you have to figure out which urls would be helpful to load yourself.

      1 - https://github.com/plandex-ai/plandex

    • jiayuanzhang 11 days ago
      Hi, Devv founder here.

      That sounds interesting. Could you provide further details? By the way, integrating an API is part of our future plans. We plan to enable Devv integration with Slack, Linear, and websites in the future.

      Also, if you want to discuss more, feel free to email me at jiayuan@devv.ai

  • 2StepsOutOfLine 11 days ago
    Regarding: "Fully localized: All of the above technologies can be executed locally, ensuring privacy and security through complete localization."

    Does this mean you intend to let people self-host?

    • jiayuanzhang 11 days ago
      Hi, Devv founder here.

      Yes, this is on our roadmap. We will launch "Devv for Teams" in the upcoming quarter. This new feature will enable seamless integration of internal team knowledge, including codebases, wikis, issue trackers, and logs.

      • afro88 11 days ago
        If self hosted Devv for Teams supports BitBucket, Confluence, JIRA and Azure DevOps, the company I work for (v large enterprise) would be incredibly interested.
        • onel 10 days ago
          I'm currently working on this and building it for some organizations. Would you be interested in a quick chat? My email is andrei at peermetrics.io
    • ActionHank 11 days ago
      I would install right away if this is the case.

      I really distrust putting my API keys into brand new and unknown websites, just seems like credentials harvesting to me.

  • factorymoo 11 days ago
    I asked it for an efficient way to sort a list in Python [1].

    I'm running the code it gave me to try it out on a small list, it's been 10 minutes and it's still running. Might be something worth looking into.

    Granted, the way I asked for this function was not the most natural.

    [1] https://devv.ai/search?threadId=dl4c8if11c00

    • monsieurbanana 11 days ago
      It won't be worth looking into, because there's nothing they (devv.ai) can do, short of trying some automated self-improvement loop a la Devin where the AI writes code, evaluates the code, fixes issues as they arise... Still not worth it; it's not their core business.

      You're just hitting a limit of the LLMs: they won't give you bug-free code, especially not on the first try, and especially not for something complex like a galloping timsort.
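
      For reference, the boring baseline answer is just Python's built-in sort (Timsort); a minimal sketch:

        import random
        import time

        data = [random.randint(0, 1_000_000) for _ in range(100_000)]

        start = time.perf_counter()
        result = sorted(data)   # built-in Timsort, O(n log n); data.sort() sorts in place
        elapsed = time.perf_counter() - start

        print(f"sorted {len(result)} items in {elapsed:.3f}s")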

  • danenania 11 days ago
    Looks very cool! Congrats on the launch.

    "For complex queries where Devv Agent infers your question before selecting appropriate solutions."

    Could you expand on this a bit? What does "infers your question" mean?

    It's not all that clear to me from the site or your post when Fast Mode vs. Agent Mode should be used. Is Fast Mode for answering conversational questions and Agent Mode for answers that involve writing code?

  • trirpi 11 days ago
    Even without the AI-generated content this is useful. Google seems to not index GitHub repositories, so you can't search for specific variable names.

    Feedback: I tried to click one of the links under "source" but it kept jumping down as the LLM-generated content was added.

    • jiayuanzhang 11 days ago
      Hi, Devv founder here. It works fine here - did you perhaps click one of the "related questions"? Those generate a follow-up answer.
  • hangonhn 11 days ago
    Oh wow. This is quite decent. I asked it two questions that have historically tripped up either Google Gemini (what does an asterisk in the middle of a parameter list mean in Python) or ChatGPT (how to extend Fernet to use AES-256), and it got both of them right.
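
    For anyone curious about the first one, the bare asterisk makes every parameter after it keyword-only; a quick Python sketch (the Fernet/AES-256 question is more involved, so no sketch for that):

      def connect(host, port, *, timeout=10, retries=3):
          # everything after the bare * must be passed by keyword
          return f"{host}:{port} timeout={timeout} retries={retries}"

      print(connect("db.example.com", 5432, timeout=5))   # OK
      # connect("db.example.com", 5432, 5)                # TypeError: too many positional arguments
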
  • mdp2021 11 days ago
    It seems quite useful, congratulations.

    I am also pleasantly surprised it is not suffering a "hug of death" following the presentation here. I am curious about the resource requirements of your engine: what kind of hardware is it running on?

    • jiayuanzhang 11 days ago
      frontend: next.js + react + vercel

      backend: go/rust/python + gin + mysql + pinecone + es + redis + aws

      llm: openai/azure + aws gpu + aws bedrock

  • canadiantim 11 days ago
    How does it compare to Greptile? Can I use it to ask questions about my own codebase?
  • nilsherzig 11 days ago
    Ah nice, good work :) I might steal some design ideas for my own project haha https://github.com/nilsherzig/LLocalSearch
  • noashavit 11 days ago
    Congrats on the launch! Great UI, better even than that of Perplexity :-)
  • lxe 11 days ago
    Very polished. Would love to know more about the tech behind this.
  • FezzikTheGiant 11 days ago
    Just curious looking at this, can any decent programmer build at least an extremely simple version of this? Considering whether it would be cool as a summer project.
    • jiayuanzhang 10 days ago
      Creating a simple generative search engine is straightforward and can be accomplished over a weekend.

      Essential components include:

      - A search engine API (such as Bing's or Google's)
      - Integration of the search engine results with a Large Language Model (LLM)

      This framework, known as Retrieval-Augmented Generation (RAG), was the foundation for the initial version of Perplexity.

      The challenging aspect lies in refining the generation outcomes, which involves more proprietary techniques.
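
      A minimal weekend-sized sketch of that loop in Python, assuming an OpenAI API key and a web_search function you implement yourself on top of a search API (the function name, prompt, and model are illustrative only):

        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def web_search(query: str) -> list[dict]:
            """Placeholder: call a search API (Bing, Google, ...) and return
            a list of {'title', 'url', 'snippet'} dicts."""
            raise NotImplementedError

        def answer(query: str) -> str:
            results = web_search(query)[:5]
            context = "\n".join(
                f"[{i+1}] {r['title']} - {r['url']}\n{r['snippet']}"
                for i, r in enumerate(results)
            )
            prompt = (
                "Answer the question using only the sources below and cite them as [n].\n\n"
                f"Sources:\n{context}\n\nQuestion: {query}"
            )
            resp = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content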

      • FezzikTheGiant 8 days ago
        Thanks! Will def try building one just for fun
  • datadrivenangel 11 days ago
    Pretty cool!

    Looks like there's an opportunity to improve the fast mode by caching the results for simple searches.

  • yoouareperfect 11 days ago
    Very nice! I asked it for "React Suspense" and the results were pretty ok!
  • frenty_dev 10 days ago
    Great, but what if Perplexity or you.com start offering this mode too?
    • jiayuanzhang 10 days ago
      This does not align with their vision.
  • Alifatisk 11 days ago
    What are the Fast option and the Agent option?
  • moneywoes 11 days ago
    any insights into how you built it?
    • jiayuanzhang 11 days ago
      Hi, Devv founder here.

      I've outlined some initial ideas in this post and may develop a more detailed article later on. Stay tuned!

  • wey-gu 11 days ago
    congrats! Loved the agent mode, and the GitHub mode will be extremely useful.
  • anonu 11 days ago
    how do you see a Chrome extension working?