7 comments

  • SahAssar 1473 days ago
    I think these sorts of tools are detrimental to the security of the apps and developers that use them.

    1. They often treat all vulns the same and do almost nothing to let a dev know if they are actually vulnerable or how bad a vuln really is (and no, a raw CVSS score alone is not good enough).

    2. They lead to the idea that fixing vulns is just updating the dependencies. If you have enough vulns in your dependencies that you need it visualized in your IDE then the problem is how you choose dependencies or how many you have, not keeping them up to date.

    3. Updates themselves are not always a safe bet since they bring in unknown code, so an update should not be treated as a silver bullet for fixing vulns without checking the code or actually trusting the authors.

    4. Just like some of Snyk's blog posts (like https://news.ycombinator.com/item?id=19255603), they lead to more hysteria and security theater than productive work.

    In my experience all this leads to security fatigue and people not taking the profession or the work done seriously.

    • bastijn 1473 days ago
      I'm torn between nodding my head at this and finding arguments against it. For myself, I absolutely see your points and agree with them, but when I think about the engineers inside the company at scale, I can see its worth. A large part of our company comes from hardware or more traditional compiled languages like C/C++/C#. Switching to .NET Core, TypeScript and the whole open-source ecosystem is something they are only now becoming familiar with. This extension would certainly make them aware of the impact of their choices: raising awareness about the implications of taking in that third party dependency the moment they do it.
      • SahAssar 1473 days ago
        > Raising awareness about the implications of taking in that third party dependency the moment they do it.

        Sure, but if that was the goal then I think it should show that a package has "578 subdependencies with 935 authors that have had 7681 vulns that took an average of 67 days to be resolved. No currently active vulns (that we know of...) though!"

        Currently I think it encourages shifting the blame to the tool, and some engineers would say something along the lines of "well, it didn't show me any vulns when I installed the package, so it should be fine, right?"

        Taking on deep or complex dependencies should be treated as code that needs to be maintained in the long run, not as a one-time decision.

        When a vuln appears in either a PR or in released code the checks in CI/CD should alert and refuse to build that release or approve that PR.

        • bastijn 1473 days ago
          Of course, yet it starts with awareness. I would love it if the extension also showed the number of child dependencies, the number of authors, whether it has an active community, the license, and more of the info you use to make the decision. Then again, we would also need the licenses of the children, their activity, etc. It's a zoo out there these days. Packages that are licensed MIT have children that are GPL licensed. One package drags in 200 more, leading to famous left-pad-style debacles and maintenance hell. Meanwhile support periods are being lowered to 1-3 years, even on big projects.

          Another issue we run into is different libraries depending on the same package further down the line, but at different versions that do not work together.

          So yes, I wholeheartedly agree that one must think before taking in a dependency. Especially if you just took it because a nice stackoverflow.com answer said so and you are going to use 1% of the package.

          Our build already immediately flags SOUP (software of unknown provenance) that is brought in but not yet assessed. We had it failing the build for a while, but that was too much of a barrier for engineers. CI/CD scans for vulnerabilities using tools like WhiteSource or Black Duck, and tools like Fortify and SonarQube run amongst many more.

          All of this leads to better code, definitely, yet at a cost. We haven't found the best set of 'rules' yet for what to allow and what is a practical choice today, keeping velocity at the right quality.

    • L84Dinner 1470 days ago
      1 - this is totally true. The problem here is that it's excessively hard to tell someone what their risk profile is. Vulns run a wild gamut: some occur because you use a vulnerable method, others because the library doesn't handle crypto right, uses an easy-to-guess default password, etc...

      Tooling around this has been hard to nail down, for a lot of those reasons. Some vendors are making tooling that will only report if you are using the "vulnerable method", but that has to be taken with a bit of caution as well, because it can't be 100% accurate.

      2 - yeah, sort of. I think the tools today do a really bad job of asking "am I actually using this when it's deployed?". A lot of the vulnerabilities in a JS project are buried in devDependencies (which can still be bad: imagine a compromised library shipping a bitcoin miner, and now it's in your CI/CD process). Tooling that tailors the report to your "deployed" risk profile is, I think, more helpful, because it's about reducing the noise down to what matters; a minimal package.json sketch of that split follows after point 4. This can be hard to do in certain ecosystems, because for example in Python it's difficult to discern what a library is used for, but in JS, etc... it's a bit easier.

      3 - agree 100% here, which is why overall project hygiene is probably the best indicator of whether you would want to do that or not. Hygiene is something we should be evaluating up front, when selecting libraries, so that as time moves on we know we can trust updating, because the library is being managed in a responsible manner.

      4 - ambulance chasing, more or less. If everything is an emergency then nothing is an emergency, either. Agree with you there, and I think the tone that has been set by companies in this space is not exactly right, but it originates from, you know, trying to sell their stuff. Those posts likely resonate with AppSec people more than developers, hence why they keep making them. That's just a guess however.
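
      To make that dependencies/devDependencies split from point 2 concrete, here is a generic package.json sketch (the package names and versions are placeholders, not from any real project). Only the "dependencies" block ships with a deployed Node app; everything under "devDependencies" stays in your build and CI environment, which is where a lot of the noisy advisories actually land:

          {
            "dependencies": {
              "express": "^4.18.0"
            },
            "devDependencies": {
              "jest": "^29.0.0",
              "webpack": "^5.90.0"
            }
          }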

      Full disclosure, I work at Sonatype, and I work on a lot of the free tooling related to security scanning. Don't like how it is all playing out? Come work with us. Our tooling is open source, and we truly want to help developers out.

      A couple of projects I think people would benefit from, which are community driven and participated in:

      Golang scanning: https://github.com/sonatype-nexus-community/nancy

      JavaScript scanning: https://github.com/sonatype-nexus-community/auditjs

      RubyGems scanning: https://github.com/sonatype-nexus-community/chelsea

      Conda (and soon PyPI) scanning: https://github.com/sonatype-nexus-community/jake

      R (CRAN) scanning: https://github.com/sonatype-nexus-community/oysteR

      We try and write these native to each ecosystem so that developers can work with us on them. Also, we have them set up so you can use them anonymously by default. We have to eat sandwiches (or whatever your chosen food is) as well, but we try to provide tooling that all can benefit from. Come help us if you want!

  • dmix 1473 days ago
    This made me curious what types of vulnerabilities were in something like Lodash, and the answer was mostly prototype pollution:

    https://snyk.io/vuln/npm:lodash
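
    For anyone who hasn’t run into the term: prototype pollution means that deep-merging attacker-controlled data lets the attacker write properties onto Object.prototype, so every object in the process suddenly “has” them. A rough sketch of the idea (hypothetical payload; the exact vulnerable lodash functions and version ranges vary, and patched releases won’t pollute):

        // Needs lodash installed. On an old, unpatched defaultsDeep/merge,
        // walking these attacker-controlled keys reaches Object.prototype.
        const _ = require('lodash');

        const payload = JSON.parse(
          '{"constructor": {"prototype": {"isAdmin": true}}}'
        );
        _.defaultsDeep({}, payload);

        // On a vulnerable version this prints true for *every* object;
        // on a patched lodash it stays undefined.
        console.log(({}).isAdmin);

    Whether that actually matters then comes down to whether any of those merge calls ever see untrusted input, which is exactly the context the scanners don’t give you.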

    It’s nice to see the Snyk project contributing patches back into major open source projects, not just highlighting the issues. For an average dev like myself, working in a controlled environment, the threat model is relatively mild, even though the vuln gets marked as severe (don’t get me wrong, it is still important for the larger OSS projects, like lodash, that are widely deployed to a variety of environments).

    I’ve become far more careful about updating JS libraries, as most frontend projects have thousands of dependencies in their package.json, and I’ve had countless deployments grind to a halt when the dependencies update in production even though they worked fine locally. We have protocols to catch stuff like that, but it’s still a headache that often takes far more effort than it’s worth for a few patches (same with keeping a Docker clone of production locally, which is another nest of problems). So I try to keep updates to dependencies like that contained in their own commits, which can be rolled back, and not part of other feature branches or fixes.

    I don’t have this problem nearly as much in other languages as I do in JS with npm/yarn, so I tend to be far less eager to run ‘npm audit’ than I used to be.

  • cjonas 1473 days ago
    I wish Snyk (and npm audit) would do a better job of providing simple explanations of when these vulnerabilities actually pose a threat. It's impossible for devs to keep up with the constant flood of vulnerabilities. Does it matter if a dependency that I use in my build tooling is open to ReDoS? I honestly do not know, because I don't have time to study each of these issues. Instead I just try to upgrade as often as possible, but maybe not as often as I should if I knew there were actual security holes in my runtime application.
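
    On the ReDoS example: the bug class itself is just catastrophic regex backtracking, and it only bites if the vulnerable pattern ever sees untrusted input, which build tooling usually doesn't. A toy sketch, not taken from any real package:

        // Nested quantifiers plus a failing match = catastrophic backtracking.
        const evil = /^(a+)+$/;
        const input = 'a'.repeat(30) + '!'; // the trailing '!' forces the failure

        console.time('redos');
        evil.test(input); // backtracking is exponential; even ~30 chars can stall for a long time
        console.timeEnd('redos');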
    • danenania 1473 days ago
      Yeah, somewhere in the reporting/auditing workflow, there should probably be a flag distinguishing whether vulnerabilities are actually dangerous in a devDependencies context (they rarely are). I find that jest and webpack especially generate tons of these irrelevant "vulnerabilities" and it's really annoying. It's a security issue too since it leads people to start ignoring the audit output.
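
      For what it's worth, npm can already scope the audit to what actually ships, though the exact flag spelling has moved around between npm versions, so check your npm's docs:

          npm audit --production   # older npm: skip devDependencies
          npm audit --omit=dev     # newer npm: same idea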
      • L84Dinner 1470 days ago
        I work on this project, so I'm biased, for sure, but why not take a gander at:

        https://github.com/sonatype-nexus-community/auditjs

        By default we exclude devDependencies (you can still scan them separately later). The project works off EXACTLY what you have installed in your node_modules, so it's a pretty accurate look at your actual risk profile. You can run it with --dev to get a look at your devDependencies, but we tailored the default to your "prod risk profile". The notes y'all make are pretty much why. Every dev dependency in the world seems to use lodash, so yeah, we get it, it can be prototype polluted, tell me what my actual risk is!
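
        If you want to kick the tires, usage looks roughly like this (going from memory, the README has the current commands and flags):

            npx auditjs ossi          # audit what's actually in node_modules (prod deps by default)
            npx auditjs ossi --dev    # include devDependencies as well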

        • danenania 1469 days ago
          Nice, will check it out!
  • saadalem 1473 days ago
    This is actually something cool! Good job!
    • lirantal 1473 days ago
      Ooh, thanks! Would be really happy to capture any extra feedback if you have ideas for what to improve.
  • pojntfx 1473 days ago
    Awesome! I've moved away from JS to Go recently, but this sure is really useful for frontend stuff.
  • axiosgunnar 1473 days ago
    Is Snyk a YC company or why is so much content marketing spam by that company tolerated here?
    • danpalmer 1473 days ago
      This is just good regular marketing from a company that happens to market to developers. If it had no value then it would be fine to complain, but they’re giving away a free tool, so it’s definitely worth something. I’ve yet to see a post from Snyk that wasn’t at least “not bad”, and plenty have been much better than that.

      This is no different from GitLab’s marketing (huge release announcements every month) or Cloudflare’s technical blog posts, but I don’t see complaints about them.

      • lirantal 1473 days ago
        @danpalmer appreciate that. If and when you try it out and have any feedback I'd be happy to capture that and see how it works with our plans for it.
        • danpalmer 1472 days ago
          My only feedback would be that I'm not a server-side JS developer, so I don't feel it will have that high a signal-to-noise ratio for me. It's JS only, and the only JS I do is frontend and a little tooling.

          In general, I find that for JS that basically gets compiled at build time and used statically on a site, or things like linters that don't do any networking and are run only in dev/CI, the scope for vulns is pretty low. It's mostly just self-denial-of-service, because it's standard practice to not trust the client for anything anyway, so all the canonical security is server-side.

          There's obviously a vector for packages that have been backdoored and taken over by an attacker, but I'm not sure having that in my editor would help because I'd have to have the file open to see it, vs getting a notification about it from GitHub security (or Snyk's service I guess?). For new packages I'm adding, I would be researching it on NPM (for JS) anyway.

          I mostly write backend Python, and I'd probably use it for that. There, the notion of a vuln is more useful, so I want to see them more (it's more of a spectrum, rather than the safe-vs-backdoored binary of the frontend case).

  • z3t4 1473 days ago
    Terminals have a HUD problem: too much information and it distracts from the real issues; too little and you might miss something.