Some of that shifts with syntax highlighting, which draws the eye to which bits are related. The arrow => should highlight as a single operator (just as >= does), for instance. Most people call these "arrow functions", so thinking of => as the arrow operator is a habit that builds quickly.
They are controversial and everyone has different opinions, but I also think this is where programming ligatures come in extremely handy. When => looks more like ⇒ and is even more obviously an arrow, I think that also starts to make it easier to visually "parse" the flow of code like that.
For what it's worth, arrow functions weren't added solely for compactness, but also to fix some historical issues with classic function syntax. (Lexically scoped `this` versus caller-scoped `this` being the big one.) A new function syntax was desired anyway for those reasons, and the compact form was the icing on the cake.
I haven't had the pleasure of teaching such things to students at this point in my career, but I have given a hand to many a junior developer to grasping some of these ideas and I don't think it is that tough, though it can be a shock/surprise if the last JS you touched was many years before. Especially if you are trying to also learn what JSX and/or TSX add on top of all the changes in ES2015+.
If you work with JSX daily, which 99% of React developers do, this is not hard to grok. It's a very typical functional composition pattern in React. Since this framework is targeted at those who want to use React, and presumably already use it, I don't see the issue. It's a snippet to show how Reactivated lets you use the React ecosystem, not how to use React - that's what the docs are for.
Extended code reads like narrative journalism. “It was an end to a long dark night and trees shyly stayed in a fog when options in a full entirety of their generous content got passed into a pure functional instance of a select tag”.
I haven't used this particular framework, but the draw is type safety in React through writing TypeScript, plus generated bindings to the backend so those are typed as well.
There's also a _ton_ of preexisting React components you can generally drop in and use, which is less true these days with something like a Django template.
You also have the option of doing SSR and then doing more dynamic stuff on the client side, as a sort of optimization (and plain better user experience than making _more_ server requests to get the initial page state loaded).
Thanks for taking the time to reply, but I'm still not really understanding the benefit.
Type safety is incredibly important if you're writing lots of logic in a language, which is why TS is great for SPAs. Does the type safety of TS get you anything if you're just doing SSR without a whole lot of logic in TS? All of your application logic will be on the Python side of things.
>You also have the option of doing SSR and then doing more dynamic stuff on the client side
Isn't this just the old-school way of doing things before SPAs came around? I.e. you render the page on the server and then add dynamic features using JS. I think the new way of doing this is with htmx, Hotwire, etc.
I think the biggest benefit of SSR is you get the first page fully rendered out (good for SEO), and beyond that page the react SPA takes over by doing all the cool client side stuff like routing and what-not.
The biggest benefit for SSR, in my opinion, is SEO, plus the first load is fast because it's already generated. After that, though, it's just the plain old SPA experience.
It's because React (and other SPA technologies that also happen to work with SSR) is all the buzz. It doesn't actually necessarily make sense. The risk to a project is usually NOT the technology chosen for frontend.
Django templates are perfectly fine as long as you leverage template tags the way they were intended.
Oh wow, this looks really good. I want to say, what I was really hoping to see is a "how does this work?" section. I'll read the source code, but it would be nice to have a quick narrative explanation.
EDIT: Looks like the "Concepts" page has what I am looking for. I would add some of that to the front page.
Theoretically, the Dockerfile should "just work"™ with Render.com as well. Right now I focus on fly.io only because their free tier offers PostgreSQL without time limits. Render, I believe, only does so for a period of time.
One minor point on app names. They are really hard/annoying to change once you have production data, because your DB tables will always be appname_modelname, and renaming tables is a real PITA. You can set the model to point to a nonstandard table, but then your project has a confusing non-Djangonic wart that will annoy you forever more. For this reason I think ‘core’ is a safer option, unless you are certain what your app should be named. Naming after one domain model is likely to be too specific. (I strongly recommend against multiple apps until you have functionality you need to commonize between projects, as refactoring models between apps is a nightmare if you get the boundaries wrong.)
Regarding apps, I was just thinking you could create a directory structure like this, essentially eschewing the whole concept of Django "apps":
    <package>/
        __init__.py    # Import all model classes (necessary for Django to find them)
        some_model.py  # Define SomeModel class
        some_view.py   # Views for SomeModel
    settings.py        # Add "<package>" to INSTALLED_APPS
You still need to register the top level package as an app for Django to find things, but then you don't have to deal with Django's concept of apps beyond that. All your tables will be named <package>_<model> by default, which seems nice.
If it turns out later you need reusable "apps" for some reason, you could always add an apps/ subpackage for them.
I haven't tried this in a real project, so I don't know if there are any downsides, but it seems like a decent approach.
Yeah I've played around with this approach in the past while prototyping service boilerplates, I think it's viable. I never got it polished enough to publish as a startproject --template and ultimately went with Flask for microservices, so I didn't finish the prototype.
There are a few different places where little things break and then you need to use an obscure settings.py variable to fix them, which makes me a little nervous since it will cause a little cognitive friction for Django-fluent developers joining your project. But I do think it's worth experimenting with.
> For this reason I think ‘core’ is a safer option, unless you are certain what your app should be named.
Yep, this is what I do. It’s rare that I make anything that can be shared cross-app (most Django apps I make are super simple CRUD-types), so every one of my Django apps has a ‘core’. I also recommend this approach.
I would add to this list: convert your template engine to Jinja2. I held out for a long time on this, and now I would not go back. The purist approach ("keep logic out of templates") sounds good, but in practice leads to spaghetti hacks with weird additional template tags to do simple things, a bunch of additional context variables with no meaning, etc.
About the purist view on Django: the Django templates don't annoy me that much, but what is really annoying is the avoidance of thread locals in Django. This makes stuff that would be trivial in other frameworks (e.g. Flask) very difficult, like getting the current user.
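The thread-local pattern Django discourages can be sketched with just the stdlib (function names are hypothetical); middleware would call set_current_user at the start of each request:

```python
import threading

# Per-thread storage: each request-handling thread sees its own "user".
_state = threading.local()

def set_current_user(user):
    # middleware would call this once per request
    _state.user = user

def get_current_user():
    # deep in model/form code, fetch the user without passing it through every call
    return getattr(_state, "user", None)
```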
As another hold-out on Jinja, what pushed you over the edge? Half of the appeal to me in Django is that if you follow the easy path, one project should look like another. Breaking conventions for a few template niceties has not seemed worth it (although, I am not a front end guy, and survive on very barebones presentation).
Using Tailwind without a frontend JS framework. I needed a way to create lightweight “components” à la React, but on the backend, because otherwise you’re either copy-pasting classes like mad or (ab)using Tailwind’s @apply.
I've been gradually removing all use of Jinja macros from our projects as they make debugging so much more painful. Everything now goes in template globals, which lets you debug them the same as anything else, gives far clearer stack traces and lets you do clever things with caching.
Straightforward, useful advice. I pretty much do all of these when I start new Django projects.
The need for a custom User model makes me a little sad every time because, well, a single `name` field (rather than `first_name`, `last_name`) and email-for-username do feel to me like more sensible defaults circa 2022.
Maybe for consumer apps, but for a lot of enterprise apps that have to be compatible with existing APIs or domain-specific data standards that require two name fields, that’s not really a viable approach.
I used to build everything with one giant “monoapp” and completely ignore that aspect of Django, which worked pretty well. Eventually though you need to split up your models into multiple files, and then your views, and admin, and so on, and it turns out there’s a lot of benefits of just segregating things into “apps” from the beginning. The only real downside I’ve noticed in practice is the omni-file-search feature in my IDE is harder to use, because you now have 7 “models.py” and it’s a bit harder to quickly navigate to the right one. If you split up files without using apps (just normal Python imports), you don't have the same problem with all the names being duplicated.
The pitch that you can "reuse" apps across projects always seemed weird to me, because that's basically never something you actually want to do/can do easily without major surgery.
Protip on separating your models etc. out: use modules instead of apps. So instead of a models.py, have a models/ folder with a structure like this:

    models/
        __init__.py
        cheese.py
        spam.py
You can then put all your cheese-related models in cheese.py, and all your spam-related models in spam.py. Then simply import those models from __init__.py and Django is none the wiser that anything changed. Tada: organized models without having to dive into the broken mess that is apps.
The same trick of course applies to any python file you want to split up. Views, urls, managers, etc etc.
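A runnable sketch of the trick, using throwaway files to stand in for a real app's models/ package (cheese.py and spam.py are hypothetical module names; in a real project these files just live in your app):

```python
import os
import sys
import tempfile

# Build a throwaway models/ package on disk to demonstrate the re-export trick.
pkg_root = tempfile.mkdtemp()
models_dir = os.path.join(pkg_root, "models")
os.mkdir(models_dir)

with open(os.path.join(models_dir, "cheese.py"), "w") as f:
    f.write("class Cheese:\n    pass\n")
with open(os.path.join(models_dir, "spam.py"), "w") as f:
    f.write("class Spam:\n    pass\n")
# __init__.py re-exports everything, so callers (and Django's app registry,
# which only sees models that get imported) see one flat namespace.
with open(os.path.join(models_dir, "__init__.py"), "w") as f:
    f.write("from .cheese import Cheese\nfrom .spam import Spam\n")

sys.path.insert(0, pkg_root)
from models import Cheese, Spam  # imports work exactly as with a single models.py
```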
Yeah that works pretty well too. I think there’s some minor advantages of having them in apps, they’re sorted neatly in the admin, you’re forced to split views, models, templates, admin, etc together which keeps things a bit cleaner. But it’s also messier in other ways (you have to sift through too many files to find the thing you want).
Migrations that cross app boundaries are a nightmare. Before you know it there's thousands of migrations in your various apps, and squash is powerless to help you. It'll pretend to work but break your migrations if you try to squash across any migration that depends on a migration in a different app.
IMO migrations in combination with apps are fully broken in Django; it's unworkable for large projects.
Shameless plug, I've given a talk about scaling Django codebases to many apps at PyCon (2021) and PyCon UK (2017). At my previous job we had around 500 apps in our codebase and it was honestly great to work with.
I will sign onto this approach as well. Never expanded beyond a `core` app, which may have duplicated modules as the project grows (models_foo, models_bar, views_foo, views_bar, etc). I have never felt that any one component of the project was isolated enough from the rest of everything that it made sense to firewall off pieces into apps.
Say you want to easily create a comments app or ratings that you can associate with any other model in your project. Write that app once and use in many places. Or maybe something that sucks to write that may be error-prone, like auth, especially social auth which may change frequently. Maintaining one app and sharing it is way better than everyone trying to keep up with the changes themselves.
Can't you just keep the logic for your comments app in the monoapp and use it with everything?
I understand if you want to distribute your app for others to incorporate, but otherwise it felt like the documentation overemphasizes the "app" aspect. It's a reasonable structure but more of a guideline... unless you want to reuse in other codebases
The names suck, but the idea is that they are pluggable: check the Django Packages site for an idea of the benefit of that. It's essentially a plugin system with autoloading of some resources, such as models and templates.
Also, apps override each other in order of import. E.g. one app can override the template of another just by using the same name. This means you can plug AND extend.
Things this does:
- README.md: setup readme with development setup
- Django split settings: split settings for local, testing and production
- Split requirements: split requirements.txt for local and production
- Pre-commit hooks: setup pre-commit hooks for black and pyflakes
- django-environ: database config and secrets in environment
- editorconfig: sensible tab/space defaults for html, js and python files
- remote-setup: setup hosting on uberspace
- git push deployment: `git push live` makes the changes live
- github actions for tests: run tests automatically on Github
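For the django-environ item, the underlying idea ("database config and secrets in environment") can be sketched with just the stdlib; django-environ's env.db() does roughly this parsing for you (function name below is hypothetical):

```python
import os
from urllib.parse import urlparse

def database_config(url: str = "") -> dict:
    # e.g. DATABASE_URL=postgres://user:pass@localhost:5432/mydb
    parts = urlparse(url or os.environ["DATABASE_URL"])
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": parts.path.lstrip("/"),
        "USER": parts.username,
        "PASSWORD": parts.password,
        "HOST": parts.hostname,
        "PORT": parts.port or 5432,
    }
```

In settings.py you would then write DATABASES = {"default": database_config()}, keeping credentials out of version control.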
One thing I never figured out was dependency management. Some others are linking to Two Scoops of Django and other starter templates; it seems like none of them get past a kludgy set of requirements.txt files. I want something easy to use like Ruby's bundler or npm. I want to be able to specify my dependencies and install them in one command, and specify if something is production or local in that same command.
I use pip-tools with venvs for that and can't complain. It pins dependencies and you can create different requirements.txt files for local dev and production (production for me is usually dev minus some debugging tools). It's not particularly fancy, but it gets the job done.
I've used Django, and personally I don't like that the startproject command doesn't allow us to create a project whose name contains a dash ("-"). Also, the default app having the same name as the project can confuse newcomers to Django.
I always structure my projects like this when starting a Django project.
Depends on the size of your setup. Most websites are small, and their threat model doesn't require more than having the secrets in an env statement in a systemd unit file.
After all, if you have a single server and the attacker can read a root-protected file or the spawned process's context, you are pwned already. As for exposing env vars, popping os.environ is usually enough.
No need to pay for more than you must: bots and script kiddies are not Mossad.
I've heard this advice a number of times, but often run up against otherwise standard-looking systems that rely on secrets in environment variables -- mainly thinking of AWS's requirement when using Secrets Manager with ECS; the secrets are stored securely, but ultimately loaded into a container's environment.
Even this article, which recommends keeping your secrets in environment variables, tells you to implement that by storing them in the filesystem. The advice isn't to avoid storing your secrets in the filesystem; it's to avoid storing them in version control.
It is apparently an exercise for the reader to figure out why it's better to read your secrets out of the environment, which reads them from a file you provided, than to read them from your own file yourself.
Linux already stores plenty of secrets in files in /etc. Just do the same: the root-protected init file for your app likely has a mechanism for passing an env var to the new process; systemd and supervisor both do. Then pop os.environ instead of just reading it, and you are safe from bots and script kiddies, which is likely your threat model.
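The "pop instead of read" idea, as a stdlib sketch (the function name is hypothetical):

```python
import os

def pop_secret(name: str) -> str:
    # Read once, then delete: after this call the value is no longer visible
    # in os.environ, e.g. to a debug page that dumps the environment.
    # Raises KeyError if the init system (systemd/supervisor) didn't set it.
    return os.environ.pop(name)
```

In a Django settings.py this would look like SECRET_KEY = pop_secret("SECRET_KEY"), run once at startup.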
This has the same problem that OP refers to with regards to DEBUG: It remains in the environ, and for example if you forgot to set DEBUG=False, Django will just dump the whole environ on the error page.
If you can call bitwarden from within the python code somehow, such as with subprocess, that would be better.
You need your app to have the Bitwarden decryption passphrase to read it. Do you intend to enter that manually if the server reboots?
If not, then it's a secret you must manage, and you are back to square one: either use a simpler solution, use a hardware key, or go full secure-vault service, with a privileged first request, and so on.
Theoretically you could write a process that gets its secret from the first request, which is treated as privileged. Then the app can have it in memory, and it's only as vulnerable as the app itself. (This is a kind of degenerate form of the "stem cell" pattern for processes, like Erlang's gen_server.) And yes, this kicks the can to another system, but presumably the secret has to be durable somewhere. Although perhaps in a sufficiently sophisticated (or broken?) cluster the nodes could keep passing the secret to each other, never touching disk.
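The "privileged first request" idea reduces to something like this stdlib sketch (names hypothetical): only the first caller can set the secret, and it lives in memory only.

```python
class SecretHolder:
    """Holds a secret received exactly once, in memory only."""

    def __init__(self):
        self._secret = None

    def receive(self, value):
        # only the first request is treated as privileged;
        # later attempts are rejected
        if self._secret is not None:
            return False
        self._secret = value
        return True

    @property
    def secret(self):
        return self._secret
```

A real implementation would also authenticate that first request and zero the memory on shutdown, but the core invariant is just "set once, never persisted".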
I’ve heard that tmpfs is a good solution. A script on boot loads the secrets from something like vault. Env can be exposed through docker inspect if you’re on a shared host. Not sure what other negatives to env there are though.