TL;DR: You're expected to still use the bits as access/dirty flags, but instead of the hardware automagically setting them, you set them yourself from the page fault handler. (Or if you don't need to track those things, you just set the flags to 1 from the start.)
> Just because we wrote 8 in the MMU mode, we don't quite have the MMU turned on. The reason is because we're in CPU mode #3, which is the machine mode. So, we need to switch to "supervisor"
I'd encourage him to start in supervisor mode instead and use one of the standard bootloaders (BBL or OpenSBI) to deal with machine mode. That would be compatible with Linux and more portable, as different platforms will supply platform-specific bootloaders.

Also missing from this tutorial - one really needs to execute an sfence.vma instruction after writing satp to synchronize the CPU pipeline with the new memory-management state (otherwise code that happily runs on a simulator will fail on a real-world, deeply pipelined CPU).
Patience, my friend. We're not quite there yet with this tutorial. We don't need sfence.vma for what I've covered here. Plus, there are about four ways to use the sfence.vma instruction (depending on whether rs1 is zero and whether rs2 is zero). I'd like to have some context behind each one of these four cases, but I can't do that in the OS's current state.
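For reference, the four forms being alluded to, as described in the RISC-V privileged spec (the register choices `a0`/`a1` are just illustrative):

```asm
# rs1 = 0, rs2 = 0: flush translations for all addresses, all ASIDs
sfence.vma zero, zero

# rs1 = va, rs2 = 0: only the page containing va, in all ASIDs
sfence.vma a0, zero

# rs1 = 0, rs2 = asid: all addresses, but only one address space
sfence.vma zero, a1

# rs1 = va, rs2 = asid: one page in one address space
sfence.vma a0, a1
```

The broader the form, the more translation state gets thrown away, so the narrower variants matter once you have per-process address spaces.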
RISC-V is popular because it's a clean RISC design (unlike IAxx) which is not controlled by any for-profit company (unlike Arm). That means anybody can use it, enhance it, or fix any not-yet-identified bugs in it without paying license fees.
Thanks, I gathered as much about RISC-V, but my question was maybe more basic, kind of an ELI5 of why someone would use RISC for an application over x86 or ARM: what level of task requires or benefits from a reduced-instruction-set chip. I noticed a lot of video games have used it.
If you're "using RISC", that means that you're producing something that involves a processor. If you're just making software, you just need it to compile with common compilers and have the relevant libraries available on your target platform (ok, I'm over-simplifying), so you don't "use processor X over Y".
When you do use hardware, you're interested in things like power budget and price for comparable performance. RISC-V is now reaching competitiveness in some cases, and companies/people will now start choosing to use it on the merits. If, that is, there's an OS available and a bunch of supporting tools and libraries.
If you're designing a new computer platform, do you care about power (i.e. does it need to run on batteries)? If so you probably want a RISC architecture rather than CISC (although the Intel Atom tries hard to be a low-power CISC machine). That leaves you with two choices: Arm (used in all smartphones, some Chromebooks, and Raspberry Pi) or RISC-V. Arm is more mature and there's a lot more software available, but it's not an open source architecture.
There have recently been some guides about how to build an operating system in Rust. Rust is innovative and great, but it also brings complex new ideas. Can it slow down learning and implementing because the developer has to dedicate too much time to the language itself?
I believe that building a full-blown OS requires dedicating so much time that this overhead due to the language itself will not be significant. On the other hand, without sufficient experience with the language, it's easy to run into problems that will consume a significant amount of time.
sigh - the PTE A&D bits cause traps the way they do so that software can implement them (if it needs them, and not if it doesn't). It's an old trick and very much a RISC thing: not wanting to burden the hardware with something that software can do.
VAXes had modified bits, 68030s had both (dirty managed in software), 80x86s have both [an 'unnamed' RISC architecture I worked on had both, managed in software]
The big advantage of handling this stuff in software is that any hardware doing it has to go through the system caches, which is just really messy - better to use the existing coherent paths already provided, and provide hardware support so you don't have to do this often.
accessed and dirty (whether a page has been accessed and whether it has been changed)
You need to know "dirty" to know whether a page needs to be written back to a swap file (or needs to be duplicated in lazy-copying cases); you may want to know "accessed" if you want to do the accounting that pushes out less-often-used pages when memory gets short.