Talk:End-user Computer Security/Main content/Software based

Improvements/additions for "Sandboxing and cloud computing" section
Perhaps mention that sand-boxing might work well for you if the following condition is met:
 * 1) any malicious modification of the user files you use in such computing is automatically tamper-evident.

This might be the case when doing certain graphics work. Perhaps examining the produced graphics files is enough of a quality-control mechanism, such that we don't need to worry about malware and the like so long as the produced files look okay?

Can additionally mention that cloud computing might be good for you if, in addition to the just-mentioned sand-boxing condition applying (cloud computing is, in some sense, also a kind of sand-boxing), the following condition is also met:
 * 1) whether or not the files are stolen is of no concern to you.

In some cases of cloud computing, you may have faith that the software functions as advertised, but be unsure as to whether your user files may be stolen. In such cases, the first sand-boxing condition above can perhaps be ignored.

More broadly, safe and unsafe systems can be used together, where the safe system is used to verify the output/work of the unsafe system. Such a set-up is only advantageous if the combined cost of the unsafe-system work and the safe-system verification is less than that of simply doing the work on the safe system. This might be the case where a user has access to extremely powerful computing resources that are considered unsafe, and also to a safe but not powerful system that can be used for verification. The type of work is relevant. For certain tasks, such as perhaps 3D-rendering a scene, verification may have to take the form of simply performing the work a second time on the safe system and then comparing the results for correctness; writing articles might also belong to this class of activity. Bitcoin mining, on the other hand, is computationally expensive to do but cheap to verify, so such a 'safe-unsafe systems' set-up would perhaps work well for such mining.
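The compute-versus-verify asymmetry mentioned above can be illustrated with a toy proof-of-work sketch in Python (an illustrative example only; the function names and difficulty scheme here are invented for this sketch, not taken from the book or from Bitcoin itself):

```python
import hashlib

def mine(data: bytes, difficulty: int) -> int:
    """Expensive (unsafe-system) work: search for a nonce whose SHA-256
    digest begins with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(data: bytes, nonce: int, difficulty: int) -> bool:
    """Cheap (safe-system) check: a single hash suffices to confirm
    the unsafe system's work."""
    digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry is stark even in this toy: `mine` may compute hundreds of hashes, while `verify` always computes exactly one, so the safe system's verification cost stays negligible.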



Part of the security risk in preinstalled software is....
Part of the security risk in preinstalled software is that it isn't shrink-wrapped and has no holographic security seal? Should such thoughts be incorporated into the text?

MarkJFernandes (discuss • contribs) 08:06, 13 May 2020 (UTC)

Origin of the idea behind the "Malicious sneaky replacement of FDE system with historic clone of system ... " attack
I initially thought that this attack was described by Trammell Hudson in his 2016 33c3 talk, hosted at https://media.ccc.de/v/33c3-8314-bootstraping_a_slightly_more_secure_laptop. But when I later tried to find the relevant part of the talk, I couldn't find it. This class of attack probably has a designated name within the security community.

MarkJFernandes (discuss • contribs) 13:59, 21 May 2020 (UTC)

Mention "www.offidocs.com" & "www.onworks.net" in "Sandboxing and cloud computing" section?
Can make specific mention of https://www.offidocs.com and https://www.onworks.net, which provide a great deal of powerful and useful cloud-based software (under 'easy' software licences) free of charge (including Linux installations).

Add info about ReactOS to §"Which OS?"❓
The ReactOS operating system is an alternative to WINE over Linux for running Windows programs, and is either more secure than Windows or constitutes a path to greater security compared with Windows. See https://reactos.org/wiki/ReactOS#Secure and https://reactos.org/forum/viewtopic.php?t=17226 for more info. ReactOS targets the Windows NT family, to which Windows 10 belongs.

Catacombs's note about Tails Linux working on reproducible builds
Qubes-user "Catacombs"'s note (paraphrased by MarkJFernandes):


 * "Tails Linux is working on reproducible builds, but they aren't yet implemented. Instead, Tails Linux's current verification scheme is via a Firefox add-on extension. It works by verifying that the Tails Linux OS file I downloaded matches the image signatures provided by the extension. This puts trust in the Firefox system, and by extension in the HTTPS system (to the point of deeming the HTTPS system infallible). My thoughts are that we could generate an additional encryption layer on top of the HTTPS system, for items requiring greater security than HTTPS alone. The added layer would have more sophisticated encryption than the HTTPS system, and would use another set of security cryptographic certificates (other than the TLS certificates that HTTPS uses). Using some kind of encrypted token might be an idea, where only those users possessing the token are able to pass through the security."

MarkJFernandes's current response in respect of having an additional encryption layer:
 * "Hmmm. I think that usernames and passwords already add the second level of security you're outlining (unless I'm misunderstanding you). As for an encrypted token, two-factor authentication and two-step authentication probably effectively facilitate such second factors. Such authentication is dealt with in the book here."
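The integrity-check part of the verification scheme described in the note can be sketched as a simple checksum comparison (a sketch only: Tails' actual extension checks the download against signed image data rather than a bare hash, and the file path and expected digest below are hypothetical):

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Hash a (possibly large) downloaded image incrementally,
    without loading it into memory all at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def image_matches(path: str, expected_hex: str) -> bool:
    """Compare the downloaded file's digest against a published value."""
    return sha256_of_file(path) == expected_hex.lower()
```

As the note observes, such a check is only as trustworthy as the channel over which `expected_hex` was obtained; fetching the expected digest over the same HTTPS connection as the image itself adds little.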

Catacombs's note about what is perhaps Catacombs's most secure laptop/tablet/smartphone
Qubes-user "Catacombs"'s note (paraphrased by MarkJFernandes):


 * "Curiously, I bought an old Android device, and then used the MrChromebox.tech script to put coreboot/SeaBIOS on it so that I could boot Linux on the device. Now if I boot Tails Linux on the device, rebooting each time I have a different computing purpose in mind, it is perhaps the most secure computing device I have, although I do worry about having to trust that Google won't find a way to feed all of my internet typing back to their servers."

MarkJFernandes's current response to note:


 * "This note can perhaps be integrated into the main content, but it might be best to build up some more information on these issues before doing any such integration. It's important to add to the book insights gathered from the practice of security concepts."

Add mention of Puppy Linux to "Which OS?" section?
Qubes-user "Catacombs" has highlighted Puppy Linux, amongst just a few other operating systems "Catacombs" mentioned, as particularly providing certain security features, features that "Catacombs" appears to imply are somewhat distinctive and not present in Qubes.

"Catacombs"'s thoughts on Puppy Linux (paraphrased, and with some elaboration, by MarkJFernandes):


 * "Puppy Linux users seem to think that what they call a multi-save optical disc is a highly secure way to work. They re-install the Puppy Linux operating system for each user session (even if that means re-installing an old version of the OS absent the latest updates). In some ways this is similar to Qubes OS, in which a temporary VM is destroyed after completion of the specific use case for which it was created (and always by the time of reboot, as well as by the time of shutdown, of the computer). With Puppy Linux, the user can choose not to save any information after user sessions, which means that session-to-session use can be completely non-preserving of state. Since Puppy Linux is completely loaded into RAM for each session (without being installed to any of the local drives), it is slow to boot, but it runs fast. The saves on the optical disc (CD or DVD) can contain additional programs, program upgrades, and the user's personal files. During a user session, a user can opt not to save to their multi-save optical disc; they might choose this if they suspect the session has been compromised in some way.


 * Puppy Linux works without setting the root user apart for the system actions, operations and procedures normally segregated due to their increased risks to OS integrity. Users feel this is just fine, as one gets a new copy of the OS with every boot."


 * "It used to be that all of Puppy Linux could be started with a video-display option where the work of the display driver is carried out by the main processor (rather than by video chips and graphics cards). It's true that display drivers are available for most of the various video chips and graphics cards around, but such driver bypassing prevents drivers from doing things considered anti-secure and anti-private: it makes the system more secure. The same measure could be implemented in Qubes, but then who wants a slower Qubes?"

MarkJFernandes's thoughts on this:
 * "Related to:
 * ‣ optical-disc info in "Conventional laptops" subsection of the "Factory resets" section.


 * ‣ security advantages outlined in the "Rewritable media vs optical ROM discs" section.


 * ‣ following excerpt from §"Regarding operating system":


 * Some general security advice in relation to using an operating system is for users to have an administrator account that is different to the standard account used for everyday computing. The standard account has fewer privileges. The more powerful administrator account, which also has higher associated security risks, should have a “minimised” exposure to risk due to it being used only when needed—when not needed, the less risky standard account is instead used.



Probably a section on internet-security software and anti-malware software should be added as a gold-coloured-heading section to this chapter...
Examples of such software: Little Snitch; WireShark; Norton Internet Security; McAfee anti-virus software.

Add section called "Communication software" to this chapter?
Email is well known not to be a secure method of communication. This could be documented in a new section added to this chapter, called "Communication software". The section could mention how email can be made more secure with PGP encryption and signing. It could then go on to cover the different software available that offers end-to-end encryption of communications (such as Skype). Mention could also be made of how insecure mobile and telephone networks appear to be (because intermediate call centres can apparently listen in on such communications).

Is there a security principle of "software-less hardware", and if so, should it be added...?
I'm currently working on the idea of a software-less computer system that you purchase or establish. The system can come with software already loaded, but it ought then to be made software-less by wiping it clean of software. This includes not having software in the firmware, especially the BIOS/UEFI firmware. Once such a system is established, the user then downloads all the software they require (using their secure communications device, as described in §⟪Regarding how to obtain software⟫) and proceeds to install it on the system. The user can later wipe the system to a clean state again, and reinstall afresh for security reasons. The system doesn't necessarily need to be placed in a "blank" state, but any software on it must be wiped off in the process of reinstalling the software for the system.

The reason I think there is a security principle in this is that it splits the task of establishing a secure system into two distinct parts, each of which appears to be able to be dealt with effectively on its own. The hardware can be verified using a variety of verification methods, many of which are documented in the ⟪Broad Security Principles⟫ chapter under §⟪Measuring physical properties for authentication⟫ (including simple visual inspection). Hardware tampering is likely much rarer than software tampering, simply because of the nature of hardware, and is likely easier to detect. Because software tampering may be hard to detect and easy for adversaries to carry out, it is probably a good idea simply to download all the software using a secure communications device; §⟪Regarding how to obtain software⟫ provides general information on how to obtain software securely. Splitting the task into these two distinct activities seems to constitute a security principle for the establishment of secure systems.

If such a security principle does indeed exist, then it may be worthwhile adding information about it to this book, perhaps to this chapter.

I considered whether BIOS firmware (and also other firmware) is perhaps mostly protected both by not allowing re-flashing, and by insisting, in the update process, that updates be cryptographically signed with a private key known only to the vendors of the firmware software. Briefly researching this, it does appear that such protection is officially advised, in the form of NIST guidelines (see https://cts-labs.com/secure-firmware-update). However, because the `flashrom` software appears to be very widely supported by the different motherboards available, and because of the information here, it appears that BIOS/UEFI vendors mostly don't implement such a protocol (which is perhaps quite worrying).

The concept of "software-less hardware" is related to Joanna Rutkowska's paper "State considered harmful" (subtitle "A proposal for a stateless laptop") dated December 2015.

== There are other kinds of bootloader besides BIOSes and UEFIs, as well as similar security threats based in other kinds of firmware (such as in the firmware chips of graphics cards), so perhaps material should be extended and generalised to cover them? ==

There are kinds of bootloader other than BIOSes and UEFIs, so perhaps material in this chapter should be generalised to cover those other kinds as well. The Raspberry Pi is an example of a computing device that uses a bootloader that is neither a BIOS nor a UEFI.

Similarly, malware in the BIOS/UEFI firmware isn't the only firmware point of weakness in computer systems. There is firmware for all kinds of things, from disk drives to network cards, graphics cards, memory sticks, and so on. Malware can reside in any of this other firmware, and may use different microchips from the BIOS/UEFI firmware microchips. The "Security of BIOS/UEFI firmware" section in this chapter should probably be extended and generalised to cover these other threats.

Raspberry Pi device can be used to flash the ROM chips on other devices (such as a laptop)
This is another advantage of the Raspberry Pi device in relation to using it as a device for secure downloading. The only additional things needed appear to be wires and a SOIC-8 Pomona clip; these appear to be mostly safe to use, in the sense that hidden hardware mostly cannot be concealed in them (not the case with microchips, for example). See here for info on how a Raspberry Pi can be used in this way. This method effectively turns the Pi device into a USB (flash) programmer, but perhaps unlike USB programmers, you can purchase the equipment securely, i.e. you can thwart MITM attacks by picking a random unit from a shelf in a physical store&mdash;it's not so clear you can buy USB programmers in this way. This should probably be added as one of the advantages in the ⟪Pros vs Cons⟫ section.
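For checking a dump taken this way against a trusted reference image, a byte-level comparison can localise any tampering (a minimal sketch; the flashrom invocation in the comment is an assumed typical one for the Pi's SPI interface, and should be checked against the flashrom documentation for your particular chip and wiring):

```python
# A dump might first be read back with flashrom over the Pi's SPI pins, e.g.
# (assumed typical invocation -- verify wiring and chip support before running):
#   flashrom -p linux_spi:dev=/dev/spidev0.0,spispeed=1000 -r dump.bin

def first_difference(dump_path: str, reference_path: str):
    """Return the byte offset of the first mismatch between a firmware dump
    and a trusted reference image, or None if the two are identical."""
    with open(dump_path, "rb") as fa, open(reference_path, "rb") as fb:
        offset = 0
        while True:
            a = fa.read(4096)
            b = fb.read(4096)
            if a != b:
                for i, (x, y) in enumerate(zip(a, b)):
                    if x != y:
                        return offset + i
                # Chunks agree where they overlap, so one file is shorter.
                return offset + min(len(a), len(b))
            if not a:  # both streams exhausted at the same point
                return None
            offset += len(a)
```

Reporting the offset of the first mismatch, rather than a bare pass/fail, can help distinguish a deliberate modification in one region of the chip from, say, a truncated or misaligned read.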

== Wherever the security advantage of the principle outlined in §⟪User randomly selecting unit from off physical shelves⟫ is mentioned... ==

Wherever the security advantage of the principle outlined in §⟪User randomly selecting unit from off physical shelves⟫ is mentioned (such as in this chapter in respect of smartphones and the Raspberry Pi device, as well as in other places in the book), mention should probably also be made of the principle outlined in §⟪Ordering many units of same product⟫, especially when the item to be purchased is cheap. For example, ten Pi devices can be bought from the same store, and nine random units then returned, to better ensure that the one you keep hasn't undergone any tampering. The security advantage derived from this second principle seems significant.

Include implementing extra sandboxing for closed-source blobs, under the §⟪Sandboxing and cloud computing⟫?
Closed-source blobs, as pondered in the discussion under the Raspberry Pi forum topic "Secure computing using Raspberry Pi for business purposes", can be perceived as particular security concerns of a computer system. One potentially novel approach to dealing with them is to reverse engineer them and then implement extra sandboxing in-code on the extracted source code, to limit their potential harm. It is probably legal to do this under UK law, so long as it is done privately and the user isn't under a contract preventing them from doing so&mdash;see section 50C of the Copyright, Designs and Patents Act 1988. Additionally, it might also be legal for such users to release the source-code modifications (not the modified source code) in the form of a patch, so long as the patch doesn't constitute an infringing "copy" of part or all of the closed-source blob; others would then also be able to patch their closed-source blobs in the same way (thus saving on the work of implementing such sandboxing across the entire user base). The sandboxing doesn't necessarily need to take the form only of code additions and code rewriting; it can also take the form of simply deleting portions of the closed-source source code deemed unnecessary for some users, where leaving them in would only increase the attack surface and/or potential vulnerabilities of the blobs (see the ⟪Do avoiding "bells and whistles", trying to be "barebones", and reducing power & capability, constitute a broad security principle?⟫ note for more about this).

The open-source software me_cleaner appears to implement this principle, by modifying the Intel ME closed-source firmware blob (a closed-source firmware that is controversial due to perceptions of it being a potential security vulnerability) to reduce its scope for inflicting or enabling damage to a user's computing activities.

JTAG interfaces (perhaps through a JTAG port) can possibly be leveraged to flash more easily firmware into ROM chips on systems that support JTAG
See https://en.wikipedia.org/wiki/JTAG#Storing_firmware. JTAG has been identified as a security risk because of this ability, but it could in fact be an advantage. Being able to reinstall firmware appears to be a very good security precaution, and without JTAG this is perhaps more difficult, especially when malware is already in the target firmware chips. Standard firmware-upgrade utilities may not be capable of removing malware that is already in the pre-existing firmware. In such cases, it is prudent to wipe the pre-existing firmware clean, to get rid of any pre-existing malware. The JTAG interface might more easily facilitate such wiping, as well as the subsequent re-installation of genuine firmware code. Having removable (socketed) BIOS-firmware ROM chips may not help: if you swap out the existing chips for blank chips, a system without JTAG or a USB programmer may be incapable of facilitating the re-installation of the firmware code&mdash;without a BIOS, due to having only blank BIOS-firmware ROM chip(s) after swapping out the chip(s), your computer system perhaps won't start, nor get to the point where new firmware can be installed. An alternative to JTAG is to use a USB programmer where you "manually" wire up the programmer to the pins of the ROM flash chips. However, that alternative may not be as easy as using any pre-existing JTAG port, in light of the "manual" wiring that seems to be required when using USB programmers.

In light of these thoughts, it may be a good idea to use hardware that has a JTAG port.

Using firmware-chip sockets may be a good idea, for security reasons; mainboards with built-in mechanisms for 'properly' wiping pre-existing firmware stored on chips, may also be good
The Coreboot documentation indicates that, when desoldering flash firmware chips for the purpose of installing Coreboot, it is recommended that the soldered-on chips be replaced with a flash socket holding a removable flash chip. Particular mainboards that have such sockets and socketable chips "off-the-shelf" can be used to save on the work involved in doing such replacement yourself (the ASRock H81M-HDS, ASUS F2A85-M, and Foxconn D41S mainboards all use socketable flash). From a security perspective, such socketable flash may be a good idea, in terms of being better able to ensure the integrity of the firmware. For example, you can create several back-up firmware chips that you securely store in different remote locations; if intrusion is ever detected, you can then simply replace your socketed firmware chip with one of your trusted backup chips. Without a socket, the alternative process may involve the labour of desoldering the present chips, and/or fiddling with a USB programmer, at the "point in time" when intrusion is detected; with a socket, you can potentially do the work beforehand and save on labour at that point.

Some mainboards have built-in mechanisms for 'properly' wiping pre-existing firmware stored on chips. This again may be good for security, and it doesn't seem that all systems have this facility. Some systems appear only to have mechanisms for updating and upgrading pre-existing firmware code (rather than properly wiping it); unfortunately, if malware is already in the code being updated, such mechanisms may not remove it. Apparently, most Linaro boards have mechanisms for properly wiping pre-existing firmware stored on chips&mdash;see here.

== Add info to §⟪Security of BIOS/UEFI firmware⟫ about write-protect physical switches potentially being useful for protecting firmware....? ==

Because firmware may be alterable by other software during normal OS operation or at boot time, it could be a good idea to employ physical write-protect switches to prevent this from happening. In my Chromebook C720, for example, there is a write-protect screw ostensibly for making the firmware, or portions of it, read-only.

== Info for §⟪Security of BIOS/UEFI firmware⟫; might be easier to secure firmware on mobile devices, when compared with securing firmware on larger, more conventional kinds of computers ==

It might be easier to reinstall firmware in a proper way on mobile devices (such as smartphones and tablets) than on other computing devices, because on mobile devices all the firmware is perhaps usually located within a single ROM chip. On other computers (including laptops), there may be several different chips, each containing its own separate firmware in which malware may be present. I have personally found this to be the case with my Chromebook and my laptop: there's the network card firmware to consider, the firmware of the SSD or HDD, the BIOS firmware, the graphics card firmware, etc. Whilst such easy reinstallation is desirable, it perhaps means that the single firmware chip is also a more potent point of attack for adversaries (due to the highly integrated nature of the computer system that constitutes the mobile device). I have asked a question concerning this paragraph at https://security.stackexchange.com/q/244266/247109

It appears that there is a convenient mechanism for faithfully reinstalling the firmware of certain Lenovo tablets, such that pre-existing malware gets wiped, simply by installing via the tablet's USB socket with connection to another uncompromised computer. See https://androidmtk.com/download-lenovo-stock-rom-models.

Raspberry Pi used as a secure downloader perhaps doesn't have much of a disadvantage based in needing to acquire a secure VDU...
The con listed in the Wikibooks book regarding whether a Raspberry Pi can be used as a secure downloader, in respect of needing to acquire a secure VDU, perhaps isn't so much of a con. If you are just running the Raspberry Pi OS to download files over public HTTPS URLs to an SD card, there is probably little to worry about if someone is fiddling with your VDU images, so long as you enter no confidential information. The same may also be true if, after downloading such files, you are writing data images to removable media. The OS run on the Raspberry Pi device can be hardened to be more resistant to attacks focused on meddling with VDU images.

== Re. §⟪How to ensure installed operating system is not compromised via an evil maid attack, during periods when machine is not meant to be on⟫ ==

It might be worthwhile being explicit that the risk of the hardware being or becoming compromised also includes the risk of firmware modification; the distinction between software and hardware is somewhat blurry. So it might be worthwhile changing "the risk of the hardware being or becoming compromised is very low" to "the risk of the hardware and firmware being or becoming compromised is very low". Protecting the BIOS firmware might entail simple measures such as the use of a BIOS password; however, removing the CMOS battery would probably wipe the password (I would have thought). Using the Heads BIOS/UEFI boot-firmware system, such that the TPM is used to secure the firmware code, would probably be better for securing the BIOS/UEFI firmware.

Probably the most often touted way of securing the installed operating system is to use the Secure Boot protocol (whereby the system disk is locked to the particular BIOS/UEFI firmware by means of cryptographic signing). Documenting this as another method might be worthwhile. However, Secure Boot may not be all that secure: according to the document at http://www.c7zero.info/stuff/DEFCON22-BIOSAttacks.pdf, attacks do exist against 'Secure Boot'-enabled UEFI set-ups. In fact, it may be that Secure Boot is quite weak in some regards. In this respect, the advice of this Wikibooks book that the bootloader be physically secured separately from the computer system seems likely still to hold true, in spite of any perceived security benefits of 'Secure Boot'. More generally, basic security principles seem likely to provide much better security than certain 'technical computer wizardry'. A 'back to basics' security approach is perhaps needed, and is perhaps "missing from the vocabulary" of people coming from a background strongly based in the technical side of computing.