Talk:End-user Computer Security/Main content/Broad security principles

Adapt business/work model to lessen impact of threats in threat model
A new broad principle that could be added to this section is the adapting of an entity's business/work model to lessen the impact of the threats in the entity's threat model. For example, in circumstances where intellectual-property theft is rife, you could change your business model so that you are less dependent on intellectual-property protection. This happens in the open-source community, where the business model is not so reliant on protecting intellectual property; instead, revenues are generated in ways that are probably largely immune to attacks based on stealing intellectual property.

--MarkJFernandes (discuss • contribs) 13:56, 17 April 2020 (UTC)



Mistakes in "Security measure of taking key out of self-locking padlock" photos
Mistakes were made in the taking of these photographs. The padlock should appear to be unlocked, and ideally the box should probably be open (rather than closed).

MarkJFernandes (discuss • contribs) 14:11, 21 May 2020 (UTC)

Relationship between “Destroy key when attacked” principle and military strategies
I've read through the list of military strategies and concepts at https://en.wikipedia.org/wiki/List_of_military_strategies_and_concepts, and have also done some brief internet research, but can't find this principle distinguished anywhere. It is similar to 'scorched earth' policies, but not quite the same, because there is no retreating or advancing.

MarkJFernandes (discuss • contribs) 14:13, 21 May 2020 (UTC)

Add "Ward off criminals by being public about your security" as a broad security principle?
Criminals can be warded off when they believe you have good security in place, especially when they believe your security might get them caught by the police. Should this be added to this chapter as a broad security principle? Maybe it is not broad enough and should instead be put in the "Miscellaneous notes" chapter. Or maybe it simply isn't significant enough to be in the book at all.

It is quite related to the "Publishing security methods" broad principle, and perhaps should be mentioned in the documentation of that principle.

"Security by screaming" as broad security principle?
Perhaps there is a broad security principle that can be labelled "security by screaming". Essentially, more security is attained by proclaiming to the world, almost in a screaming-like way, the awfulness of your security compromises. In this fashion, attackers may be warded off for fear of being found out, possibly because of the increased attention paid in their direction.

== Add "Security through obscurity" reference to §"Publishing security methods" ==

"Security through obscurity" contrasts with the "Publishing security methods" broad security principle. The Wikipedia page on "Security through obscurity" gives justification for why publishing security methods is likely better.

Add "Security layers with differing credentials, for improved security of more valued assets" as broad security principle?
Adding this was an idea borne out of initial discussion of this book with the Qubes user "Catacombs".

Their idea was possibly to include information on "..nested encrypted folders...". The idea was something like this: in addition to the hard drive's full-disk encryption, there would be a second layer of encryption for all of their highly private letters, using an encrypted-folder mechanism distinct from the full-disk encryption. A user would enter their credentials initially to get access to the operating system, which would decrypt the full-disk encryption but not the encrypted folder. To get access to the encrypted folder, the user would have to enter a second set of credentials (perhaps a second password). In this way, there would be something like access control to a building, and then greater access control to a highly confidential room within that building. Catacombs put forward that leaving "...information openly in the file structure..." was not safe, and further implied that such a security mechanism overcomes the weakness of not having stronger security for information more sensitive than the average information on your system.

Catacombs said that in 2009, they had a MacBook Pro which had the ability to create a software-driven encrypted partition inside of the main file structure. They further added that reviewers felt the encryption used back then was quite good.

I suggested that the more general concept of "encrypted container within encrypted container" might appropriately be added to the book. I went further and suggested that perhaps the even more general concept of "encryption within encryption" should be added instead. On reflecting on this latter concept, I realised that an even more general concept existed, one that includes the physical security mentioned by way of analogy in the building-and-room analogy above. I've labelled this concept "security layers with differing credentials, for improved security of more valued assets", because I could not find it singly labelled elsewhere. The concept is perhaps related to User Account Control (UAC), which is touched upon earlier in this book, in the "Regarding operating system" section, in the following excerpt:

 "Some general security advice in relation to using an operating system, is for users to have an administrator account that is different to the standard account users use for everyday computing. The standard account has fewer privileges. The more powerful administrator account, that also has higher associated security risks, should also have a “minimised” exposure to risk due to it being used only when needed—when not needed, the less risky standard account is instead used."

It is also related to the concepts of security clearance and multi-level security but is not the same as either of these concepts.
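As a toy illustration of the layered-credentials idea, the following Python sketch shows how unlocking the outer "full-disk" layer alone still leaves the inner "encrypted folder" unreadable. All names, salts, and the XOR "cipher" here are illustrative stand-ins, not a real encryption scheme; in practice a vetted cipher such as AES-GCM from a proper library should be used.

```python
import hashlib

def keystream(password: str, salt: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustrative only --
    # NOT cryptographically sound; real systems use vetted ciphers.
    out = b""
    counter = 0
    while len(out) < length:
        block = salt + password.encode() + counter.to_bytes(4, "big")
        out += hashlib.sha256(block).digest()
        counter += 1
    return out[:length]

def xor_crypt(data: bytes, password: str, salt: bytes) -> bytes:
    # XOR with the keystream; applying it twice with the same
    # password/salt decrypts.
    ks = keystream(password, salt, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

letter = b"highly-private letter"
# Inner layer: the "encrypted folder", protected by its own password.
inner = xor_crypt(letter, "folder-password", b"inner-salt")
# Outer layer: "full-disk encryption", protected by a different password.
disk = xor_crypt(inner, "disk-password", b"outer-salt")

# Unlocking the disk alone does NOT reveal the letter...
assert xor_crypt(disk, "disk-password", b"outer-salt") != letter
# ...both sets of credentials are needed, mirroring the
# building-then-confidential-room analogy.
assert xor_crypt(xor_crypt(disk, "disk-password", b"outer-salt"),
                 "folder-password", b"inner-salt") == letter
```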

== Considered whether high-latency email should be mentioned in the "Time based" broad-security-principles section, or elsewhere in book... ==

Decided such a concept likely should not be mentioned in the book. The concept appears to be more about having anonymity, and the book doesn't deal so much with establishing anonymity. Knowing how to be anonymous in computing doesn't appear to be that useful to the everyday computing conducted by most users. It is perhaps more useful to certain fringe activities, such as the reporting of human-rights abuses. Also, such things are very likely well documented elsewhere on the net, probably even as free resources.

== Add information under "Geospatial" broad security principles, concerning potential security advantages gained by moving around? ==

Security advantages may be gained by moving around, and computing from different geospatial locations, especially if an adversary is focusing their attacks on a specific geospatial location. Such a strategy is probably documented as some kind of military strategy. The Qubes user "Catacombs" has said that such an approach might be useful for a country like China, presumably because of the totalitarian government there.

Improvements to §"Time based"
The subsection "Based on time taken to forge" should probably be placed under the subsection "Based on time passed", since it too is based on time passed: security is attained based on how much time has, or has not, passed. Relatedly, the current content under "Based on time passed" might be best placed in a subsection of that subsection, called something like "Security derived from age". Another subsection could be created under "Based on time passed", called something like "Based on security-credential expiry date". For example, you may wish to use a private key to create new mobile-phone lock passwords that expire at the end of each day (perhaps simply by PGP-signing the day's date). If an adversary were to capture the password, perhaps because you unlocked your phone in a public shopping centre, then because the password would expire at the end of the day, you might still maintain a good level of security.
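A minimal sketch of the expiring-credential idea, using an HMAC as a stand-in for the PGP-signing scheme described above (the `SECRET` value and the 8-character truncation are illustrative assumptions, not a recommendation):

```python
import datetime
import hashlib
import hmac

# Hypothetical secret held only by the phone's owner.
SECRET = b"private key held only by the phone's owner"

def daily_password(day: datetime.date) -> str:
    # Derive a short password from the day's date. It implicitly
    # "expires" because tomorrow's date yields a different password.
    tag = hmac.new(SECRET, day.isoformat().encode(), hashlib.sha256)
    return tag.hexdigest()[:8]

pw_today = daily_password(datetime.date(2020, 5, 21))
pw_next = daily_password(datetime.date(2020, 5, 22))
# A password captured in a public shopping centre is useless tomorrow.
assert pw_today != pw_next
```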

X-ray and T-ray probably should always have the initial letter capitalised....
X-ray and T-ray should probably always have the initial letter capitalised. If this is the case, correct the mistakes where this has not happened, not only in this chapter but anywhere else in the book.

== Rename §⟪Relying on high production cost of certain security tokens⟫ → ⟪Using high-cost-to-forge barriers for greater security⟫? ==

Such generalisation in this "Broad security principles" chapter is generally desirable because the chapter is focused on broad/general principles. It does appear likely that the proposed new name constitutes a distinct and genuine broad security principle.

If such renaming took place, the previous body text would perhaps then be placed again under the ⟪Relying on high production cost of certain security tokens⟫ heading, but that heading would instead be a sub-heading under the suggested, more general heading of ⟪Using high-cost-to-forge barriers for greater security⟫. Also, with such renaming, the "Cryptocurrency-like mining to increase trust" inventions could then appropriately be linked to as being categorised under the new heading.

Cheap SD cards are a security risk partly because of how cheap they are: an adversary can perhaps replace 1000 cheap SD cards with deceptive, espionage-tech-laden fakes without too much difficulty, because of their low cost. The same could also be true of BIOS EEPROM chips. However, replacing 1000 expensive SD cards, where the greater expense can be verified using rigorous checks on the higher capacity and/or speed of the cards, is probably much more difficult. For both SD cards and EEPROM chips, such greater expense could perhaps also be established by filling each one with a blockchain signing the chip's serial number or some other suitable identifier. The ideas of this paragraph could perhaps be placed under another sub-heading of ⟪Security by costly verifiable features in device⟫. Interestingly, gold-plating EEPROM chips and the like could perhaps provide such greater security. There would have to be some way for users to authenticate that the gold is genuine, and there appears to be much information on the internet regarding testing the authenticity of gold; see https://www.wikihow.com/Tell-if-Gold-Is-Real.
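One way to read the "filling each one with a blockchain" idea is as a hash chain seeded with the chip's serial number: cheap to re-derive and verify, but it must occupy real storage capacity, which a low-capacity counterfeit cannot supply. A hypothetical Python sketch (the serial number and block count are made up for illustration):

```python
import hashlib

def fill_with_chain(serial: bytes, blocks: int) -> list:
    # Build a hash chain seeded by the chip's serial number. Each
    # 32-byte block depends on the previous one, so the whole chain
    # must be stored to be presented for verification.
    chain = [hashlib.sha256(serial).digest()]
    for _ in range(blocks - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

chain = fill_with_chain(b"SDCARD-SN-0001", 1000)

# Verification: independently re-derive the chain from the serial
# number and compare the final block read back from the card.
check = hashlib.sha256(b"SDCARD-SN-0001").digest()
for _ in range(999):
    check = hashlib.sha256(check).digest()
assert check == chain[-1]
```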

This idea that there is sometimes more security due to the higher costs associated with forging might lead one to believe that the CPU in a computer system is less of a point of attack than the embedded controller (EC) in the same system: it is probably generally cheaper to create a fake EC processor than a fake CPU. Following a similar line of thought, system-on-a-chip (SoC) systems may provide a security advantage over systems having greater numbers of individual components that can, after manufacture, be physically separated and replaced: to put a backdoor in the CPU (for example) of an SoC-based system, you still have to go to the expense of replicating the rest of the SoC's functionality; for a non-SoC system, you can just create a fake CPU, which would likely be cheaper than making a whole fake SoC.

Perhaps mention 3D printers, and FPGAs programmed as CPUs, in §⟪DIY security principle⟫?
3D printing would seem at times to be an application of the DIY security principle. By 3D printing hardware and other physical objects, you can probably be more confident regarding the integrity of the printed items (especially in respect of there being no hidden espionage tech or other hidden "maltech").

On another note, more related to microchips, FPGAs can be programmed to function as CPUs, in a DIY way, such that certain CPU attacks (such as via hardware backdoors) can be thwarted (see "Verifiable CPU" section at https://www.bunniestudios.com/blog/?p=5706).

These thoughts can perhaps be mentioned in the §⟪DIY security principle⟫. However, because the ideas are quite concrete, perhaps they should be placed elsewhere in the book, either in addition or instead.

How to compare live OS discs obtained using multiple channels, when you have no trusted OS....
In respect of §⟪Using multiple channels to obtain product⟫, a scenario may arise where you have what should be multiple copies of a live OS disc, obtained using multiple channels, but no trusted OS that can be run to do byte-for-byte comparisons of the discs to make sure they are all the same. In such a situation, you can do some form of checking by loading each disc in turn, and then, within the OS session loaded from each disc, byte-for-byte comparing all the other discs against the loaded one. None of the OS discs is trusted, but the chance that all of the discs have been compromised is quite low. You can leverage that probability to reach some level of confidence that none of the discs has been compromised, whenever all the just-mentioned byte-for-byte comparisons throw up no differences (pass successfully).
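The cross-checking step can be sketched as follows. The disc contents here are simulated byte strings; in practice each "disc" would be a raw byte-for-byte read of the physical medium, performed from within each booted session in turn:

```python
import hashlib
from itertools import combinations

# Simulated disc images obtained via different channels (these byte
# strings are placeholders for full raw device reads).
discs = {
    "retail-store": b"LIVE-OS-IMAGE-v1.0",
    "mail-order":   b"LIVE-OS-IMAGE-v1.0",
    "download":     b"LIVE-OS-IMAGE-v1.0",
}

def digest(data: bytes) -> str:
    # Hashing stands in for a full byte-for-byte comparison.
    return hashlib.sha256(data).hexdigest()

# Pairwise comparison across every pair of channels: a single
# compromised disc cannot vouch for itself, because every other
# session also checks it.
mismatches = [(a, b) for a, b in combinations(discs, 2)
              if digest(discs[a]) != digest(discs[b])]
assert mismatches == []  # all channels agree
```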

This principle was developed for a Raspberry Pi project attempting to establish a secure computing environment for business purposes; see here for more about the project.

Is there a broad security principle based on having a cheap set-up?
There may be a broad security principle based on having a cheap set-up. Such extra security was touched upon in a Raspberry Pi project attempting to establish a secure computing environment for business purposes; see here for more about the project. Essentially, the security advantage I am discerning (which I think probably constitutes a broad security principle) is that if there is ever sufficient reason to believe that such a cheap set-up has become compromised, the user can then purchase a brand-new, non-compromised set-up at low cost, with the possibility of selling on the old set-up, either as spare parts or advertised as a potentially compromised system. You could perhaps do the same with an expensive set-up, but the risk of not being able to find buyers for the old system, together with the greater absolute loss incurred when more expensive goods become second-hand, could mean that the financial risk of catering for such a contingency is simply too much to bear.

Do avoiding "bells and whistles", trying to be "barebones", and reducing power & capability, constitute a broad security principle?
This is touched upon in a Raspberry Pi project attempting to establish a secure computing environment for business purposes; see here for more about the project. It is also touched upon in the geospatial broad security principle, where it is mentioned that a user may want to reduce their power and capability by not unlocking their phone in public places. It is also touched upon in other areas of the book (such as in the "Software based" chapter, in the consideration of whether the Raspberry Pi Zero device could be used as a secure downloader).

Having "bells and whistles" simply increases security concerns, and when high security is important, doing away with them where possible is likely a good idea. By moving in this direction, you may end up with a fairly bare-bones system, like perhaps some of the Raspberry Pi products, some of the products conforming to the 96Boards specifications, and some very basic non-smart mobile phones (which are perhaps better to use for secure downloading).

Reducing power and capability seems to be something of a parallel concept to trying to be "bare-bones". Essentially, security is gained at the cost of reduced power and capability. Why leave certain computer ports exposed when you don't really need them? Perhaps disable them for increased security, at the expense of some of your power and capability.

Add new broad security principle of "Using an intrusion-detection-and-recovery-from-intrusion approach instead of just a tamper-prevention approach"?
Whilst, in an ideal world, preventing tampering absolutely might be desirable, realistically, a security approach of intrusion detection coupled with recovery after such detection might be better. Preventing absolutely all forms of tampering might simply be too costly, and might not have much of an impact when the probability of tampering is very low. In that regard, it might be easier, and more beneficial, simply to detect intrusion, and then, when intrusion is detected, to re-establish your system(s) so as to "eject" any possible tampering from them.

When using an intrusion-detection-and-recovery-from-intrusion approach, you may want to use cheap components, so that if intrusion is detected, it is not too costly to replace the components with new components that you know have not been compromised (see the "Is there a broad security principle based on having a cheap set-up?" note for more about this).

In respect of trying to lock down the code and data associated with OS installations, bootloaders, BIOSes/UEFIs, and data files, it may be much easier simply to detect intrusion wherever tampering may possibly have occurred, and then just reinstall all the data and code from secure backups after such an event. This is perhaps similar to, instead of trying to establish that an OEM installation has no malware in it, simply reinstalling the whole OEM setup so as to have certainty over the security of the computer system. The "Digital-storage security through multiple copies of data" note is relevant to such an approach.
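The detect-then-reinstall approach can be sketched with a hash manifest taken while the system is known-good; the file names and contents below are made-up placeholders:

```python
import hashlib

def manifest(files: dict) -> dict:
    # Record a hash of every file while the system is known-good.
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in files.items()}

baseline = {"bootloader.bin": b"v1", "kernel.img": b"v2",
            "config.txt": b"v3"}
good = manifest(baseline)

# Later: suppose the bootloader has been tampered with.
current = dict(baseline, **{"bootloader.bin": b"v1-TAMPERED"})
now = manifest(current)
changed = [name for name in now if now[name] != good[name]]
assert changed == ["bootloader.bin"]
# Recovery: reinstall the flagged files from secure backups, rather
# than trying to prove the tampering never happened.
```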

Add §⟪Size based⟫?
There appear to be broad security principles around the subject of size. In such a new section, there could be a subsection called something like "Bigger things are harder to steal", and a subsection called something like "Smaller things are easier to hide". These ideas appear to constitute broad security principles.

By following the principle concerning bigger things, you may choose (for example) to use a big tower desktop computer instead of a small laptop/netbook, because it is easier to spot someone stealing such a big computer than a small laptop/netbook (you can't put it "under your jumper" and walk out). Such a big computer may also be cheaper and easier to use when kept in just one location, which could constitute another reason to go for such a computer.

By following the principle concerning smaller things, you may choose to store a large amount of data on an SD card that you hide in the lining in your jacket when you are travelling, rather than on a big external HDD/SSD drive. In such instances, greater security might be attained by using a smaller storage medium rather than a bigger storage medium (in contrast to the other size-based principle just mentioned).

These principles are briefly touched upon in §⟪Physically removing storage component(s) from the rest of the computer system, and then securely storing those components separately⟫. The principles do not necessarily apply only to physical size. They could, for example, apply to disk-space size: a key file may be easier to hide in email attachments using steganography if it is quite small; in contrast, it may be harder for an adversary to steal a key file through data-transfer methods if the file is extremely large.

Bigger things can be more costly to maintain, and can be an easier means by which adversaries launch "trojan horse" attacks. In respect of disk-space utilisation, one thing perhaps to consider is the malware risk for software taking up large amounts of disk space: malware checking takes longer, and there is a greater risk of failing to spot malware because of the greater complexity associated with larger space utilisation. Such size limitation as a security principle is touched upon in the "Dealing with the situation where you want to work with potentially security-compromised equipment/software" note, as well as in the security invention mentioned in the "Design feature for enabling the detection of malware in BIOS firmware" note on the talk page of the "New security inventions requiring a non-trivial investment in new technology" chapter.

Add §⟪Having thorough, great, and easy customisation in the building and maintenance of systems⟫?
Custom-building a PC/system so that, during its building as well as afterwards, great and difficult-to-predict customisation is possible, and so that components can easily be replaced using commonly available components, appears to be a good idea.

Great customisation can mean that extra security mechanisms can more easily be implemented, such as replacing an opaque computer case with transparent materials for easier visual-inspection security authentications.

Easy customisation in the maintenance of systems can mean that if a particular part is suspected of having been compromised, it alone can be easily, faithfully, and cheaply replaced; the whole system need not be "trashed", only the part being replaced.

Thorough and great customisation can mean that adversaries cannot much predict beforehand what system the user will have. With prediction, adversaries' attacks can be more focused, and can exploit the re-usability of previously formulated attacks; without it, adversaries may be at a loss as to what attacks will work, even after finding out the system configuration, because any pre-formulated "canned" attacks fail as a result of the system having been highly customised away from being vulnerable to them.

Not 100% sure these ideas constitute a broad security principle.

== Concerning §⟪User randomly selecting unit from off physical shelves⟫, and add §⟪Anonymity based⟫? ==

After trying to put into practice the "User randomly selecting unit from off physical shelves" broad security principle in respect of securely acquiring a smartphone (as presently advocated in the advice given under §⟪Getting an uncompromised smartphone and obtaining software with it⟫ of the "Software based" chapter), I have run into a few snags. Unfortunately, it appears most physical shops in the south east of England (UK) do not have smartphones actually on shelves where users can personally pick units with their own hands. Some stores (including the Carphone Warehouse, as now merged with PC World) will have staff go and get a unit of the model that you pick out at the front of the store. Unfortunately, this is open to attack by store staff, and completely undermines the security advantage highlighted in the principle. The inability to buy mobile devices by making use of this principle, coupled with the research uncovered in the writing of this book, makes me strongly suspicious that this inability is a way to leave open the possibility of hacking phones targeted at certain individuals and groups. I was hoping PC World, being a big physical store, would lend itself well to this security principle, but alas this does not seem to be the case. Notwithstanding this, the broad security principle might still be usable at certain wholesaler warehouse-type stores such as Costco; however, from photos of the inside of Costco stores, it appears that they too probably do not keep actual phone units in the main customer area. Being in the midst of this COVID-19 crisis, with the renewed and even frantic push to switch to online retail, it may be that this broad security principle will not be so good for securely acquiring phones from this time onwards, at least in the south east of England.

Not all hope is lost, though, in regard to real physical in-person shopping. Amazon is innovating a new kind of store known as Amazon Go, which is advertised as a cashier-less kind of store. It is scheduled to arrive in the UK soon, and its aspect of not having cashiers may mean that this broad security principle of random selection becomes effective. The hurdles involved in hacking the technology will likely make such hacking non-existent for the case of targeting individuals with "dodgy" phones.

There is probably a broad security principle that lies in being anonymous. When a person does things anonymously, it can be harder, at times even impossible, to individually target them, and this can result in greater security. In any case, the following ideas should somehow be added to the book, whether in a new section for such a broad security principle or otherwise. The ideas have a strong effect on the advice given in §⟪Getting an uncompromised smartphone and obtaining software with it⟫ of the "Software based" chapter.

Using the Amazon Hub Locker service in conjunction with Amazon-fulfilled orders is probably secure if you do not include any identifying details (such as your name) in the delivery address. At the fulfilment centre, Amazon's processes are likely secure enough that staff have no awareness of which order is going to which customer. If delivering to an Amazon Hub Locker in the manner just mentioned, the delivery staff/driver will quite likely not know whom the parcel is for. After delivery, the security at the Amazon Hub Locker will likely be enough to prevent people from getting to your locked item: the locker uses a digital unlocking pass code sent to the buyer, and there are several lockers at each site into which the delivery might be placed (making it harder for individuals to figure out which locker needs to be broken into for the purpose of targeting you). When you receive the email saying that your delivery has arrived, you should not look at the email until you have arrived at the locker; when you arrive, you then look at the email to get the unlock code and locker number. (If, on the other hand, you look at the email quite a while before arriving, someone may be able to intercept the code and locker number, whether by means of clandestine photography or psychic interception.) The email delivery should be secure enough if you use a mail server that insists on encrypting emails in transit whenever the mail server on the other end supports such a capability (Gmail servers are examples of such servers), and if the Amazon mail server has the same behaviour (I would be very surprised if the Amazon mail servers didn't automatically use the standard mail-server-communication encryption for emails sent to mail servers able to communicate using such encryption technology).

It is likely important that a big business such as Amazon is used, partly because small businesses don't necessarily have such well-developed security practices and measures. For example, if you shop on the website of a small business, they might spy on your IP address and, in some cases, use that to target you. Such activity is probably unlikely with a big business like Amazon, because of the likely many technological and organisational barriers to it. You could possibly overcome IP-address-based targeting by using anonymity-oriented practices, such as using a short-lived dynamic IP address (for some set-ups, if you just restart your broadband router, you'll get a new IP address) or a VPN.

On the internet, it is advised that greater anonymity can be attained by paying for Amazon purchases with Amazon gift cards rather than with a bank card registered to your address and person. I am not sure whether this is necessary, but it could help; using gift cards does seem like a useful idea for generally remaining anonymous across all types of shopping (not just Amazon shopping).

Interestingly, I looked into whether buying from a third-party seller on Amazon might be secure enough for my purposes, simply because I wanted to save money, and it turns out that buying from a third-party seller might be even more secure; it is certainly a potential way to save money when trying to acquire goods securely through Amazon. Presently, you still need to make sure that the order is Amazon-fulfilled, as otherwise you cannot use the Amazon Hub Locker service; such use is required for attaining the needed anonymity. Amazon customer support said, on 23rd November 2020, that only the name and delivery address would be passed on to a third-party seller when buying from such a seller through Amazon, and that, in particular, the email address, phone number(s), bank-card details, and billing address would not be passed on to the seller. When using the Amazon Hub Locker system, you should (as articulated above) make sure the delivery address doesn't include your name. In addition, when buying from a third-party seller, you should also make sure the name data (outside your delivery-address data) gives no indication of your true identity, since the name data might be passed on to the third-party seller, and that seller might not be trustworthy (they will quite likely not be as trustworthy as Amazon). Brief internet research, as well as my prior experiences, seems to indicate that it is likely legal in the UK to use a pseudonymous alias in purchases. To effect such use, you will need to change the name data, both in the delivery address and in your name fields, to some alias that doesn't much identify you and that also doesn't arouse any suspicions; using a name that might commonly be found in the society in which you live, but that isn't too obvious (perhaps avoid names like Joe Bloggs or John Doe?), might be a good idea.
Fortunately, from my analysis of the "Amazon Conditions of Use and Sale" terms dated 29.1.2020, that version of their contract allows for the use of pseudonymous aliases. I can imagine that many Amazon customers want to buy anonymously, and that Amazon has baked this facility into their shopping experience. It is easy to change your name data by simply going into the Amazon account settings for your account and making the appropriate changes. By following the measures outlined in this paragraph, the third-party seller should hopefully be oblivious as to who the purchaser is behind each such order, and so will hopefully not be able to target you on an individual basis (nor pass your details to others for any such targeting). You should then have enough security to buy digital electronics goods, such as computers and smartphones, securely.

It should be noted that the Amazon Hub Locker service appears to have Amazon lockers all over the place where I am, in Essex, England, UK. Such widespread proliferation can increase security. You can, for example, keep changing the destination locker for each new order you make. You can potentially also choose a locker quite distant from you, if you suspect you are being targeted based on local geography.

UPDATE. After applying the principles outlined here regarding making anonymous purchases using the Amazon Hub Locker service, to set up a trusted, low-cost, secure, basic, barebones "Raspberry Pi Zero"-powered system (over the 2020-21 winter), I have reached the conclusion that the security of the Amazon Hub Locker service was likely somehow compromised in my purchases. In particular, keyboard remote control seems to have somehow been achieved. I am not sure which component was compromised to achieve such remote control, but if one component was compromised, then any of the other components could also have been compromised in the same or similar ways. Because I was very careful to physically secure the system components when they were in my possession (especially at my premises), I am led to believe that the Hub Locker service was compromised somehow. Keyboard remote control seems to be a particular kind of attack that, at least in my case, I have experienced often across various computing devices. I have no idea what the right next step is now, as to some extent I appear to have exhausted all avenues. Fortunately, because I baked into the 'protocol' the keeping of financial costs as low as possible in terms of capital expenditure, I have not lost much in terms of money spent; the Pi Zero device is about as cheap a general-purpose brand-new computing device as you can get.

== Add information under "Geospatial" broad security principles, concerning "...foreign products may also be more immune to local attacks, such as local attacks from various government agencies..." and related ideas? ==

See https://www.raspberrypi.org/forums/viewtopic.php?f=41&t=286049&p=1731799#p1736148

Add information to this chapter on connections between component popularity and "trustability", and design-modification difficulty and "trustability"?
"...Yes, but then the more generic and popular a component is, the greater the review by users is perhaps? Hardware doesn't change often, and many ppl use certain popular CPUs, perhaps leading to some level of trust ("if you haven't heard of any problems with it yet, it's probably okay")? ..." - https://www.raspberrypi.org/forums/viewtopic.php?f=41&t=286049&start=50#p1737381

Substantiation for "Minimally-above-average security" broad security principle
https://security.stackexchange.com/a/2956/247109