Networks (Con’t) Security

• Network Drivers (Con’t)
• Security

Goals for Today
CS194-24: Advanced Operating Systems Structures and Implementation
Lecture 23: Networks (Con’t), Security
April 28th, 2014, Prof. John Kubiatowicz
http://inst.eecs.berkeley.edu/~cs194-24
• Network Drivers (Con’t)
• Security
Interactive is important! Ask Questions!
Note: Some slides and/or pictures in the following are adapted from slides ©2013.
4/28/14 Kubiatowicz CS194-24 ©UCB Fall 2014

Recall: A Little Queuing Theory: Some Results
[Figure: arrivals at rate λ enter a queue in front of a server with service rate μ = 1/Tser]
• Assumptions:
  – System in equilibrium; no limit to the queue
  – Time between successive arrivals is random and memoryless
• Parameters that describe our system:
  – λ: mean number of arriving customers/second
  – Tser: mean time to service a customer (“m1”)
  – C: squared coefficient of variance = σ²/m1²
  – μ: service rate = 1/Tser
  – u: server utilization (0 ≤ u ≤ 1): u = λ/μ = λ × Tser
• Parameters we wish to compute:
  – Tq: time spent in queue
  – Lq: length of queue = λ × Tq (by Little’s law)
• Results:
  – Memoryless service distribution (C = 1):
    » Called an M/M/1 queue: Tq = Tser × u/(1 – u)
  – General service distribution (no restrictions), 1 server:
    » Called an M/G/1 queue: Tq = Tser × ½(1+C) × u/(1 – u)

Recall: Transmission Control Protocol (TCP)
[Figure: byte stream “..zyxwvuts” in at one end emerges as “gfedcba” at the other, passing through routers]
• Transmission Control Protocol (TCP)
  – TCP (IP Protocol 6) layered on top of IP
  – Reliable byte stream between two processes on different machines over the Internet (read, write, flush)
• TCP Details
  – Fragments byte stream into packets, hands packets to IP
    » IP may also fragment by itself
  – Uses window-based acknowledgement protocol (to minimize state at sender and receiver)
    » “Window” reflects storage at receiver: sender shouldn’t overrun receiver’s buffer space
    » Also, window should reflect speed/capacity of network: sender shouldn’t overload network
  – Automatically retransmits lost packets
  – Adjusts rate of transmission to avoid congestion
    » A “good citizen”
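The queuing formulas above can be checked numerically. A minimal sketch (function names are mine, not from the lecture):

```python
def mm1_time_in_queue(lam, t_ser):
    """M/M/1 result: Tq = Tser * u / (1 - u), with utilization u = lam * Tser."""
    u = lam * t_ser
    assert 0 <= u < 1, "system must be under-utilized to be in equilibrium"
    return t_ser * u / (1 - u)

def mg1_time_in_queue(lam, t_ser, c_sq):
    """M/G/1 result: Tq = Tser * (1 + C)/2 * u / (1 - u); C is the squared
    coefficient of variance of the service time."""
    u = lam * t_ser
    assert 0 <= u < 1
    return t_ser * 0.5 * (1 + c_sq) * u / (1 - u)

def queue_length(lam, t_q):
    """Little's law: Lq = lam * Tq."""
    return lam * t_q
```

Note that plugging C = 1 (memoryless service) into the M/G/1 formula recovers the M/M/1 result, which is a useful sanity check.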

Window-Based Acknowledgements (TCP)
[Figure: sender’s byte stream split into segments (Size:40 Seq:100, Size:50 Seq:140, … Size:20 Seq:380), with a message trace of Seq/Ack exchanges; each ack carries A:<next expected byte>/<remaining window>, and Seq:190 is retransmitted after repeated A:190 acks]
• Sender has three sequence regions:
  » sent and ack’ed
  » sent and not ack’ed
  » not yet sent
  – Window (colored region) adjusted by sender
• Receiver has three sequence regions:
  » received and ack’ed (given to application)
  » received and buffered
  » not yet received (or discarded because out of order)

Congestion Avoidance
• Congestion
  – How long should timeout be for re-sending messages?
    » Too long ⇒ wastes time if message lost
    » Too short ⇒ retransmit even though ack will arrive shortly
  – Stability problem: more congestion ⇒ ack is delayed ⇒ unnecessary timeout ⇒ more traffic ⇒ more congestion
    » Closely related to window size at sender: too big means putting too much data into network
• How does the sender’s window size get chosen?
  – Must be less than receiver’s advertised buffer size
  – Try to match the rate of sending packets with the rate that the slowest link can accommodate
  – Sender uses an adaptive algorithm to decide size of N
    » Goal: fill network between sender and receiver
    » Basic technique: slowly increase size of window until acknowledgements start being delayed/lost
• TCP solution: “slow start” (start sending slowly)
  – If no timeout, slowly increase window size (throughput) by 1 for each ack received
  – Timeout ⇒ congestion, so cut window size in half
  – “Additive Increase, Multiplicative Decrease”

Sequence-Number Initialization
• How do you choose an initial sequence number?
  – When machine boots, ok to start with sequence #0?
    » No: could send two messages with same sequence #!
    » Receiver might end up discarding valid packets, or duplicate ack from original transmission might hide lost packet
  – Also, if it is possible to predict sequence numbers, it might be possible for an attacker to hijack a TCP connection
• Some ways of choosing an initial sequence number:
  – Time to live: each packet has a deadline
    » If not delivered in X seconds, then it is dropped
    » Thus, can re-use sequence numbers if we wait for all packets in flight to be delivered or to expire
  – Epoch #: uniquely identifies which set of sequence numbers is currently being used
    » Epoch # stored on disk, put in every message
    » Epoch # incremented on crash and/or when we run out of sequence #s
  – Pseudo-random increment to previous sequence number
    » Used by several protocol implementations
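The additive-increase, multiplicative-decrease rule described above can be sketched as a tiny window-size simulation. This follows the lecture’s simplified rule (+1 per ack, halve on timeout); real TCP distinguishes slow start from congestion avoidance, and the event encoding is my own:

```python
def aimd(events, window=1, max_window=64):
    """Trace the congestion window under additive-increase /
    multiplicative-decrease.  events: 'a' = ack received (+1),
    't' = timeout (cut window in half)."""
    trace = []
    for e in events:
        if e == 'a':
            window = min(window + 1, max_window)   # additive increase
        elif e == 't':
            window = max(window // 2, 1)           # multiplicative decrease
        trace.append(window)
    return trace
```

For example, four acks followed by a timeout grow the window 2, 3, 4, 5 and then collapse it to 2, the characteristic sawtooth of TCP throughput.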

Recall: Socket Setup (Con’t)
[Figure: client connects to server’s listening socket; the server creates a new socket for the connection]
• Things to remember:
  – Connection involves 5 values: [ Client Addr, Client Port, Server Addr, Server Port, Protocol ]
  – Often, Client Port “randomly” assigned
    » Done by OS during client socket setup
  – Server Port often “well known”
    » 80 (web), 443 (secure web), 25 (sendmail), etc.
    » Well-known ports range from 0—1023
• Note that the uniqueness of the tuple is really about two Addr/Port pairs and a protocol

Linux Network Architecture
[Figure: layered diagram of the Linux networking stack]

Network Details: sk_buff structure
• Socket Buffers: sk_buff structure
  – The I/O buffers of sockets are lists of sk_buff
    » Pointers to such structures are usually called “skb”
  – Complex structures with lots of manipulation routines
  – Packet is linked list of sk_buff structures

Headers, Fragments, and All That
• The “linear region”:
  – Space from skb->head to skb->end
  – Actual data from skb->data to skb->tail
  – Header pointers point to parts of packet
• The fragments (in skb_shared_info):
  – Right after skb->end, each fragment has pointer to pages, start of data, and length
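The linear-region pointer discipline above (buffer spans head..end, live data spans data..tail, with headroom in front and tailroom behind) can be modeled in a few lines. This is a toy model of the kernel conventions, not kernel code; the method names mirror the real helpers `skb_reserve`, `skb_put`, `skb_push`, and `skb_pull`:

```python
class SkBuffModel:
    """Toy model of an sk_buff linear region using integer offsets in
    place of the kernel's head/data/tail/end pointers."""
    def __init__(self, size, headroom):
        self.head, self.end = 0, size
        self.data = self.tail = headroom   # like skb_reserve: leave headroom

    def put(self, n):                      # like skb_put: append data at tail
        assert self.tail + n <= self.end, "out of tailroom"
        self.tail += n

    def push(self, n):                     # like skb_push: prepend a header
        assert self.data - n >= self.head, "out of headroom"
        self.data -= n

    def pull(self, n):                     # like skb_pull: strip a header
        assert self.data + n <= self.tail
        self.data += n

    def length(self):                      # bytes of live data
        return self.tail - self.data
```

Reserving headroom up front is what lets each layer prepend its header with `push` instead of copying the whole packet.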

Copies, Manipulation, etc.
• Lots of sk_buff manipulation functions for:
  – removing and adding headers, merging data, pulling it up into linear region
  – Copying/cloning sk_buff structures

Network Processing Contexts
[Figure: the contexts (hard interrupt, soft interrupt, process context) in which network code runs]

Avoiding Interrupts: NAPI
• New API (NAPI): use polling to receive packets
  – Only some drivers actually implement this
• Exit hard interrupt context as quickly as possible
  – Do housekeeping and free up sent packets
  – Schedule soft interrupt for further actions
• Soft interrupts: handle reception and delivery

Administrivia
• Get moving on Lab 4!
  – Should be well on your way to understanding the virtual device that you are designing…
• Final: Tuesday May 13th
  – 310 Soda Hall, 11:30—2:30
  – Bring calculator, 2 pages of hand-written notes
• Don’t forget final lecture during RRR
  – Next Monday. Send me final topics!
  – I don’t really have a lot of topics yet! Right now I could talk about:
    » Mobile Operating Systems (iOS/Android)
    » The Swarm Lab

Recall: How does processor actually talk to the device?
[Figure: CPU on the memory bus with regular memory; bus adaptors connect to device controllers over address+data and interrupt-request lines; a controller holds addressable memory and/or queues plus a memory-mapped region with command (0x0007F004) and status (0x0007F000) registers in the physical address space]
• CPU interacts with a Controller
  – Contains a set of registers that can be read and written (port 0x20)
  – May contain memory for request queues or bit-mapped images
• Regardless of the complexity of the connections and buses, processor accesses registers in two ways:
  – I/O instructions: in/out instructions
    » Example from the Intel architecture: out 0x21,AL
  – Memory mapped I/O: load/store instructions
    » Registers/memory appear in physical address space
    » I/O accomplished with load and store instructions
    » Can protect with page tables

Recall: Memory-Mapped Display Controller
• Memory-Mapped:
  – Hardware maps control registers and display memory into physical address space
    » Addresses set by hardware jumpers or programming at boot time
  – Simply writing to display memory (also called the “frame buffer”) changes image on screen
    » Addr: 0x8000F000—0x8000FFFF
  – Writing graphics description to command-queue area
    » Say, enter a set of triangles that describe some scene
    » Addr: 0x80010000—0x8001FFFF
  – Writing to the command register may cause on-board graphics hardware to do something
    » Say, render the above scene
    » Addr: 0x0007F004
• Can protect with page tables

What about Protection?
• Start by asking some high-level questions…
  – What do we expect of our systems?
    » Won’t leak our information
    » Won’t lose our information
    » Will always work when we need them
    » Won’t launch attacks against other people
  – How can we prevent systems from misbehaving?
    » Never connect them to the network?
    » Always authenticate users?
    » Never use them?

Protection vs Security
• Security is a very complex topic: see, e.g., CS161
  – Security is about Policy, i.e., what human-centered properties do we want from our system
    » Usually with reference to an attack model
  – Security is achieved through a series of Mechanisms, i.e., individual elements of the system combined together to achieve a security policy
• Security: use of protection mechanisms to prevent misuse of resources
  – Misuse defined with respect to policy
    » E.g.: prevent exposure of certain sensitive information
    » E.g.: prevent unauthorized modification/deletion of data
  – Requires consideration of the external environment within which the system operates
    » The most well-constructed system cannot protect information if the user accidentally reveals their password
• Protection: use of one or more mechanisms for controlling the access of programs, processes, or users to resources
  – Page table mechanism
  – File access mechanism
  – On-disk encryption
• Can use lots of Protection but still have an insecure system!
  – Bugs, back doors, viruses, poorly defined policy, inside man
  – Denial of service, …

Preventing Misuse
• Types of Misuse:
  – Accidental:
    » If I delete shell, can’t log in to fix it!
    » Could make it more difficult by asking: “do you really want to delete the shell?”
  – Intentional:
    » Some high school brat who can’t get a date, so instead he transfers $3 billion from B to A.
    » Doesn’t help to ask if they want to do it (of course!)
• Three Pieces to Security
  – Authentication: who the user actually is
  – Authorization: who is allowed to do what
  – Enforcement: make sure people do only what they are supposed to do
• Loopholes in any carefully constructed system:
  – Log in as superuser and you’ve circumvented authentication
  – Log in as self and can do anything with your resources; for instance: run program that erases all of your files
  – Can you trust software to correctly enforce Authentication and Authorization?
  – Consider the “Swarm” and “Un-pad” views

Authentication: Identifying Users
• How to identify users to the system?
  – Passwords
    » Shared secret between two parties
    » Since only user knows password, someone types correct password ⇒ must be user typing it
    » Very common technique
  – Smart Cards
    » Electronics embedded in card capable of providing long passwords or satisfying challenge/response queries
    » May have display to allow reading of password
    » Or can be plugged in directly; several credit cards now in this category
  – Biometrics
    » Use of one or more intrinsic physical or behavioral traits to identify someone
    » Examples: fingerprint reader, palm reader, retinal scan
    » Becoming quite a bit more common
• Two-factor authentication: use two or more types of authentication
• What else?

Timing Attacks: Tenex Password Checking
• Tenex – early 70’s, BBN
  – Most popular system at universities before UNIX
  – Thought to be very secure; gave “red team” all the source code and documentation (want code to be publicly available, as in UNIX)
  – In 48 hours, they figured out how to get every password in the system
• Here’s the code for the password check:
  for (i = 0; i < 8; i++)
      if (userPasswd[i] != realPasswd[i])
          goto error;
• How many combinations of passwords?
  – 256^8?
  – Wrong!

Defeating Password Checking
• Tenex used VM, and it interacts badly with the above code
  – Key idea: force page faults at inopportune times to break passwords quickly
• Arrange 1st char in string to be last char in page, rest on next page
  – Then arrange for page with 1st char to be in memory, and rest to be on disk (e.g., reference lots of other pages, then reference 1st page)
    a | aaaaaa
    (page in memory | page on disk)
• Time password check to determine if first character is correct!
  – If fast, 1st char is wrong
  – If slow, 1st char is right: page fault, one of the others wrong
  – So try all first characters, until one is slow
  – Repeat with first two characters in memory, rest on disk
• Only 256 × 8 attempts to crack passwords
  – Fix is easy: don’t stop until you look at all the characters
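The Tenex attack above can be simulated without virtual memory: what the page fault leaked was effectively “how many characters did the check examine before giving up”. In this sketch (my own simulation, not Tenex code) that count stands in for the timing/page-fault side channel:

```python
def chars_examined(real, guess):
    """Simulated early-exit password check; returns how many characters
    were examined (the side channel a page fault exposed on Tenex)."""
    n = 0
    for a, b in zip(real, guess):
        n += 1
        if a != b:
            break
    return n

def crack(real, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Recover the password one position at a time: the guess whose next
    character is correct makes the check examine one character more."""
    guess = ""
    for pos in range(len(real)):
        for ch in alphabet:
            probe = guess + ch + "a" * (len(real) - pos - 1)
            if chars_examined(real, probe) > pos + 1 or probe == real:
                guess += ch
                break
    return guess
```

This is why the attack needs only alphabet-size × length attempts (256 × 8 on Tenex) instead of 256^8: each position is confirmed independently.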

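Modern systems defend against both problems discussed above, stolen password files and early-exit timing, by storing only a salted slow hash and comparing without early exit. A minimal sketch using Python’s standard library (`pbkdf2_hmac` and `compare_digest` are real stdlib calls; the record layout is my own):

```python
import hashlib, hmac, os

def make_record(password, iterations=100_000):
    """Store only (salt, iteration count, slow hash), never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def check(password, record):
    salt, iterations, digest = record
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # compare_digest examines every byte: no Tenex-style early exit to time
    return hmac.compare_digest(candidate, digest)
```

The salt defeats precomputed tables, the iteration count slows brute force, and the constant-time comparison removes the per-character side channel.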
Authorization: Who Can Do What?
• How do we decide who is authorized to do actions in the system?
• Access Control Matrix: contains all permissions in the system
  – Resources across top
    » Files, Devices, etc.
  – Domains in columns
    » A domain might be a user or a group of permissions
    » E.g. above: User D3 can read F2 or execute F3
  – In practice, table would be huge and sparse!
• Two approaches to implementation
  – Access Control Lists: store permissions with each object
    » Still might be lots of users!
    » UNIX limits each file to: r,w,x for owner, group, world
    » More recent systems allow definition of groups of users and permissions for each group
  – Capability List: each process tracks objects it has permission to touch
    » Consider page table: each process has list of pages it has access to, not each page has list of processes
    » Popular in the past; idea out of favor today

Authorization Continued
• Principle of least privilege: programs, users, and systems should get only enough privileges to perform their tasks
  – Very hard to manage in practice
    » How do you figure out what the minimum set of privileges needed to run your programs is?
  – People often run at higher privilege than necessary
    » Such as the “administrator” privilege under Windows or “root” under Unix
• What form does this privilege take?
  – A set of Capabilities?
    » Give a user the minimal set of possible access
    » Like giving a minimal set of physical keys to someone
  – Hand-craft a special user for every task?
    » Look in your password file: Linux does this all the time
    » Custom users and groups for particular tasks

Enforcement
• Enforcer checks passwords, ACLs, etc.
  – Makes sure that only authorized actions take place
  – Bugs in enforcer ⇒ things for malicious users to exploit
• Normally, in UNIX, superuser can do anything
  – Because of coarse-grained access control, lots of stuff has to run as superuser in order to work
  – If there is a bug in any one of these programs, you lose!
• Paradox
  – Bullet-proof enforcer
    » Only known way is to make enforcer as small as possible
    » Easier to make correct, but simple-minded protection model
  – Fancy protection
    » Tries to adhere to principle of least privilege
    » Really hard to get right
• Same argument for Java or C++: what do you make private vs public?
  – Hard to make sure that code is usable but only necessary modules are public
  – Pick something in the middle? Get bugs and weak protection!

Mandatory Access Control (MAC)
• Mandatory Access Control (MAC)
  – “A type of access control by which the operating system constrains the ability of a subject or initiator to access or generally perform some sort of operation on an object or target.” (From Wikipedia)
  – Subject: a process or thread
  – Object: files, directories, TCP/UDP ports, etc.
  – Security policy is centrally controlled by a security policy administrator: users not allowed to operate outside the policy
  – Examples: SELinux, HiStar, etc.
• Contrast: Discretionary Access Control (DAC)
  – Access restricted based on the identity of subjects and/or groups to which they belong
  – Controls are discretionary: a subject with a certain access permission is capable of passing that permission on to any other subject
  – Standard UNIX model
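The two implementations of the access-control matrix described above, per-object ACLs versus per-domain capability lists, answer the same question from opposite sides. A hypothetical in-memory sketch (objects F2/F3 and domain D3 follow the lecture’s example; the data layout is my own):

```python
# ACL view: permissions stored with each object (a column of the matrix)
acl = {
    "F2": {"D3": {"read"}},
    "F3": {"D3": {"execute"}},
}

# Capability view: permissions stored with each domain (a row of the matrix)
capabilities = {
    "D3": {("F2", "read"), ("F3", "execute")},
}

def allowed_by_acl(domain, obj, op):
    return op in acl.get(obj, {}).get(domain, set())

def allowed_by_cap(domain, obj, op):
    return (obj, op) in capabilities.get(domain, set())
```

ACLs make it cheap to answer “who can touch this object?” (revocation is local to the object), while capability lists make it cheap to answer “what can this process touch?”, exactly the page-table analogy in the slide.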

Isolate Information Flow (HiStar)
[Figure: entity A with data labeled LA may communicate with entity B labeled LB only if LA ⊑ LB]
• Mandatory Access Control on entities (files, processes, …)
  – Labels are sets of pairs of (Category, Level):
    Lx = { (c1,l1), (c2,l2), …, ldefault }
    » Think of levels as a “security clearance” (special declassification level “*”)
    » Labels can be compared: L1 ⊑ L2 if ∀h, L1(h) ≤ L2(h)
    » “*” treated specially: lower than anything on the left of ⊑ and higher than anything on the right
  – Communication from A to B allowed only if LA ⊑ LB
    » i.e., only if B’s label has equivalent or higher clearance in every category than A’s label

HiStar Virus Scanner Example
• Bob’s files marked as {br3, bw0, 1}
• User login for Bob creates process {br*, bw*, 1}
  – Launches wrapper program which allocates category v
• Wrapper launches scanner with taint v3
  – Temp directory marked {br3, v3, 1}
  – Cannot write Bob’s files, since they are less tainted (1) in category v than the scanner is (which is 3)
  – Scanner can read from virus DB, but cannot write to anything except through the wrapper program (which decides how to declassify information tagged with v)

SELinux: Security-Enhanced Linux
• SELinux: a Linux feature that provides the mechanisms for access control policies including MAC
  – A set of kernel modifications and user-space tools added to various Linux distributions
  – Separates enforcement of security decisions from policy
  – Integrated into mainline Linux kernel since version 2.6
• Originally started by the Information Assurance Research Group of the NSA, working with Secure Computing Corporation
• Security labels: tuple of user:role:domain
  – SELinux assigns a three-string context consisting of a user, role, and domain (or type) to every user and process
  – Files, network ports, and hardware are also labeled with SELinux contexts of user:role:type
  – Usually all real users share the same SELinux user (“user_u”)
• Policy
  – A set of rules specifies which operations can be performed by an entity with a given label on an entity with a given label
  – Also, policy specifies which domain transitions can occur

SELinux Domain-Type Enforcement
• Each object is labeled by a type
  – Object semantics
  – Example:
    » /etc/shadow → etc_t
    » /etc/rc.d/init.d/httpd → httpd_script_exec_t
• Objects are grouped by object security classes
  – Such as files, sockets, IPC channels, capabilities
  – The security class determines what operations can be performed on the object
• Each subject (process) is associated with a domain
  – E.g., httpd_t, sshd_t, sendmail_t
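Returning to the HiStar label comparison above: the “can A flow to B” check (LA ⊑ LB, with “*” lowest on the left and highest on the right) can be sketched directly. This is my own simplified encoding, assuming labels are a dict of per-category levels plus a default level:

```python
def level(label, category, side):
    """Level of a category in a label.  '*' compares as the lowest value
    when on the left of the flow check and the highest when on the right."""
    l = label["cats"].get(category, label["default"])
    if l == "*":
        return float("-inf") if side == "left" else float("inf")
    return l

def flows_to(la, lb):
    """LA flows to LB iff, for every category (including the defaults),
    A's level is <= B's level."""
    cats = set(la["cats"]) | set(lb["cats"]) | {None}  # None: default level
    return all(level(la, c, "left") <= level(lb, c, "right") for c in cats)
```

With the lecture’s labels, the scanner (tainted v3) can read Bob’s files (v at the default level 1) but the reverse flow, writing them, is refused, which is exactly the containment the wrapper relies on.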

Example
• Execute the command “ls -Z /usr/bin/passwd”
  – This will produce the output:
    -r-s--x--x root root system_u:object_r:passwd_exec_t /usr/bin/passwd
  – Using this provided information, we can then create rules to have a domain transition.
• Three rules are required to give the user the ability to do a domain transition to the passwd program:
  – allow user_t passwd_exec_t : file {getattr execute};
    » Lets user_t execute an execve() system call on passwd_exec_t
  – allow passwd_t passwd_exec_t : file entrypoint;
    » This rule provides entrypoint access to the passwd_t domain; entrypoint defines which executable files can “enter” a domain
  – allow user_t passwd_t : process transition;
    » The original type (user_t) must have transition permission to the new type (passwd_t) for the domain transition to be allowed

Limitations of the Type Enforcement Model
• Results in very large policies
  – Hundreds of thousands of rules for Linux
  – Difficult to understand
• Using only programs, but not information flow tracking, cannot protect against certain attacks
  – Consider for example: httpd → shell → load kernel module

Data-Centric Access Control (DCAC?)
• Problem with many current models:
  – If you break into OS ⇒ data is compromised
  – In reality, it is the data that matters; hardware is somewhat irrelevant (and ubiquitous)
• Data-Centric Access Control (DCAC)
  – I just made this term up, but you get the idea
  – Protect data at all costs; assume that software might be compromised
  – Requires encryption and sandboxing techniques
  – If hardware (or virtual machine) has the right cryptographic keys, then data is released
• All of the previous authorization and enforcement mechanisms reduce to key distribution and protection
  – Never let decrypted data or keys outside sandbox
  – Examples: use of TPM, virtual machine mechanisms

Recall: Authentication in Distributed Systems
• What if identity must be established across network?
  [Figure: “PASS: gina” sent across the network in the clear]
  – Need way to prevent exposure of information while still proving identity to remote system
  – Many of the original UNIX tools sent passwords over the wire “in clear text”
    » E.g.: telnet, ftp, yp (yellow pages, for distributed login)
    » Result: snooping programs widespread
• What do we need? Cannot rely on physical security!
  – Encryption: privacy, restrict receivers
  – Authentication: remote authenticity, restrict senders

Recall: Private Key Cryptography
• Private Key (Symmetric) Encryption:
  – Single key used for both encryption and decryption
• Plaintext: unencrypted version of message
• Ciphertext: encrypted version of message
[Figure: plaintext encrypted with the key, ciphertext crosses the insecure channel past a spy, and is decrypted with the same key at the receiver]
• Important properties
  – Can’t derive plaintext from ciphertext (decode) without access to key
  – Can’t derive key from plaintext and ciphertext
  – As long as password stays secret, get both secrecy and authentication
• Symmetric Key Algorithms: DES, Triple-DES, AES

Recall: Public Key Encryption Details
• Idea: Kpublic can be made public, keep Kprivate private
[Figure: Alice and Bob each hold a private key (Aprivate, Bprivate) and publish the matching public key (Apublic, Bpublic); messages cross an insecure channel]
• Gives message privacy (restricted receiver):
  – Public keys (secure destination points) can be acquired by anyone/used by anyone
  – Only person with private key can decrypt message
• What about authentication?
  – Use combination of private and public key
  – Alice→Bob: [(I’m Alice)^Aprivate Rest of message]^Bpublic
  – Provides restricted sender and receiver
• But: how does Alice know that it was Bob who sent her Bpublic? And vice versa…

Recall: Secure Hash Function
[Figure: a hash function maps “Fox” to digest DFCD3454BBEA788A751A696C24D97009CA992D17, and “The red fox runs across the ice” to 52ED879E70F71D926EB6957008E03CE4CA6945D3]
• Hash Function: short summary of data (message)
  – For instance, h1 = H(M1) is the hash of message M1
    » h1 fixed length, despite size of message M1
    » Often, h1 is called the “digest” of M1
• Hash function H is considered secure if
  – It is infeasible to find M2 with h1 = H(M2); i.e., can’t easily find another message with same digest as a given message
  – It is infeasible to locate two messages, m1 and m2, which “collide”, i.e. for which H(m1) = H(m2)
  – A small change in a message changes many bits of digest / can’t tell anything about message given its hash

Use of Hash Functions
• Several standard hash functions:
  – MD5: 128-bit output
  – SHA-1: 160-bit output; SHA-256: 256-bit output
• Can we use hashing to securely reduce load on server?
  – Yes. Use a series of insecure mirror servers (caches)
  – First, ask server for digest of desired file
    » Use secure channel with server
  – Then ask mirror server for file
    » Can be insecure channel
    » Check digest of result and catch faulty or malicious mirrors
[Figure: client asks server “Read File X”, server replies “Here is hx = H(X)”; client then reads X from an insecure mirror and verifies the digest]
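The mirror-verification protocol above is a few lines with a real hash function. A minimal sketch using Python’s `hashlib` (the function names and error handling are my own):

```python
import hashlib

def publish(data):
    """Origin server, over the secure channel: hand out the digest of X."""
    return hashlib.sha256(data).hexdigest()

def fetch_via_mirror(mirror_data, trusted_digest):
    """Client: accept the mirror's bytes only if they hash to the digest
    obtained from the trusted server; the mirror itself is untrusted."""
    if hashlib.sha256(mirror_data).hexdigest() != trusted_digest:
        raise ValueError("mirror returned corrupted or malicious data")
    return mirror_data
```

Only the short digest needs the secure channel; the bulk transfer can go over any untrusted cache, because a faulty or malicious mirror is caught by the final comparison.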

Signatures/Certificate Authorities
• Can use Xpublic for person X to define their identity
  – Presumably they are the only ones who know Xprivate
  – Often, we think of Xpublic as a “principal” (user)
• Suppose we want X to sign message M?
  – Use private key to encrypt the digest, i.e., H(M)^Xprivate
  – Send both M and its signature:
    » Signed message = [M, H(M)^Xprivate]
  – Now, anyone can verify that M was signed by X
    » Simply decrypt the digest with Xpublic
    » Verify that result matches H(M)
• Now: how do we know that the version of Xpublic that we have is really from X?
  – Answer: Certificate Authority
    » Examples: Verisign, Entrust, etc.
  – X goes to organization, presents identifying papers
    » Organization signs X’s key: [ Xpublic, H(Xpublic)^CAprivate ]
    » Called a “Certificate”
  – Before we use Xpublic, ask X for certificate verifying key
    » Check that signature over Xpublic was produced by trusted authority
• How do we get keys of certificate authority?
  – Compiled into your browser, for instance!

How to Perform Authorization for Distributed Systems?
[Figure: clients and servers spread across different authorization domains]
• Issues: are all user names in the world unique?
  – No! They only have a small number of characters
    » kubi@mit.edu ≠ kubitron@lcs.mit.edu ≠ kubitron@cs.berkeley.edu
    » However, someone thought their friend was kubi@mit.edu and I got very private email intended for someone else…
  – Need something better, more unique to identify a person
• Suppose you want to connect with any server at any time?
  – Need an account on every machine! (possibly with different user name for each account)
  – OR: need to use something more universal as identity
    » Public keys! (called “principals”)
    » People are their public keys

Distributed Access Control
[Figure: File X on Server 1 (Domain 2) carries a signed ACL; Client 1 in Domain 1 issues a read; a Group ACL with its own key, hash, timestamp, and signature lives on Server 2 (Domain 3). Example ACL for X: owner-signed hash and timestamp with entries such as R: Key: 0x546DFEFA34…, RW: Key: 0x467D34EF83…, RX: Group Key: 0xA2D3498672…]
• Distributed Access Control List (ACL)
  – Contains list of attributes (Read, Write, Execute, etc.) with attached identities (here, public keys)
    » ACLs signed by owner of file, only changeable by owner
    » Group lists signed by group key
  – ACLs can be on different servers than data
    » Signatures allow us to validate them
    » ACLs could even be stored separately from verifiers

Analysis of Previous Scheme
• Positive points:
  – Identities checked via signatures and public keys
    » Client can’t generate request for data unless they have the private key to go with their public identity
    » Server won’t use ACLs not properly signed by owner of file
  – No problems with multiple domains, since identities designed to be cross-domain (public keys domain neutral)
• Revocation:
  – What if someone steals your private key?
    » Need to walk through all ACLs with your key and change them…!
    » This is very expensive
  – Better to have a unique string identifying you that people place into ACLs
    » Then, ask Certificate Authority to give you a certificate matching unique string to your current public key
    » Client request: (request + unique ID)^Cprivate; give server certificate if they ask for it
    » Key compromise ⇒ must distribute “certificate revocation”, since can’t wait for previous certificate to expire
  – What if you remove someone from ACL of a given file?
    » If server caches old ACL, then person retains access!
    » Here, cache inconsistency leads to security violations!
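The sign/verify pattern above, used both for messages and for owner-signed ACLs, can be illustrated with textbook RSA over a digest. This is a toy sketch: the tiny primes are for illustration only, and real systems use 2048+ bit keys with padding schemes such as PSS rather than raw exponentiation:

```python
import hashlib

# Toy RSA key pair (illustration-sized; never use parameters this small)
p, q = 61, 53
n = p * q                            # modulus: 3233
e = 17                               # public exponent (Xpublic = (n, e))
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Xprivate = d)

def digest(msg):
    # reduce the SHA-256 hash mod n so the toy modulus can handle it
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg):
    return pow(digest(msg), d, n)        # H(M)^Xprivate

def verify(msg, sig):
    return pow(sig, e, n) == digest(msg)  # "decrypt" with Xpublic, compare H(M)
```

Signing the digest rather than the whole message is what keeps signatures short; a certificate is simply this same operation applied by the CA to [Xpublic] instead of to M.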

Analysis Continued
• Who signs the data?
  – Or: how does client know they are getting valid data?
  – Signed by server?
    » What if server compromised? Should client trust server?
  – Signed by owner of file?
    » Better, but now only owner can update file!
    » Pretty inconvenient!
  – Signed by group of servers that accepted latest update?
    » If must have signatures from all servers ⇒ safe, but one bad server can prevent update from happening
    » Instead: ask for a threshold number of signatures
    » Byzantine agreement can help here
• How do you know that data is up-to-date?
  – Valid signature only means data is a valid older version
  – Freshness attack:
    » Malicious server returns old data instead of recent data
    » Problem with both ACLs and data
    » E.g.: you just got a raise, but enemy breaks into a server and prevents payroll from seeing latest version of update
  – Hard problem
    » Needs to be fixed by invalidating old copies or having a trusted group of servers (Byzantine agreement?)

Distributed Decision Making
• Why is distributed decision making desirable?
  – Fault tolerance!
  – Group of machines comes to decision even if one or more fail
    » Simple failure mode called “failstop” (is this realistic?)
  – After decision made, result recorded in multiple places
• Two-Phase Commit protocol does this
  – Stable log on each machine tracks whether commit has happened
    » If a machine crashes, when it wakes up it first checks its log to recover state of world at time of crash
  – Prepare Phase:
    » The global coordinator requests that all participants promise to commit or roll back the transaction
    » Participants record promise in log, then acknowledge
    » If anyone votes to abort, coordinator writes “Abort” in its log and tells everyone to abort; each records “Abort” in log
  – Commit Phase:
    » After all participants respond that they are prepared, the coordinator writes “Commit” to its log
    » Then asks all nodes to commit; they respond with ack
    » After receiving acks, coordinator writes “Got Commit” to log
  – Log helps ensure all machines either commit or don’t commit

Distributed Decision Making Discussion (Con’t)
• Undesirable feature of Two-Phase Commit: blocking
  – One machine can be stalled until another site recovers:
    » Site B writes “prepared to commit” record to its log, sends a “yes” vote to the coordinator (site A) and crashes
    » Site A crashes
    » Site B wakes up, checks its log, and realizes that it has voted “yes” on the update. It sends a message to site A asking what happened. At this point, B cannot decide to abort, because the update may have committed
    » B is blocked until A comes back
  – A blocked site holds resources (locks on updated items, pages pinned in memory, etc.) until it learns the fate of the update
• Alternatives exist, such as “Three-Phase Commit”, which don’t have this blocking problem
• What happens if one or more of the nodes is malicious?
  – Malicious: attempting to compromise the decision making

Byzantine General’s Problem
[Figure: a general orders his lieutenants “Attack!”; a malicious lieutenant relays “Retreat!” instead]
• Byzantine General’s Problem (n players):
  – One General
  – n-1 Lieutenants
  – Some number of these (f) can be insane or malicious
• The commanding general must send an order to his n-1 lieutenants such that:
  – IC1: All loyal lieutenants obey the same order
  – IC2: If the commanding general is loyal, then all loyal lieutenants obey the order he sends
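The two phases above can be sketched as a minimal in-memory simulation, where each node’s “stable log” is just a list. This is my own simplification: it ignores crashes and message loss and only shows the vote-then-decide structure and what gets logged where:

```python
def two_phase_commit(coordinator_log, participants):
    """Each participant is a dict with a 'can_commit' vote and a 'log'.
    Returns the global decision after both phases."""
    # Prepare phase: collect promises, each durably logged before the ack
    votes = []
    for p in participants:
        vote = "yes" if p["can_commit"] else "no"
        p["log"].append("prepared:" + vote)
        votes.append(vote)
    # Commit phase: coordinator logs the decision first, then tells everyone
    if all(v == "yes" for v in votes):
        coordinator_log.append("Commit")
        for p in participants:
            p["log"].append("Commit")
        coordinator_log.append("Got Commit")   # after all acks
        return "commit"
    coordinator_log.append("Abort")
    for p in participants:
        p["log"].append("Abort")
    return "abort"
```

The ordering of log writes is the whole point: because a participant logs “prepared:yes” before acknowledging, a recovering node can consult its log and know it is no longer free to abort unilaterally, which is exactly the blocking scenario described in the next slide.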

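The intuition behind the oral-messages solution can be shown for the smallest solvable case, n = 4 with f = 1: the general sends his order, each lieutenant relays what he heard to the others, and each loyal lieutenant takes the majority. This is a toy sketch of one round of Lamport's OM(1) algorithm with a loyal general and an invented traitor model (names and the lying pattern are made up for illustration):

```python
from collections import Counter

# Toy round of the oral-messages algorithm OM(1): n = 4 players
# (1 general + 3 lieutenants), at most f = 1 traitor among the
# lieutenants, so n > 3f holds.

def majority(values):
    return Counter(values).most_common(1)[0][0]

def om1(general_value, traitor=None):
    lieutenants = ["L1", "L2", "L3"]
    # Round 1: the (loyal) general sends his order to each lieutenant
    received = {l: general_value for l in lieutenants}
    # Round 2: each lieutenant relays what it received to the others
    relayed = {l: {} for l in lieutenants}
    for sender in lieutenants:
        for dest in lieutenants:
            if dest == sender:
                continue
            value = received[sender]
            if sender == traitor:
                # The traitor lies inconsistently to different peers
                value = "retreat" if dest == "L1" else "attack"
            relayed[dest][sender] = value
    # Decision: majority over own value plus the relayed values
    return {l: majority([received[l]] + list(relayed[l].values()))
            for l in lieutenants if l != traitor}

print(om1("attack", traitor="L2"))  # both loyal lieutenants choose "attack"
```

Even though the traitor L2 tells L1 "retreat", the two relayed copies of the true order outvote the lie, so IC1 and IC2 both hold. With only n = 3 (one relayed copy), the loyal lieutenant would face a 1-1 tie and could not decide.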
Byzantine General's Problem (Con't)
• Impossibility results:
  – Cannot solve the Byzantine General's Problem with n = 3, because one malicious player can mess things up
    [Figure: with three players, a loyal lieutenant cannot distinguish a traitorous general from a traitorous fellow lieutenant; in both cases it sees "Attack!" from one and "Retreat!" from the other]
  – With f faults, need n > 3f to solve the problem
• Various algorithms exist to solve the problem
  – The original algorithm has a number of messages exponential in n
  – Newer algorithms have message complexity O(n²)
    » One from MIT, for instance (Castro and Liskov, 1999)
• Use of a BFT (Byzantine Fault Tolerance) algorithm
  – Allows multiple machines to make a coordinated decision even if some subset of them (< n/3) are malicious
    [Figure: a request fans out to a group of replicas, which reach a distributed decision]

Trusted Computing
• Problem: can't trust that software is correct
  – Viruses/worms install themselves into the kernel or system without the user's knowledge
  – Rootkit: software tools that conceal running processes, files, or system data, helping an intruder maintain access to a system without the user's knowledge
  – How do you know that software won't leak private information or further compromise the user's access?
• A solution: what if there were a secure way to validate all software running on a system?
  – Idea: compute a cryptographic hash of the BIOS, kernel, crucial programs, etc.
  – Then, if the hashes don't match, you know you have a problem
• Further extension:
  – Secure attestation: the ability to prove to a remote party that the local machine is running correct software
  – Reason: allow a remote user to avoid interacting with a compromised system
• Challenge: how to do this in an unhackable way
  – Must have hardware components somewhere

TCPA: Trusted Computing Platform Alliance
• Founded in 1999: Compaq, HP, IBM, Intel, Microsoft
• Currently more than 200 members
• Changes to platform
  – Extra: Trusted Platform Module (TPM)
  – Software changes: BIOS + OS
• Main properties
  – Secure bootstrap
  – Platform attestation
  – Protected storage
• Microsoft version: Palladium
  – Not quite the same: a more extensive hardware/software system
[Photo: ATMEL TPM chip (used in IBM equipment)]

Trusted Platform Module
• Idea: add a Trusted Platform Module (TPM)
[Diagram: functional units (random number generator, SHA-1 hash, HMAC, RSA encrypt/decrypt, RSA key generation); non-volatile memory (2048-bit Endorsement Key, 2048-bit Storage Root Key, 160-bit Owner Auth Secret); volatile memory (RSA key slots 0–9, PCR-0 through PCR-15, key handles, auth session handles)]
• Cryptographic operations
  – Hashing: SHA-1, HMAC
  – Random number generation
  – Asymmetric key generation: RSA (512, 1024, 2048 bits)
  – Asymmetric encryption/decryption: RSA
  – Symmetric encryption/decryption: DES, 3DES (AES)
• Tamper-resistant (hash and key) storage
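The TPM's hashing primitives and the shape of an attestation reply can be mimicked in ordinary software. This is only an analogy: a real TPM performs these operations inside tamper-resistant hardware and signs quotes with an RSA identity key, whereas the sketch below stands in an HMAC key for that signature (the key, function names, and message layout are invented for illustration).

```python
import hashlib
import hmac
import os

# Software stand-in for TPM primitives: SHA-1 hashing, HMAC, and a
# random-number source. The HMAC key below plays the role of the
# TPM-held identity key (a real TPM would use an RSA signature).

secret_key = os.urandom(20)      # stand-in for the 160-bit auth secret

def measure(component: bytes) -> bytes:
    """SHA-1 measurement of a software component."""
    return hashlib.sha1(component).digest()

def quote(pcr_value: bytes, nonce: bytes) -> bytes:
    """Keyed digest over (nonce, PCR): the shape of an attestation reply."""
    return hmac.new(secret_key, nonce + pcr_value, hashlib.sha1).digest()

pcr = measure(b"BIOS image v1.0")
nonce = os.urandom(16)           # challenger's anti-replay nonce
report = quote(pcr, nonce)
# The challenger recomputes the digest over the PCR value it expects
# and compares in constant time:
assert hmac.compare_digest(report, quote(pcr, nonce))
print(len(report))               # 20-byte SHA-1 digest
```

The nonce is what makes the reply fresh: without it, a compromised machine could replay an old report recorded while the software was still clean.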

TCPA: PCR Reporting Value
• Platform Configuration Registers (PCR-0 through PCR-15)
  – Reset at boot time to a well-defined value
  – The only thing software can do is give a new measured value to the TPM
    » The TPM takes the new value, concatenates it with the old value, then hashes the result together for the new PCR
    [Diagram: the measured value is concatenated with the present PCR value and hashed to produce the extended value]
• Measuring involves hashing components of software
• Integrity reporting: report the value of the PCR
  – Challenge-response protocol: the challenger sends a nonce; the Trusted Platform Agent replies with SignID(nonce, PCR, log), CID

TCPA: Secure Bootstrap
[Diagram: the TPM measures the BIOS boot block, BIOS, option ROMs, OS loader, OS, and applications in turn as each loads; the TPM serves as the root of trust for integrity measurement, for storing values, and for integrity reporting, and each new OS component is measured before it runs]

Implications of TPM Philosophy?
• Could have great benefits
  – Prevent use of malicious software
  – Parts of OceanStore would benefit
• What does "trusted computing" really mean?
  – You are forced to trust the hardware to be correct!
  – Could also mean that the user is not trusted to install their own software
• Many in the security community have talked about potential abuses
  – These are only theoretical, but very possible
    » What if companies prevent users from accessing their websites with a non-Microsoft browser?
    » It is possible to encrypt data and only decrypt it if the software still matches ⇒ could prevent display of .doc files except on Microsoft versions of software
  – Digital Rights Management (DRM):
    » Prevent playing of music/video except on accepted players
    » Selling of CDs that only play 3 times?

Summary
• Mandatory Access Control (MAC)
  – Separate access policy from use
  – Examples: HiStar, SELinux
• Distributed identity
  – Use cryptography (public key, signed by a PKI)
• Distributed storage example
  – Revocation: how to remove permissions from someone?
  – Integrity: how to know whether data is valid
  – Freshness: how to know whether data is recent
  – Software fixing
• Byzantine General's Problem: distributed decision making with malicious failures
  – One general, n−1 lieutenants; some number of them may be malicious (often "f" of them)
  – All non-malicious lieutenants must come to the same decision
  – If the general is not malicious, lieutenants must follow the general
  – Only solvable if n ≥ 3f+1
• OceanStore: distributed storage in an untrusted world
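The PCR "extend" rule from the reporting slide — software can only fold new measurements into a register, never set it directly — can be sketched directly from its definition, PCR_new = SHA1(PCR_old ∥ measurement). The boot-chain component names below are taken from the secure-bootstrap diagram and are illustrative only:

```python
import hashlib

# Sketch of the TPM PCR extend rule:
#   PCR_new = SHA1(PCR_old || measurement)
# Software can only extend a PCR, never write it directly.

def extend(pcr: bytes, measurement: bytes) -> bytes:
    return hashlib.sha1(pcr + measurement).digest()

pcr = b"\x00" * 20               # well-defined reset value at boot
boot_chain = [b"BIOS boot block", b"BIOS", b"OS loader", b"OS"]
for component in boot_chain:
    # Each stage is measured (hashed) and folded into the register
    pcr = extend(pcr, hashlib.sha1(component).digest())

print(pcr.hex())
# Extending is order-sensitive: the same components measured in a
# different order yield a different final PCR value, so the register
# commits to the entire boot sequence, not just its contents.
```

This is why a single 20-byte register can attest to an arbitrarily long boot chain: any change to any stage, or to the order of stages, changes the final value.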