Toorcon 14
- by danx
Toorcon 2012 Information Security Conference
San Diego, CA
http://www.toorcon.org/
Dan Anderson, October 2012
It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference!
Toorcon is an annual conference for people interested in computer security.
This includes the whole range of
hackers, computer hobbyists, professionals, security consultants, press,
law enforcement, prosecutors, FBI, etc.
We're at Toorcon 14—see earlier blogs for some of the previous Toorcons I've attended (back to 2003).
This year's "con" was held at the Westin on Broadway in downtown San Diego, California.
The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers.
Also, I reviewed only some of the talks below, those I attended and that interested me.
MalAndroid—the Crux of Android Infections,
Aditya K. Sood
Programming Weird Machines with ELF Metadata,
Rebecca "bx" Shapiro
Privacy at the Handset: New FCC Rules?,
Valkyrie
Hacking Measured Boot and UEFI,
Dan Griffin
You Can't Buy Security: Building the Open Source InfoSec Program,
Boris Sverdlik
What Journalists Want: The Investigative Reporters' Perspective on Hacking,
Dave Maas & Jason Leopold
Accessibility and Security,
Anna Shubina
Stop Patching, for Stronger PCI Compliance,
Adam Brand
McAfee Secure & Trustmarks — a Hacker's Best Friend,
Jay James & Shane MacDougall
MalAndroid—the Crux of Android Infections
Aditya K. Sood, IOActive, Michigan State PhD candidate
Aditya talked about Android smartphone malware.
There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most have unpatched vulnerabilities.
Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit).
Android protection includes sandboxing, security scanner, app permissions, and screened Android app market.
The Android permission checker has fine-grained resource control and policy enforcement.
Android also includes a static analysis app checker (Bouncer) and a vulnerability checker.
What security problems does Android have?
User-centric security, which depends on the user to grant permission and make smart decisions.
But users don't care or think about malware (they're not aware, not paranoid).
All they want is functionality, extensibility, and mobility.
Android had no "proper" encryption before Android 3.0.
There's no built-in protection against social engineering and web tricks.
Alternative Android app markets are unsafe.
Simply visiting some markets can infect an Android device.
Aditya classified Android Malware types as:
Type A—Apps. These interact with the Android app framework.
For example, a fake Netflix app.
Or Android GoldDream (a game), which stealthily uploads user files to a remote location.
Type K—Kernel. Exploits underlying Linux libraries or kernel
Type H—Hybrid. These
use multiple layers (app framework, libraries, kernel).
These are most commonly used by Android botnets,
which are popular with Chinese botnet authors.
What are the threats from Android malware?
These include:
leak info (contacts),
banking fraud,
corporate network attacks,
malware advertising,
malware "Hackivism" (the promotion of social causes. For example, promiting specific leaders of the Tunisian or Iranian revolutions.
Android malware is frequently "masqueraded";
that is, malware is repackaged inside a legitimate app.
To avoid detection, the hidden malware is not unwrapped until runtime.
The malware payload can be hidden in, for example, PNG files.
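The talk didn't show code, but appending a payload after a PNG's final chunk is one classic way to smuggle data inside a valid image. A minimal sketch of spotting such trailing bytes (the chunk layout follows the PNG specification; the sample bytes are invented for illustration):

```python
# Spot bytes appended after a PNG's IEND chunk -- one simple way a
# payload can ride inside an otherwise valid image. Illustrative
# sketch only; not from the talk.
def trailing_payload(png_bytes: bytes) -> bytes:
    """Return any bytes that follow the final IEND chunk."""
    if not png_bytes.startswith(b"\x89PNG\r\n\x1a\n"):
        raise ValueError("not a PNG file")
    end = png_bytes.rfind(b"IEND")
    if end == -1:
        raise ValueError("no IEND chunk")
    return png_bytes[end + 4 + 4:]  # skip chunk type and 4-byte CRC

# Minimal PNG skeleton (signature + empty IEND chunk) with a payload appended:
sample = (b"\x89PNG\r\n\x1a\n"
          + b"\x00\x00\x00\x00IEND\xaeB`\x82"
          + b"#!/system/bin/sh evil")
print(trailing_payload(sample))
```

A real unpacker would then decrypt or execute the extracted bytes at runtime; image viewers ignore anything after IEND, so the file still displays normally.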
Less common are Android bootkits; there aren't many around.
They hijack the Android init framework, altering system programs and daemons, and then delete themselves.
For example, the DKF Bootkit (China).
Android App Problems:
no code signing! all self-signed
native code execution
permission sandbox — all or none
alternate market places
no robust Android malware detection at network level
delayed patch process
Programming Weird Machines with ELF Metadata
Rebecca "bx" Shapiro, Dartmouth College, NH
https://github.com/bx/elf-bf-tools
@bxsays on twitter
Definitions.
"ELF" is an executable file format used in linking and loading executables
(on UNIX/Linux-class machines).
"Weird machine" uses undocumented computation sources
(I think of them as unintended virtual machines).
Some examples of "weird machines" are those that:
return to a weird location,
perform SQL injection,
corrupt the heap.
Bx then talked about using ELF metadata as (an unintended) "weird machine".
Some ELF background:
A compiler takes source code and generates an ELF object file (hello.o).
A static linker makes an ELF executable from the object file.
A runtime linker/loader takes the ELF executable and loads and relocates it in memory.
The ELF file has symbols to relocate functions and variables.
ELF has two relocation tables—one at link time and another one at loading time:
.rela.dyn (link time) and .dynsym (dynamic table).
GOT: Global Offset Table of addresses for dynamically-linked functions.
PLT: Procedure Linkage Tables—works with GOT.
The memory layout of a process (not the ELF file) is, in order:
program (+ heap), dynamic libraries, libc, ld.so, stack
(which includes the dynamic table loaded into memory)
For ELF, the "weird machine" is found and exploited in the loader.
ELF can be crafted to execute viruses
by tricking the runtime into executing interpreted "code" in the ELF symbol table.
One can inject parasitic "code" without modifying the actual ELF code portions.
Think of the ELF symbol table as an "assembly language" interpreter.
It has these elements:
instructions: Add, move, jump if not 0 (jnz)
Think of symbol table entries as "registers"
symbol table value is "contents"
immediate values are constants
direct values are addresses (e.g., 0xdeadbeef)
move instruction: is a relocation table entry
add instruction: relocation table "addend" entry
jnz instruction: takes multiple relocation table entries
The ELF weird machine
exploits the loader by relocating relocation table entries.
The loader will go on forever until told to stop.
It stores state on the stack at the "end" and
uses IFUNC table entries (which contain function pointer addresses).
The ELF weird machine implements "Brainfu*k" (BF), which has:
8 instructions:
pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print
3 registers
Bx showed example BF source code that implemented a Turing machine printing "hello, world".
More interesting was the next demo, where bx modified ping.
Ping runs suid as root, but quickly drops privilege.
BF modified the loader to disable the library call that drops privilege, so ping remained root.
Then BF modified the ping -t argument to execute the -t filename as root.
It's best to show what this modified ping does with an example:
$ whoami
bx
$ ping localhost -t backdoor.sh # executes backdoor
$ whoami
root
$
The modified code increased from 285948 bytes to 290209 bytes.
A BF tool compiles a BF "executable" by modifying the symbol table of an existing ELF executable.
The tool modifies the .dynsym and .rela.dyn tables, but not code or data.
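To make the relocation-entry "instructions" concrete, here's a sketch (in Python, not bx's actual tooling) that decodes one Elf64_Rela record; the field layout comes from the System V ABI, and the sample bytes are invented:

```python
# Decode one Elf64_Rela relocation record -- the "move with addend"
# primitive the weird machine is built from. Layout per the System V
# ABI for 64-bit ELF; sample values are made up for illustration.
import struct

ELF64_RELA = struct.Struct("<QQq")   # r_offset, r_info, r_addend

def decode_rela(record: bytes) -> dict:
    r_offset, r_info, r_addend = ELF64_RELA.unpack(record)
    return {
        "offset": r_offset,           # where the loader writes (the "destination")
        "symbol": r_info >> 32,       # symbol-table index (the "source register")
        "type": r_info & 0xffffffff,  # relocation type, e.g. R_X86_64_64 = 1
        "addend": r_addend,           # constant added in (the "add" instruction)
    }

# An invented entry: write symbol 5's value + 8 to address 0x601018.
rec = struct.pack("<QQq", 0x601018, (5 << 32) | 1, 8)
print(decode_rela(rec))
```

Chaining many such records, with the relocation entries rewriting each other, is what turns the loader into an interpreter.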
Privacy at the Handset: New FCC Rules?
"Valkyrie" (Christie Dudley, Santa Clara Law JD candidate)
Valkyrie talked about mobile handset privacy.
Some background:
Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers.
Franken asked the FCC to find out what obligations carriers think they have to protect privacy.
The carriers' response was that they are doing just fine with self-regulation—no worries!
Carriers need to collect data, such as missed calls, to maintain network quality.
But carriers also sell data for marketing.
Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties).
The data sold is not individually identifiable and is aggregated.
But Verizon recommends, as an aggregation workaround, "recollating" the data with other databases to identify customers indirectly.
The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007.
Also, the carriers say mobile phone privacy is an FTC responsibility (not the FCC's).
FTC is trying to improve mobile app privacy,
but FTC has no authority over carrier / customer relationships.
As a side note, Apple iPhones are unique in that carriers have extra control over them that they don't have with other smartphones. As a result, iPhones may be more regulated.
Who are the consumer advocates?
Everyone knows EFF, but EPIC (Electronic Privacy Info Center), although more obscure, is more relevant.
What to do?
Carriers must be accountable. Opt-in and opt-out at any time.
Carriers need an incentive to grant users control, for those who want it,
by holding them liable and responsible for breaches on their watch.
Location information should be added to current CPNI privacy protections, and should require a "pen/trap" judicial order to obtain
(which would still be a lower standard than the 4th Amendment).
Politics are on a pro-privacy swing now, with many senators and the White House.
There will probably be new regulation soon, and enforcement will be a problem, but consumers will still have some benefit.
Hacking Measured Boot and UEFI
Dan Griffin, JWSecure, Inc., Seattle, @JWSdan
Dan talked about hacking measured UEFI boot.
First some terms:
UEFI is a boot technology that is replacing BIOS (it has whitelisting and blacklisting).
UEFI protects devices against rootkits.
TPM is a hardware security device that stores hashes and hardware-protected keys.
"Secure boot" can control, at the firmware level, which boot images can boot.
"Measured boot" is an OS feature that tracks hashes (from the BIOS, boot loader, kernel, and early drivers).
"Remote attestation" allows remote validation and control based on policy on a remote attestation server.
Microsoft is pushing TPM (required by Windows 8), but Google is not.
Intel's TianoCore is the only open-source UEFI implementation.
Dan has Measured Boot Tool at
http://mbt.codeplex.com/
with a demo where you can also view TPM data.
TPM support is already present on enterprise-class machines.
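As an aside on viewing TPM data: on Linux machines with a TPM 1.2 and a supporting kernel, the accumulated boot measurements appear as text in /sys/class/tpm/tpm0/pcrs. A small parser sketch (the sample input is invented and truncated; Dan's Measured Boot Tool is separate Windows-side tooling):

```python
# Parse the PCR listing a TPM 1.2 exposes via Linux sysfs
# (/sys/class/tpm/tpm0/pcrs), where each line looks like
# "PCR-00: A5 F1 ...". Illustrative sketch; sample digests are
# shortened -- real PCRs are 20-byte SHA-1 values.
def parse_pcrs(text: str) -> dict:
    """Map PCR index -> hex digest string."""
    pcrs = {}
    for line in text.splitlines():
        label, _, digest = line.partition(":")
        if not label.startswith("PCR-"):
            continue
        pcrs[int(label[4:])] = digest.strip().replace(" ", "")
    return pcrs

sample = "PCR-00: A5 F1 04 0D\nPCR-01: 00 00 00 00"
print(parse_pcrs(sample))
```

Remote attestation works by having the TPM sign a quote over these PCR values, which the attestation server compares against policy.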
UEFI Weaknesses.
UEFI toolkits are evolving rapidly, but UEFI has weaknesses:
it assumes the user is an ally
it trusts the TPM implicitly, and trusts that it's attached to the computer
the hibernation file is unprotected (disk encryption protects against this)
protection is migrating from hardware to firmware
delays in patching and whitelist updates
will UEFI really be adopted by the mainstream? (smartphone hardware support, bank support, apathetic consumer support)
You Can't Buy Security: Building the Open Source InfoSec Program
Boris Sverdlik,
ISDPodcast.com
co-host
Boris talked about problems typical with current security audits.
"IT Security" is an oxymoron—IT exists to enable buiness, uptime, utilization, reporting, but don't care about security—IT has conflict of interest.
There's no Magic Bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)).
Regulations don't make you secure.
The cloud is not secure (because of shared data and admin access).
Defense and pen testing is not sexy.
Auditors are not the solution (security is not a checklist); what's needed is experience, adaptability, and soft skills.
Step 1: First thing is to
Google and learn the company end-to-end before you start.
Get to know the management team (not IT team), meet as many people as you can.
Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g., SLE = AV x EF).
Learn different Business Units, legal/regulatory obligations,
learn the business and where the money is made,
verify the company is protected from script kiddies (easy),
learn sensitive information (IP, internal use only),
and
start with low-hanging fruit (customer service reps and social engineering).
Step 2:
Policies. Keep policies short and relevant.
Generic SANS "security" boilerplate policies don't make sense and are not followed.
Focus on acceptable use, data usage, communications, physical security.
Step 3: Implementation: keep it simple, stupid.
Open source, although useful, is not free (implementation cost).
Access controls
with authentication & authorization for local and remote access.
MS Windows has it, otherwise use OpenLDAP, OpenIAM, etc.
Application security
Everyone tries to reinvent the wheel—use existing static analysis tools.
Review high-risk apps and major revisions.
Don't run different risk level apps on same system.
Assume host/client compromised and use app-level security control.
Network security
VLAN != segregated, because there are too many workarounds.
Use explicit firewall rules,
active and passive network monitoring (snort is free),
disallow end user access to production environment,
have a proxy instead of direct Internet access.
Also, SSL certificates are not good two-factor auth
and SSL does not mean "safe."
Operational Controls
Have change, patch, asset, & vulnerability management
(OSSI is free).
For change management, always review code before pushing to production
For logging, have centralized security logging for business-critical systems,
separate security logging from administrative/IT logging, and
lock down the logs (as they have everything).
Monitor with OSSIM (open source).
Use intrusion detection, but not just to fulfill a checkbox:
build rules from a whitelist perspective (snort).
OSSEC has 95% of what you need.
Vulnerability management is a QA function when done right:
OpenVAS and Seccubus are free.
Security awareness
The reality is users will always click everything.
Build real awareness, not a compliance-driven checkbox,
and have it integrated into the culture.
Pen test by crowdsourcing, and test with logging.
COSSP
http://www.cossp.org/
- Comprehensive Open Source Security Project
What Journalists Want: The Investigative Reporters' Perspective on Hacking
Dave Maas, San Diego CityBeat
Jason Leopold, Truthout.org
The difference between hackers and investigative journalists:
For hackers, the motivation varies, but the method is the same, with technological specialties.
For investigative journalists, it's about one thing—The Story, and they need broad info-gathering skills.
J-School in 60 Seconds:
Generic formula: a person or issue of public interest, new info, or a new angle.
Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence.
Media awareness of hackers and trends:
journalists are becoming extremely aware of hackers with
congressional debates (privacy, data breaches),
demand for data-mining journalists,
use of coding and web development by journalists,
and
journalists busted for hacking (Murdoch).
Info gathering by investigative journalists includes:
Public records laws. Federal Freedom of Information Act (FOIA) is good, but slow.
California Public Records Act is a lot stronger.
FOIA takes forever because of foot-dragging—it helps to be specific.
Often need to sue (especially FBI).
CPRA is faster, and requests can be vague.
Dumps and leaks (a la Wikileaks)
Journalists want:
leads,
protection for themselves and their sources,
and
adapting tools for news gathering (Google hacking).
Anonymity is important to whistleblowers.
They want no digital footprint left behind (e.g., email, web log).
They don't trust encryption, want to feel safe and secure.
Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it.
Accessibility and Security or:
How I Learned to Stop Worrying and Love the Halting Problem
Anna Shubina, Dartmouth College
Anna talked about how accessibility and security are related.
Here, accessibility means accessibility of digital content (not real-world accessibility),
and for our purposes mostly refers to blind users and screen readers.
Accessibility is about parsing documents, as are many security issues.
"Rich" executable content causes accessibility to fail, and often causes security to fail.
For example, MS Word is an executable format, not a document-exchange format, making it more dangerous than PDF or HTML.
Accessibility is often the first and maybe only sanity check with parsing.
They have no choice because someone may want to read what you write.
Google, for example, is very particular about which web browser you use and is bad at supporting other browsers. It uses JavaScript instead of links, often requiring mouseover to display content.
PDF is a security nightmare: an executable format with embedded Flash, JavaScript, etc., and 15 million lines of code.
Google Chrome doesn't handle PDF correctly, causing several security bugs.
PDF has an accessibility checker and PDF tagging, to help with accessibility.
But no PDF checker checks for incorrect tags, untagged content, or validates lists or tables. None check executable content at all.
The "Halting Problem" is: can one decide whether a program will ever stop?
The answer, in general, is no (Rice's theorem).
The same holds true for accessibility checkers.
Language-theoretic security says complicated data formats are hard to parse, and checking them completely is unsolvable due to the Halting Problem.
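The argument behind that "no" is Turing's diagonalization, which is short enough to sketch in Python: any claimed halts() oracle is defeated by a program that asks about itself and does the opposite. The oracle body here is a stub, since no real one can exist; this is my illustration, not from the talk.

```python
# Turing's diagonal argument: no total halts() oracle can exist.
# The stub below stands in for the impossible oracle.
def halts(f) -> bool:
    """Hypothetical oracle: True iff f() eventually terminates."""
    raise NotImplementedError("no such oracle can exist")

def paradox():
    # If halts(paradox) claims we halt, loop forever; if it claims
    # we loop, return immediately. Either answer is wrong.
    if halts(paradox):
        while True:
            pass
```

Rice's theorem generalizes this: any non-trivial semantic property of programs (including "renders accessibly" or "contains no malicious behavior") is undecidable, which is why checkers for executable content can only approximate.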
W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust"
Not much help, though, except for "Robust"; but here are some gems:
* all information should be parsable (paraphrasing)
* if not parsable, cannot be converted to alternate formats
* maximize compatibility in new document formats
Executable webpages are bad for security and accessibility. Developers say it's for a better web experience, but is it necessary to stuff web pages with JavaScript for a better experience?
A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content.
A bad example is Google News—hidden scrollbars, guessing user input.
Solutions:
Accessibility and security problems come from same source
Expose "better user experience" myth
Keep your corner of Internet parsable
Remember "Halting Problem"—recognize false solutions (checking and verifying tools)
Stop Patching, for Stronger PCI Compliance
Adam Brand,
protiviti
@adamrbrand,
http://www.picfun.com/
Adam talked about PCI compliance for retail sales.
Take an example: for PCI compliance, Brian (an IT guy) spent 50% of his time, 960 hours a year, patching POS systems in 850 restaurants.
Often applying some patches makes no sense (like fixing a browser vulnerability on a server).
"Scanner worship" is overuse of vulnerability scanners—it gives a warm and fuzzy and it's simple (red or green results—fix reds).
Scanners give a false sense of security.
In reality, breaches from missing patches are uncommon; more common problems are default passwords, cleartext authentication, and misconfiguration (open firewall ports).
Patching Myths:
Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead).
Myth 2: vendor decides what's critical (also PCI §6.1). But §6.2 requires user ranking of vulnerabilities instead.
Myth 3: scan and rescan until it passes. But PCI §11.2.1b says this applies only to high-risk vulnerabilities.
Adam says good recommendations come from NIST 800-40.
Instead use sane patching and focus on what's really important. From NIST 800-40:
Proactive: Use a proactive vulnerability management process: use change control, configuration management, monitor file integrity.
Monitor: start with NVD and other vulnerability alerts, not scanner results.
Evaluate: public-facing system? workstation? internal server? (risk rank)
Decide: on action and timeline
Test: pre-test patches (stability, functionality, rollback) for change control
Install: notify, change control, tickets
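The Evaluate and Decide steps above amount to a risk-ranked timeline rather than a flat 30-day rule. A toy triage in that spirit; the categories, bump rule, and timelines are invented for illustration, not taken from NIST 800-40 or the talk:

```python
# Illustrative risk-based patch triage in the spirit of NIST SP 800-40:
# rank by severity and exposure instead of patching everything in 30
# days. Thresholds and timelines are made up for the example.
TIMELINES = {3: "7 days", 2: "30 days", 1: "next maintenance window"}

def patch_timeline(public_facing: bool, severity: str) -> str:
    """Pick a patch deadline from severity and network exposure."""
    score = {"low": 1, "medium": 2, "high": 3}[severity]
    if public_facing and score < 3:
        score += 1  # internet exposure bumps the priority one level
    return TIMELINES[score]

print(patch_timeline(public_facing=True, severity="medium"))  # → 7 days
```

A real process would also feed in the NVD alerts from the Monitor step and route the decision through change control.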
McAfee Secure & Trustmarks — a Hacker's Best Friend
Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada
"McAfee Secure Trustmark" is a website seal marketed by McAfee.
A website gets this badge if it passes McAfee's remote scanning.
The problem is that the removal of a trustmark acts as a flag that you're vulnerable.
It's easy to spot a status change by viewing the McAfee list on their website or via Google.
"Secure TrustGuard" is similar to McAfee.
Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines.
If their certification image changes to a 1x1 pixel image, then they are no longer certified.
Their scripts take deltas of scans to see what changed daily.
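Their Perl scripts weren't released, but one piece of the check, deciding whether a downloaded seal has been swapped for a 1x1 placeholder, might look like this in Python (the PNG field offsets follow the spec; fetching the image and diffing daily lists are left out):

```python
# Decide whether a site's trustmark seal has been swapped for a 1x1
# placeholder image -- the signal Jay and Shane's scripts watch for.
# Illustrative sketch; their actual Perl tooling was not shown.
import struct

def png_dimensions(png_bytes: bytes):
    """Width and height from a PNG's IHDR chunk (always the first chunk)."""
    if not png_bytes.startswith(b"\x89PNG\r\n\x1a\n"):
        raise ValueError("not a PNG")
    # 8-byte signature, 4-byte length, b"IHDR", then big-endian W and H.
    return struct.unpack(">II", png_bytes[16:24])

def seal_revoked(png_bytes: bytes) -> bool:
    """A 1x1 seal image means certification was pulled."""
    return png_dimensions(png_bytes) == (1, 1)
```

Run against yesterday's and today's site lists, the delta of seal_revoked() results is exactly the "newly vulnerable" feed the talk describes.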
The bottom line is that a change in TrustGuard status is a flag for hackers to attack your site.
The entire idea of seals is silly: you're raising a flag announcing whether you're vulnerable.