Search Results

Search found 725 results on 29 pages for 'privacy'.


  • Best anti-boss tricks to hide your private page navigation on your desktop

    - by systempuntoout
    This question is only slightly related to programming, and it's kinda lame, I know; but I've seen many funny things over the years and I'm looking for new tricks from you. I'm talking about methods to quickly hide or camouflage non-work-related web pages on your desktop when the boss arrives like a ghost/ninja behind your shoulders. I know how frustrating it can be to program hard for ten hours and then get caught by your boss watching XKCD during a 2-minute break. I think the most common anti-boss trick is the evergreen CTRL+TAB, but you have to be fast and your left hand has to be near the keyboard. I've seen pitch-black brightness on an LCD (how can you pretend to program on that?) and browsers custom-sized to fit the little space just below the IDE. My favourite one at the moment is using the FireGestures plugin with Firefox; with a micro gesture you can hide Firefox to your tray in the blink of an eye. Do you have any trick to share?

    Read the article

  • Open-sourcing a web site with active users?

    - by Lars Yencken
    I currently run several research-related web-sites with active users, and these sites use some personally identifying information about these users (their email address, IP address, and query history). Ideally I'd release the code to these sites as open source, so that other people could easily run similar sites, and more importantly scrutinise and replicate my work, but I haven't been comfortable doing so, since I'm unsure of the security implications. For example, I wouldn't want my users' details to be accessed or distributed by a third party who found some flaw in my site, something which might be easy to do with full source access. I've tried going half-way by refactoring the (Django) site into more independent modules, and releasing those, but this is very time consuming, and in practice I've never gotten around to releasing enough that a third party can replicate the site(s) easily. I also feel that maybe I'm kidding myself, and that this process is really no different to releasing the full source. What would you recommend in cases like this? Would you open-source the site and take the risk? As an alternative, would you advertise the source as "available upon request" to other researchers, so that you at least know who has the code? Or would you just apologise to them and keep it closed in order to protect users?

    Read the article

  • Cookie blocked/not saved in IFRAME in Internet Explorer

    - by Piskvor
    I have two websites, let's say they're example.com and anotherexample.net. On anotherexample.net/page.html, I have an IFRAME SRC="http://example.com/someform.asp". That IFRAME displays a form for the user to fill out and submit to http://example.com/process.asp. When I open the form ("someform.asp") in its own browser window, all works well. However, when I load someform.asp as an IFRAME in IE 6 or IE 7, the cookies for example.com are not saved. In Firefox this problem doesn't appear. For testing purposes, I've created a similar setup at http://newmoon.wz.cz/test/page.php. example.com uses cookie-based sessions (and there's nothing I can do about that), so without cookies, process.asp won't execute. How do I force IE to save those cookies? Results of sniffing the HTTP traffic: on the GET /someform.asp response there's a valid per-session Set-Cookie header (e.g. Set-Cookie: ASPKSJIUIUGF=JKHJUHVGFYTTYFY), but on the POST /process.asp request there is no Cookie header at all. Edit3: some AJAX + server-side scripting is apparently capable of sidestepping the problem, but that looks very much like a bug, plus it opens a whole new set of security holes. I don't want my applications to use a combination of a bug and a security hole just because it's easy. Edit: the P3P policy was the root cause, full explanation below.
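
    Since the asker traced the root cause to P3P: IE 6 and IE 7 silently drop cookies set from inside a third-party IFRAME unless the response carries a P3P compact-policy header. Below is a minimal sketch of that fix, written as Express-style JavaScript rather than the asker's classic ASP (any server stack can emit the header; the CP tokens shown are illustrative, not a vetted policy):

    ```javascript
    // Hedged sketch, not the asker's actual code: a tiny Express app standing
    // in for the classic-ASP site. IE 6/7 accept cookies set inside a
    // third-party IFRAME only when the response carries a P3P compact policy.
    const express = require('express');
    const app = express();

    // Emit the compact policy on every response that might set a cookie.
    app.use((req, res, next) => {
      res.set('P3P', 'CP="CAO PSA OUR"'); // token set is illustrative
      next();
    });

    app.get('/someform', (req, res) => {
      res.cookie('ASPSESSION', 'JKHJUHVGFYTTYFY'); // now kept even in the IFRAME
      res.send('<form method="POST" action="/process.asp">...</form>');
    });

    app.listen(3000);
    ```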

    Read the article

  • What makes you trust that a piece of open source software is not malicious?

    - by Daniel DiPaolo
    We developers are in a unique position: we can not only be skeptical about the capabilities provided by open source software, but actively analyze the code, since it is freely available. In fact, one may even argue that open source software developers have a social responsibility to do so, to contribute to the community. But at what point do you as a developer say, "I'd better take a look at what this is doing before I trust it" for any given thing? Is it a matter of trusting code with your personal information? Does it depend on the source you're getting it from? What spurred this question was a post on Hacker News about a JavaScript bookmarklet that supposedly tells you how "exposed" your information on Facebook is, as well as recommending some fixes. I thought for a second, "I'd rather not start blindly running this code over all my (fairly locked down) Facebook information, so let me check it out." The bookmarklet is simple enough, but it calls another JavaScript function which at the time (but not anymore) was highly compressed and undecipherable. That's when I said, "Nope, not gonna do it." So even though I could have verified the original uncompressed JavaScript from the GitHub site, and even saved a local copy to verify and then run without hitting their server, I wasn't going to. It's several thousand lines, and I'm not a total JavaScript guru to begin with. Yet folks are using it anyway; even (supposedly) bright developers. What makes them trust the script? Did they all scrutinize it line by line? Do they know the guy personally and trust him not to do anything bad? Do they just take his word? What makes you trust that a piece of open source software is not malicious?

    Read the article

  • Finding images on the web

    - by Britt
    I sent someone a photo of me, and they replied that this particular photo was all over the web. How do I find out where this photo is, and is there any way I can see if there are other photos of me that someone has shared without my knowledge? I am very worried about this and want to find out where these pictures are. Please help me!

    Read the article

  • How can cookies track users despite the same-origin policy?

    - by user1763930
    This article discusses tactics used by political campaigns: http://www.nytimes.com/2012/10/14/us/politics/campaigns-mine-personal-lives-to-get-out-vote.html The part in question is quoted: "The campaigns have planted software known as cookies on voters' computers to see if they frequent evangelical or erotic Web sites for clues to their moral perspectives. Voters who visit religious Web sites might be greeted with religion-friendly messages when they return to mittromney.com or barackobama.com." How is that possible? I thought all modern browsers had same-origin policy security, where website A doesn't have access to any information about website B, website C, etc. The article makes it sound like a user browses:

    1. presidentialcandidate.com
    2. website2.com
    3. website3.com
    4. website4.com
    5. presidentialcandidate.com

    How can a cookie from visit #1 track user history and be revealed in visit #5?
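
    The trick is that the tracking cookie never crosses origins at all. A hypothetical sketch (illustrative names, not any campaign's real code): every participating site embeds a pixel served from the tracker's own domain, so on each visit the browser sends the tracker's own cookie plus a Referer header naming the embedding page.

    ```javascript
    // Hypothetical tracker at tracker.example. Each participating page embeds
    //   <img src="https://tracker.example/pixel.gif">
    // The browser attaches tracker.example's OWN cookie to that request, plus
    // a Referer header, so the tracker links visits #1 through #5 together.
    // The same-origin policy is never violated: the cookie belongs to the
    // tracker's origin, not to the sites being visited.
    const express = require('express');
    const crypto = require('crypto');
    const app = express();

    const history = new Map(); // visitor id -> pages seen

    app.get('/pixel.gif', (req, res) => {
      const match = (req.headers.cookie || '').match(/vid=([0-9a-f]+)/);
      let id = match ? match[1] : null;
      if (!id) {
        id = crypto.randomBytes(8).toString('hex');
        res.set('Set-Cookie', 'vid=' + id + '; Max-Age=31536000; Path=/');
      }
      history.set(id, (history.get(id) || []).concat(req.headers.referer || 'unknown'));
      // Respond with a 1x1 transparent GIF.
      res.type('gif').send(
        Buffer.from('R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7', 'base64')
      );
    });

    app.listen(3000);
    ```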

    Read the article

  • How can I keep my privacy while owning multiple domain names?

    - by Abby
    I want to own, create, and run a number of domains. I do not want WHOIS to show my name, home address, and home phone to anyone who looks me up. I've already bought a mailbox that I can use for my physical address. But... that doesn't get my name and number question answered. What is the best way to be anonymous yet still be legal? Do I need to incorporate all my sites and get an LLC? Can I create a company name without becoming an LLC? Then there's the phone number... Thanks in advance to all who respond!

    Read the article

  • What are good options for hosting video that give you privacy and control (not YouTube or Vimeo)?

    - by Rezen
    I have used http://www.longtailvideo.com/bits-on-the-run, http://www.influxis.com/, and Wistia for video hosting. Wistia didn't allow the finer control that we wanted. Influxis doesn't have the features that Bits on the Run has, but platform usage for BOTR gets expensive. I was thinking of moving the videos to the Amazon CDN. What are your thoughts and experiences with video hosting, and are there any recommendations? The videos will be privately streamed to hundreds of doctors' offices.
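
    On the Amazon CDN idea: the usual way to keep streams private is to leave the objects non-public and hand each office a short-lived CloudFront signed URL. A minimal sketch under that assumption, using the Node aws-sdk (v2); the key-pair id, domain, and path are placeholders:

    ```javascript
    // Sketch only: assumes a CloudFront distribution restricted to signed
    // URLs and a trusted key pair you have provisioned. Identifiers are made up.
    const AWS = require('aws-sdk');

    const signer = new AWS.CloudFront.Signer(
      'APKAEXAMPLEKEYID',          // hypothetical CloudFront key pair id
      process.env.CF_PRIVATE_KEY   // PEM-encoded private key for that pair
    );

    // Grant one office one hour of access to one video.
    const signedUrl = signer.getSignedUrl({
      url: 'https://d111111abcdef8.cloudfront.net/videos/training-intro.mp4',
      expires: Math.floor(Date.now() / 1000) + 3600, // epoch seconds
    });

    console.log(signedUrl); // embed this in the page served to that office
    ```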

    Read the article

  • White-box testing in JavaScript - how to deal with privacy?

    - by Max Shawabkeh
    I'm writing unit tests for a module in a small JavaScript application. In order to keep the interface clean, some of the implementation details are closed over by an anonymous function (the usual JS pattern for privacy). However, while testing I need to access/mock/verify the private parts. Most of the tests I've written previously have been in Python, where there are no real private variables (members, identifiers, whatever you want to call them). One simply suggests privacy via a leading underscore for the users, and freely ignores it while testing the code. In statically typed OO languages, I suppose one could make private members accessible to tests by converting them to protected and subclassing the object to be tested. In JavaScript, the latter doesn't apply, while the former seems like bad practice. I could always fall back to black-box testing and simply check the final results. It's the simplest and cleanest approach, but unfortunately not really detailed enough for my needs. So, is there a standard way of keeping variables private while still retaining some backdoors for testing in JavaScript?
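
    One common compromise is sketched below (names are illustrative, not from the asker's code): keep the closure for production, but export the privates through a single, clearly marked hook that tests can enable and production builds can strip.

    ```javascript
    // ES5-style module pattern with an explicit white-box test hook.
    var counter = (function () {
      var count = 0; // private state, invisible to normal callers

      var api = {
        increment: function (by) {
          count += by;
          return count;
        }
      };

      // Test-only backdoor: exists only when a global __TESTING__ flag is
      // set, so production builds never expose it (a minifier can even drop
      // the whole branch as dead code).
      if (typeof __TESTING__ !== 'undefined' && __TESTING__) {
        api.__private__ = {
          getCount: function () { return count; },
          setCount: function (v) { count = v; }
        };
      }

      return api;
    })();

    // In a test runner: set window.__TESTING__ = true before loading the
    // module, then assert against counter.__private__.getCount() directly.
    ```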

    Read the article

  • Toorcon14

    - by danx
    Toorcon 2012 Information Security Conference
    San Diego, CA, http://www.toorcon.org/
    Dan Anderson, October 2012

    It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcons I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California.

    The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I reviewed only the talks below, which I attended and which interested me.

    - MalAndroid—the Crux of Android Infections, Aditya K. Sood
    - Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro
    - Privacy at the Handset: New FCC Rules?, Valkyrie
    - Hacking Measured Boot and UEFI, Dan Griffin
    - You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik
    - What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold
    - Accessibility and Security, Anna Shubina
    - Stop Patching, for Stronger PCI Compliance, Adam Brand
    - McAfee Secure & Trustmarks—a Hacker's Best Friend, Jay James & Shane MacDougall

    MalAndroid—the Crux of Android Infections
    Aditya K. Sood, IOActive, Michigan State PhD candidate

    Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most of it has unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, a security scanner, app permissions, and a screened Android app market. The Android permission checker has fine-grained resource control and policy enforcement. Android static analysis also includes a static-analysis app checker ("bouncer") and a vulnerability checker.

    What security problems does Android have?

    - User-centric security, which depends on the user to grant permissions and make smart decisions. But users don't care or think about malware (they're not aware, not paranoid). All they want is functionality, extensibility, mobility.
    - Android had no "proper" encryption before Android 3.0.
    - No built-in protection against social engineering and web tricks.
    - Alternative Android app markets are unsafe; simply visiting some markets can infect Android.

    Aditya classified Android malware types as:

    - Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app, or Android Gold Dream (a game), which uploads user files to a remote location in a stealthy manner.
    - Type K—Kernel. Exploits the underlying Linux libraries or kernel.
    - Type H—Hybrid. These use multiple layers (app framework, libraries, kernel) and are most commonly used by Android botnets, which are popular with Chinese botnet authors.

    What are the threats from Android malware? These include leaked info (contacts), banking fraud, corporate network attacks, malware advertising, and malware "hacktivism" (the promotion of social causes—for example, promoting specific leaders of the Tunisian or Iranian revolutions).

    Android malware is frequently "masqueraded", that is, repackaged inside a legitimate app. To avoid detection, the hidden malware is not unwrapped until runtime. The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there aren't many around. What they do is hijack the Android init framework—altering system programs and daemons—and then delete themselves. For example, the DKF Bootkit (China).

    Android app problems:

    - no code signing! all self-signed
    - native code execution permission
    - sandbox—all or none
    - alternate marketplaces
    - no robust Android malware detection at the network level
    - delayed patch process

    Programming Weird Machines with ELF Metadata
    Rebecca "bx" Shapiro, Dartmouth College, NH
    https://github.com/bx/elf-bf-tools
    @bxsays on twitter

    Definitions: "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). A "weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that return to a weird location, do SQL injection, or corrupt the heap. Bx then talked about using ELF metadata as an (unintended) "weird machine".

    Some ELF background: a compiler takes source code and generates an ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes the ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one used at link time (.rela.dyn) and another at loading time (.dynsym, the dynamic table). The GOT is the Global Offset Table of addresses for dynamically-linked functions; the PLT (Procedure Linkage Table) works with the GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory).

    For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted to execute viruses by tricking the runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements:

    - instructions: add, move, jump if not 0 (jnz)
    - symbol table entries act as "registers"
    - a symbol table value is "contents"
    - immediate values are constants
    - direct values are addresses (e.g., 0xdeadbeef)
    - a move instruction is a relocation table entry
    - an add instruction is a relocation table "addend" entry
    - a jnz instruction takes multiple relocation table entries

    The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on the stack at "end" and uses IFUNC table entries (containing a function pointer address). The ELF weird machine, called "Brainfu*k" (BF), has 8 instructions (pointer increment, decrement, increment indirect, decrement indirect, jump forward, jump backward, print) and 3 registers.

    Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call that drops privilege, so ping remained root; BF then modified ping's -t argument handling to execute the -t filename as root. It's best to show what this modified ping does with an example:

        $ whoami
        bx
        $ ping localhost -t backdoor.sh    # executes backdoor
        $ whoami
        root

    The modified code grew from 285948 bytes to 290209 bytes. A BF tool compiles an "executable" by modifying the symbol table in an existing ELF executable. The tool modifies the .dynsym and .rela.dyn tables, but not code or data.

    Privacy at the Handset: New FCC Rules?
    "Valkyrie" (Christie Dudley, Santa Clara Law JD candidate)

    Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries! Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated, but Verizon recommends, as an aggregation workaround, "recollating" the data against other databases to identify customers indirectly.

    The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. The carriers say mobile phone privacy is an FTC responsibility (not the FCC's). The FTC is trying to improve mobile app privacy, but it has no authority over carrier/customer relationships. As a side note, Apple iPhones are unique in that carriers have extra control over iPhones that they don't have with other smartphones; as a result, iPhones may be more regulated. Who are the consumer advocates? Everyone knows the EFF, but EPIC (Electronic Privacy Information Center), although more obscure, is more relevant.

    What to do? Carriers must be accountable, with opt-in and opt-out at any time. Carriers need an incentive to grant users control, by being held liable and responsible for breaches on their watch. Location information should be added to the current CPNI privacy protection, and should require a "pen/trap" judicial order to obtain (which would still be a lower standard than the 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the White House on board. There will probably be new regulation soon; enforcement will be a problem, but consumers will still see some benefit.

    Hacking Measured Boot and UEFI
    Dan Griffin, JWSecure, Inc., Seattle, @JWSdan

    Dan talked about hacking measured UEFI boot. First some terms:

    - UEFI is a boot technology that is replacing the BIOS (it has whitelisting and blacklisting).
    - UEFI protects devices against rootkits.
    - TPM: a hardware security device that stores hashes and hardware-protected keys.
    - "Secure boot" can control at the firmware level which boot images can boot.
    - "Measured boot" is an OS feature that tracks hashes (from the BIOS, boot loader, kernel, and early drivers).
    - "Remote attestation" allows remote validation and control based on policy on a remote attestation server.

    Microsoft is pushing TPM (required for Windows 8), but Google is not. Intel TianoCore is the only open-source UEFI implementation. Dan has a Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support is already on enterprise-class machines. UEFI toolkits are evolving rapidly, but UEFI has weaknesses:

    - it assumes the user is an ally
    - it trusts the TPM implicitly, and assumes the TPM is attached to the computer
    - the hibernate file is unprotected (disk encryption protects against this)
    - protection is migrating from hardware to firmware
    - delays in patching and whitelist updates
    - will UEFI really be adopted by the mainstream? (smartphone hardware support, bank support, apathetic consumer support)

    You Can't Buy Security: Building the Open Source InfoSec Program
    Boris Sverdlik, ISDPodcast.com co-host

    Boris talked about problems typical of current security audits. "IT security" is an oxymoron—IT exists to enable business, uptime, utilization, and reporting, but doesn't care about security—IT has a conflict of interest. There's no magic bullet ("blinky box") and no one-size-fits-all solution (e.g., Intrusion Detection Systems). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing are not sexy. Auditors are not the solution (security is not a checklist)—what's needed is experience and adaptability—soft skills.

    Step 1: Google and learn the company end-to-end before you start. Get to know the management team (not the IT team) and meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g., SLE = AV x EF). Learn the different business units and the legal/regulatory obligations; learn the business and where the money is made; verify the company is protected from script kiddies (easy); learn what information is sensitive (IP, internal use only); and start with the low-hanging fruit (customer service reps and social engineering).

    Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, and physical security.

    Step 3: Implementation. Keep it simple, stupid. Open source, although useful, is not free (implementation cost).

    - Access controls, with authentication & authorization for local and remote access. MS Windows has it; otherwise use OpenLDAP, OpenIAM, etc.
    - Application security. Everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run apps of different risk levels on the same system. Assume the host/client is compromised and use app-level security controls.
    - Network security. VLAN != segregated, because there are too many workarounds. Use explicit firewall rules and active and passive network monitoring (snort is free), disallow end-user access to the production environment, and have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth, and SSL does not mean "safe."
    - Operational controls. Have change, patch, asset, & vulnerability management (OSSIM is free). For change management, always review code before pushing to production. For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down the logs (they contain everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVAS and Seccubus are free.
    - Security awareness. The reality is users will always click everything. Build real awareness, not a compliance-driven checkbox, and have it integrated into the culture.
    - Pen test by crowd-sourcing—test with logging.
    - COSSP, http://www.cossp.org/ — the Comprehensive Open Source Security Project.

    What Journalists Want: The Investigative Reporters' Perspective on Hacking
    Dave Maas, San Diego CityBeat, & Jason Leopold, Truthout.org

    The difference between hackers and investigative journalists: for hackers the motivation varies, but the method is the same, with technological specialties; for investigative journalists it's about one thing—the story—and they need broad info-gathering skills.

    J-School in 60 seconds. The generic formula: a person or issue of public interest, new info, or an angle. The generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence.

    Media awareness of hackers and trends: journalists are becoming extremely aware of hackers through congressional debates (privacy, data breaches), the demand for data mining by journalists, the use of coding and web development by journalists, and journalists busted for hacking (Murdoch).

    Info gathering by investigative journalists includes public records laws, and dumps and leaks (a la Wikileaks). The federal Freedom of Information Act (FOIA) is good, but slow: FOIA takes forever because of foot-dragging, so it helps to be specific, and you often need to sue (especially the FBI). The California Public Records Act is a lot stronger: CPRA is faster, and requests can be vague.

    Journalists want leads, protection for themselves and their sources, and tools adapted for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web logs). They don't trust encryption; they want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it.

    Accessibility and Security, or: How I Learned to Stop Worrying and Love the Halting Problem
    Anna Shubina, Dartmouth College

    Anna talked about how accessibility and security are related. Accessibility of digital content (not real-world accessibility) mostly refers, for our purposes, to blind users and screen readers. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word is an executable format—it's not a document exchange format—and is more dangerous than PDF or HTML. Accessibility is often the first, and maybe the only, sanity check on parsing, and its authors have no choice, because someone may want to read what you write. Google, for example, is very particular about which web browser you use and is bad at supporting other browsers; it uses JavaScript instead of links, often requiring mouseover to display content.

    PDF is a security nightmare: an executable format with embedded Flash, JavaScript, etc.—15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all.

    The "halting problem" asks: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic security says complicated data formats are hard to parse, and this cannot be solved, due to the halting problem.

    The W3C Web Accessibility Guidelines ("Perceivable, Operable, Understandable, Robust") are not much help, except for "Robust", but here are some gems (paraphrasing):

    - all information should be parsable
    - if it is not parsable, it cannot be converted to alternate formats
    - maximize compatibility in new document formats

    Executable web pages are bad for security and accessibility. They say it's for a better web experience, but is it necessary to stuff web pages with JavaScript for a better experience? A good example is the Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing at user input. Solutions: recognize that accessibility and security problems come from the same source, expose the "better user experience" myth, keep your corner of the Internet parsable, and remember the halting problem—recognize false solutions (checking and verifying tools).

    Stop Patching, for Stronger PCI Compliance
    Adam Brand, Protiviti, @adamrbrand, http://www.picfun.com/

    Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's time (an IT guy), 960 hours/year, was spent patching POSs in 850 restaurants, and applying some of those patches made no sense (like fixing a browser vulnerability on a server). "Scanner worship" is the overuse of vulnerability scanners—it gives a warm fuzzy feeling and it's simple (red or green results—fix the reds), but scanners give a false sense of security. In reality, breaches from missing patches are uncommon—the more common problems are default passwords, cleartext authentication, and misconfiguration (open firewall ports).

    Patching myths:

    - Myth 1: install within 30 days of patch release. (PCI §6.1 allows a "risk-based approach" instead.)
    - Myth 2: the vendor decides what's critical. (Also PCI §6.1; §6.2 requires user ranking of vulnerabilities instead.)
    - Myth 3: scan and rescan until it passes. (PCI §11.2.1b says this applies only to high-risk vulnerabilities.)

    Adam says good recommendations come from NIST 800-40: use sane patching and focus on what's really important.

    - Proactive: use a proactive vulnerability management process: change control, configuration management, file integrity monitoring.
    - Monitor: start with the NVD and other vulnerability alerts, not scanner results.
    - Evaluate: public-facing system? workstation? internal server? (risk ranking)
    - Decide: on an action and a timeline.
    - Test: pre-test patches (stability, functionality, rollback) for change control.
    - Install: notify, change control, tickets.

    McAfee Secure & Trustmarks—a Hacker's Best Friend
    Jay James & Shane MacDougall, Tactical Intelligence Inc., Canada

    "McAfee Secure Trustmark" is a website seal marketed by McAfee; a website gets the badge if it passes McAfee's remote scanning. The problem is that the removal of a trustmark acts as a flag that you're vulnerable, and it's easy to spot the status change by watching the McAfee list on a website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and from search engines; if a site's certification image changes to a 1x1-pixel image, the site is no longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is that a change in trustmark status is a flag for hackers to attack your site. The entire idea of seals is silly—you're raising a flag announcing when you're vulnerable.

    Read the article

  • How should I choose my DNS?

    - by Jader Dias
    When I have to choose my DNS I think that I should consider:

    - Speed
    - Reliability
    - Privacy
    - Control (reports and stats)

    The main options that come to my mind, and how I weigh them according to the above factors, are:

    - My ISP = faster (closer to me) but less privacy (they can associate my DNS requests with me)
    - OpenDNS and such = more control and more privacy (all they have is one of my e-mail addresses)
    - Google = less privacy (they can associate my DNS requests with my Google Account and my searches)

    What weighting factors, or other options, have I missed?

    Read the article

  • How to use CFUUID; can a CFUUID be traced back to a unique individual? (security/privacy)

    - by Kevin
    Hi, I am new to iPhone dev and the concept of CFUUID, so I thought I should ask before I start implementing it. Is the string returned by CFUUID really unique, or can it be traced back to a unique individual? Meaning, let's say I generate a CFUUID object and convert it to a string (using the methods provided), and then this info is used in my app or stored in a server database. And how unique is it? I mean, is there a chance it can be identical to one generated on some other device? Is it a good idea to use this info freely, or are there some security/privacy aspects that I am not thinking about here? Any help is greatly appreciated. Thanks

    Read the article

  • How should I build a privacy drop-down (select) menu?

    - by animuson
    I'm trying to build something similar to Facebook's privacy selection menu, except without the 'custom' option. It will only list a few options, such as 'show to all', 'show to friends only', or 'completely hidden'. Right now I'm thinking of using simple JavaScript to change a hidden input field to the new value they click on; so if they clicked on the div for 'show to friends only', it would change the corresponding field, say 'email_privacy', to 1. Is there a better way to do this, or am I pretty much on track? P.S. I am not planning on using a select element; I was planning on building a custom drop-down menu using CSS, since select elements are so highly non-customizable. I'm doing it this way to save space, rather than having a massive selection menu at the right which takes up a bunch of space. Note: I'm not really interested in using jQuery; that's just extra libraries and crap that I don't want to load. I can do it in JavaScript just as easily, so I might as well use that.
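
    That plan is sound. Here is a minimal sketch of the wiring under assumed markup (ids, classes, and values are hypothetical): clicking an option copies its value into the hidden field, which then submits with the form.

    ```javascript
    // Plain JavaScript, no libraries. Assumed markup:
    //   <input type="hidden" id="email_privacy" name="email_privacy">
    //   <span id="privacy-label"></span>
    //   <ul id="privacy-menu"> <li class="option" data-value="...">...</li> </ul>
    document.addEventListener('DOMContentLoaded', function () {
      var hidden = document.getElementById('email_privacy');
      var label = document.getElementById('privacy-label'); // shows current choice
      var options = document.querySelectorAll('#privacy-menu .option');

      for (var i = 0; i < options.length; i++) {
        options[i].addEventListener('click', function () {
          hidden.value = this.getAttribute('data-value'); // e.g. "1" = friends only
          label.textContent = this.textContent;           // update visible label
        });
      }
    });
    ```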

    Read the article

  • How to protect folder privacy against unethical network administrators? [closed]

    - by Trevor Trovalds
    I just need a technical solution for the sake of the safety of my group's shared passwords, projects, work, etc. Our network has Active Directory with public/groups/users and NTFS permissions, under a Windows Server 2003 which will soon migrate to Windows Server 2008 R2. Our IT crowd is small, consisting of 2 DBAs, 4 designers, 6 developers (including me), 2 netadmins and (a lot of) tech supporters; everyone has local admin rights. Those 2 network admins weren't the ones who set the network up; they just took over recently when the previous ones quit. We usually find them laughing at private content from users stored in the groups AD, sabotaging documents that don't match their personal tastes and, finally, this week we found out they stole a project we (developers and DBAs) were finishing and, long before, had presented it to the CEO as theirs without us knowing. I'm a systems analyst, and initially my group decided to store critical content, like shared passwords, inside encrypted .zip files. Unfortunately we couldn't do the same for the other hundreds of folders and files, which included the stolen project, because the zipping process would take too long for every update. We also tried an encrypted Subversion repository under SSL, but there are many dummies (~38 atm) involved in the projects who have trouble using TortoiseSVN when contributing, and very often we had to fix messed-up updates. Well, I think these two give an idea of what we've been trying to achieve. So, is there a practical "individual" protection for our extensive data, or can my hope already be euthanized? P.S.: Seriously, at the place where I live/work, political corruption has gone the wildest, so denunciation-related options are likely impracticable. Both netadmins have a strong "political bond" with the CEO and the President, hence their lousy behavior and our failed denunciation attempts.

    Read the article

  • Where can I learn about security and online privacy?

    - by user278457
    I'd really like to start including shopping-cart functionality in my projects. At first I'm content relying on PayPal links, but I really want to learn about specific security threats and how to combat them. Eventually I want to feel comfortable receiving and sending customer credit card details for e-commerce. Obviously this is a common thing on the net, but most tutorials and resources are content to say "it's every web developer's responsibility to consider security, but we're not going to cover that here/today/ever." So, my question is: where is a good place to learn? And once I've learned, how do I stay abreast of new vulnerabilities as the web evolves?

    Read the article

  • Privacy, C++, Firefox... big bug!!!

    - by Delirium tremens
    How to reproduce:

    1. Open Firefox.
    2. Visit a good TGP.
    3. Click History, then click Show All History.
    4. Select the name of the good TGP. You already know Delete This Page, but there is another feature, a super-secret feature: click Forget All About This Page. If you had cookies, cache, active logins, etc. that came from the good TGP, they are correctly deleted, because it's a different feature from Delete This Page.
    5. Now visit TWO good TGPs.
    6. Click History, then click Show All History.
    7. Select the names of the TWO good TGPs. Where is Forget All About These Pages???

    That is the bug... It used to be all-or-nothing, but now... now??? Oh, now there's a bug and it's still all-or-nothing.

    Read the article

  • Static IPv6 address in Windows unused for outgoing connections

    - by Luc
    I'm running a Windows server and trying to get it to use a static IPv6 address for outgoing connections to other IPv6 hosts (such as Gmail). I need this because Gmail requires a PTR record, and I can't set one for random addresses. The static address is configured on the host, but it also has a temporary privacy address as well as, it seems, a random address from the router. By default Windows uses the privacy address; it seems this is the expected behavior (and it makes perfect sense for users who did not set a static address, but I did!). I've tried disabling the privacy address with: netsh int ipv6 set privacy disabled This indeed gets rid of the privacy address, but I still have the random address that the router assigned. To disable this, it was said I needed to disable "router discovery" using this command: netsh interface ipv6 set interface 14 routerdiscovery=disabled Upon doing this, all IPv6 connectivity is lost. If I do this while pinging Gmail, it will report "Destination host unreachable" as soon as I enter the command. In the static IPv6 configuration, I did configure the default gateway and prefix length, so I don't see why it's unable to connect. This probably has something to do with IPv6 lacking ARP (Neighbor Discovery replaces it) and the host somehow being unable to resolve the router's MAC, but I wouldn't know how to fix this. Finally, I've tried disabling the DHCPv6 lease with these commands: netsh interface ipv6 set interface "IDMZ Team" managedaddress=disabled netsh interface ipv6 set interface "IDMZ Team" otherstateful=disabled This was to no avail; the host continues to obtain and use the router-assigned IPv6 address. The router is a FritzBox 7340, which shows me all the IPv4 and IPv6 addresses that the host (identified by MAC) utilizes, but I'm unable to change the assigned address. Maybe this could be done over the telnet interface of the router somehow, but again, I wouldn't know how to do this even if it's the way to go. In short, any of the following would probably solve my problem: Change Windows' source-address selection behavior. Have Windows not get an address from the router and not generate a privacy address. Have the router hand out a static address and make Windows use that as the source address. Recover connectivity after disabling router discovery on Windows. Alternatively, I might use some (batch, perl, ...) script to throw away all IPv6 addresses except the desired one, but this feels rather hacky. If it's the only way (or less hacky than another hacky solution), it might be an option though. Thanks!

    Read the article

  • K-12 and Cloud considerations

    - by user736511
    Much like every other public-sector organization, school districts in the US and Canada are under tremendous pressure to deliver consistent and modern services while operating with reduced budgets, IT personnel shortages, and staff attrition. Electronic/remote learning and the need for immediate access to resources such as grades, calendars, curricula, etc. are straining IT environments that were already burdened with meeting privacy requirements imposed by both regulators and parents/students. One area viewed as a solution to at least some of the challenges is the use of "Cloud" in education. Although the concept of "Cloud" is nothing new in education, with many providers supplying educational material over the web, school districts now defer previously in-house-hosted services to established commercial vendors to accommodate document sharing, app hosting, and even e-mail. Doing so, however, does not reduce an important risk: privacy. As always, Cloud implementations are viewed skeptically because of the perceived reduction in control over sensitive data and its protection, although with a careful approach and the right tooling, the benefits realized by Clouds can extend to security and privacy. Oracle's comprehensive approach to data privacy and identity management ensures that the necessary tools are available to support regulations, operational efficiencies, and strong security regardless of where the sensitive data is stored, on premise or in a Cloud. Common management tools, role-based access controls, access policy management, and engineered systems provided by Oracle can be the foundational pieces on which school districts build their Cloud implementations without having to worry about security itself. Their biggest challenge, and it is a positive one, is how best to take advantage of Oracle's DB Security and IDM functionality to reduce operational costs while enabling modern applications and data delivery to those who need access to it. For more information please refer to http://www.oracle.com/us/products/middleware/identity-management/overview/index.html and http://www.oracle.com/us/products/database/security/overview/index.html.

    Read the article

  • What cookies are allowed from Internet Explorer using this option?

    - by kiamlaluno
    When I look at the options used by Internet Explorer for cookies, I read the following description: "Blocca i cookie di terze parti privi di una versione compatta dell'informativa sulla privacy." Its translation is roughly: "Blocks third-party cookies that do not have a compact version of the privacy policy." I don't get which cookies are blocked. From that description, it seems cookies should include a compact version of the privacy policy, but I don't get how cookies can contain that information, or what happens with cookies set by sites outside the European Union. What cookies are blocked? The screenshot was taken from Internet Explorer 9, but the same wording was also used in previous versions. The settings are the ones shown on the "Privacy" tab; I don't recall if the English version calls the tab the same way. Edit: English screenshot added.

    Read the article

  • The Backbone router isn't working properly

    - by user2473588
    I'm building a simple Backbone app that has 4 routes: home, about, privacy and terms. But after setting up the routes I have 3 problems:

    1. The "terms" view isn't rendering;
    2. When I refresh the #about or the #privacy page, the home view renders after the #about/#privacy view;
    3. When I hit the back button the home view never renders. For example, if I'm on the #about page and I hit the back button to the homepage, the about view stays on the page.

    I don't know what I'm doing wrong with the 1st problem. I think the 2nd and 3rd problems are related to something missing in the home route, but I don't know what it is. Here is my code:

    HTML:

        <section class="feed">
          <script id="homeTemplate" type="text/template">
            <div class="home"></div>
          </script>
          <script id="termsTemplate" type="text/template">
            <div class="terms">Bla bla bla bla</div>
          </script>
          <script id="privacyTemplate" type="text/template">
            <div class="privacy">Bla bla bla bla</div>
          </script>
          <script id="aboutTemplate" type="text/template">
            <div class="about">Bla bla bla bla</div>
          </script>
        </section>

    The views:

        app.HomeListView = Backbone.View.extend({
          el: '.feed',
          initialize: function (initialbooks) {
            this.collection = new app.BookList(initialbooks);
            this.render();
          },
          render: function () {
            this.collection.each(function (item) {
              this.renderHome(item);
            }, this);
          },
          renderHome: function (item) {
            var bookview = new app.BookView({ model: item });
            this.$el.append(bookview.render().el);
          }
        });

        app.BookView = Backbone.View.extend({
          tagName: 'div',
          className: 'home',
          template: _.template($('#homeTemplate').html()),
          render: function () {
            this.$el.html(this.template(this.model.toJSON()));
            return this;
          }
        });

        app.AboutView = Backbone.View.extend({
          tagName: 'div',
          className: 'about',
          initialize: function () {
            this.render();
          },
          template: _.template($('#aboutTemplate').html()),
          render: function () {
            this.$el.html(this.template());
            return this;
          }
        });

        app.PrivacyView = Backbone.View.extend({
          tagName: 'div',
          className: 'privacy',
          initialize: function () {
            this.render();
          },
          template: _.template($('#privacyTemplate').html()),
          render: function () {
            this.$el.html(this.template());
            return this;
          }
        });

        app.TermsView = Backbone.View.extend({
          tagName: 'div',
          className: 'terms',
          initialize: function () {
            this.render();
          },
          template: _.template($('#termsTemplate').html()),
          render: function () {
            this.$el.html(this.template()),
            return this;
          }
        });

    And the router:

        var AppRouter = Backbone.Router.extend({
          routes: {
            '': 'home',
            'about': 'about',
            'privacy': 'privacy',
            'terms': 'terms'
          },
          home: function () {
            if (!this.homeListView) {
              this.homeListView = new app.HomeListView();
            }
          },
          about: function () {
            if (!this.aboutView) {
              this.aboutView = new app.AboutView();
            }
            $('.feed').html(this.aboutView.el);
          },
          privacy: function () {
            if (!this.privacyView) {
              this.privacyView = new app.PrivacyView();
            }
            $('.feed').html(this.privacyView.el);
          },
          terms: function () {
            if (!this.termsView) {
              this.termsView = new app.TermsView();
            }
            $('.feed').html(this.termsView.el);
          }
        });

        app.Router = new AppRouter();
        Backbone.history.start();

    I'm missing something but I don't know what. Thanks
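
    Two observations, offered as a sketch rather than a confirmed fix. First, TermsView's render ends with a comma (this.$el.html(this.template()), return this;), which is a syntax error and would by itself stop the terms view from rendering. Second, the navigation symptoms usually disappear when every route swaps its view into .feed through one helper, instead of HomeListView binding el: '.feed' and appending to it:

    ```javascript
    // Assumes HomeListView is changed to render into its own element
    // (drop el: '.feed') so it can be swapped in and out like the other views.
    var AppRouter = Backbone.Router.extend({
      routes: {
        '':        'home',
        'about':   'about',
        'privacy': 'privacy',
        'terms':   'terms'
      },

      // Remove the previous view (detaching its events) and replace the
      // contents of .feed, so stale views never linger after Back.
      showView: function (view) {
        if (this.currentView) this.currentView.remove();
        this.currentView = view;
        $('.feed').html(view.el);
      },

      home:    function () { this.showView(new app.HomeListView()); },
      about:   function () { this.showView(new app.AboutView()); },
      privacy: function () { this.showView(new app.PrivacyView()); },
      terms:   function () { this.showView(new app.TermsView()); }
    });
    ```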

    Read the article

  • Tracking users behaviour - with or without Google Analytics

    - by Ilian Iliev
    If I understand correctly the following (the PRIVACY clause from the GA TOS): "PRIVACY. You will not (and will not allow any third party to) use the Service to track or collect personally identifiable information of Internet users, nor will You (or will You allow any third party to) associate any data gathered from Your website(s) (or such third parties' website(s)) with any personally identifying information from any source as part of Your use (or such third parties' use) of the Service. You will have and abide by an appropriate privacy policy and will comply with all applicable laws relating to the collection of information from visitors to Your websites. You must post a privacy policy and that policy must provide notice of your use of a cookie that collects anonymous traffic data." — then you are not allowed to use custom variables that identify the visitor (for example website username, e-mail, id, etc.). So the question is: how can I track a specific user's behaviour (for example, the actions that every single logged-in user performs)?
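
    Given that clause, one hedged alternative for logged-in users is to keep GA for anonymous traffic and log the identified events first-party, on your own server, where the quoted TOS does not apply (your own privacy policy still must cover it). A sketch with Express; the endpoint and fields are illustrative:

    ```javascript
    // Minimal first-party event collector. The client posts the behaviours
    // you care about, keyed by your internal user id, to your own server.
    const express = require('express');
    const app = express();
    app.use(express.json());

    app.post('/events', (req, res) => {
      // e.g. req.body = { userId: 42, action: 'opened-report', at: 1349913600 }
      console.log('event:', req.body); // replace with a real datastore
      res.sendStatus(204);
    });

    app.listen(3000);
    ```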

    Read the article

  • How easy is it to alter a browser fingerprint?

    - by JFig
    I am researching this question for a possible paper. Given the exploitation of user identities for risk management and market tracking, how easy is it to alter a browser enough to throw off fingerprinting techniques? My current sources are the EFF Panopticlick project (https://www.eff.org/deeplinks/2010/01/primer-information-theory-and-privacy) and Peter Eckersley's follow-up presentation at DEF CON 18: http://privacy-pc.com/articles/how-safe-is-your-browser-peter-ackersley-on-personally-identifiable-information-basics.html
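
    To make the fingerprinting mechanism concrete for the paper: a Panopticlick-style fingerprint is just a digest of attributes the browser volunteers, so altering any input (user agent, plugins, fonts, screen metrics) changes the identifier. A toy sketch follows; real trackers hash many more signals:

    ```javascript
    // Toy fingerprint: combine a few browser-reported attributes and hash them.
    // Changing any one input (e.g. via a user-agent switcher) yields a new id,
    // which is exactly the "alteration" the research question asks about.
    function fingerprint() {
      var parts = [
        navigator.userAgent,
        navigator.language,
        screen.width + 'x' + screen.height + 'x' + screen.colorDepth,
        String(new Date().getTimezoneOffset())
      ].join('||');

      var hash = 0;
      for (var i = 0; i < parts.length; i++) {
        hash = (hash * 31 + parts.charCodeAt(i)) | 0; // simple rolling hash
      }
      return (hash >>> 0).toString(16);
    }

    console.log(fingerprint());
    ```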

    Read the article

  • Google? Facebook? Who Do You Trust?

    Datamation: "May 31 is 'Quit Facebook Day.' Some users are outraged over new privacy policies that made sharing all personal content the default. The company 'rolled out a simplified version of its privacy controls today, a single page where users can set who sees what.'"

    Read the article

  • Visual Studio 2012 setup crashes when trying to install

    - by Shyju
    On my Win7 (64-bit) PC, I installed the VS 2012 Ultimate trial a few days back, and today I got my MSDN subscription to VS 2012 Premium. So I uninstalled the trial and was trying to run the setup exe for VS 2012, and it is crashing. These are the error details I am seeing. Anybody know how to fix this?

    Problem signature:
      Problem Event Name: BEX
      Application Name: en_visual_studio_premium_2012_x86_web_installer_920759.exe
      Application Version: 11.0.50727.1
      Application Timestamp: 4fd9f28c
      Fault Module Name: igdumd32.dll
      Fault Module Version: 8.15.10.2057
      Fault Module Timestamp: 4b5e4895
      Exception Offset: 00015216
      Exception Code: c0000409
      Exception Data: 00000000
      OS Version: 6.1.7601.2.1.0.256.48
      Locale ID: 1033
      Additional Information 1: 1d75
      Additional Information 2: 1d7537ede8bee0a1d08a5f0d2036cc52
      Additional Information 3: b4a4
      Additional Information 4: b4a4e02d592ed99de97ca18a461b34ee

    Read our privacy statement online: http://go.microsoft.com/fwlink/?linkid=104288&clcid=0x0409 If the online privacy statement is not available, please read our privacy statement offline: C:\Windows\system32\en-US\erofflps.txt

    Read the article
