Search Results

Search found 12704 results on 509 pages for 'security'.


  • Data-related security Implementation

    - by devdude
    Using Shiro, we have a great security framework embedded in our enterprise application running on GF. You define users, roles, and permissions, and we can control at a fine-grained level whether a user can access the application, a certain page, or even click a specific button. Is there a recipe or pattern that allows, on top of that, restricting a user from seeing certain data? Sample: you have a customer table for 3 factories (part of one company). An admin user can see all customer records, but the user at the local factory must not see any customer data of other factories (for whatever reason). The security feature should be part of the role definition. Thanks for any input and ideas
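
    For illustration, one common recipe is to encode the data scope into Shiro's wildcard permission strings and filter rows against them. A minimal sketch under that assumption - Customer, CustomerDao and the "customer:read:<factoryId>" scheme are hypothetical, not from the question:

        import java.util.ArrayList;
        import java.util.List;

        import org.apache.shiro.SecurityUtils;
        import org.apache.shiro.subject.Subject;

        // Hypothetical service: filters customer rows by a factory-scoped Shiro
        // permission of the form "customer:read:<factoryId>". An admin role would be
        // granted "customer:read:*" and so sees every factory; a local-factory role
        // is granted only, say, "customer:read:factory1".
        public class CustomerQueryService {

            private final CustomerDao customerDao; // hypothetical DAO returning Customer rows

            public CustomerQueryService(CustomerDao customerDao) {
                this.customerDao = customerDao;
            }

            public List<Customer> findVisibleCustomers() {
                Subject user = SecurityUtils.getSubject();
                List<Customer> visible = new ArrayList<Customer>();
                for (Customer c : customerDao.findAll()) {
                    // keep only rows whose factory the current user may read
                    if (user.isPermitted("customer:read:" + c.getFactoryId())) {
                        visible.add(c);
                    }
                }
                return visible;
            }
        }

    For large tables you would push the same factory restriction into the query itself (a WHERE clause built from the user's permitted factories) rather than filtering in memory; the permission check stays tied to the role definition either way.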

    Read the article

  • User account design and security...

    - by espinet
    Before I begin, I am using Ruby on Rails and the Devise gem for user authentication. I was doing some research about account security and found a blog post about the topic a while ago, but I can no longer find it. It said something like: when making a login system you should have one model for User, which contains the user's username, encrypted password, and email; you should also have a model for the user's Account, which contains everything else. A User has an Account. I don't know if I'm explaining this correctly since I haven't seen the blog post for several months and I lost my bookmark. Could someone explain how and why I should or shouldn't do this? My application deals with money, so I need to cover my bases with security. Thanks.

    Read the article

  • Session ID Rotation - does it enhance security?

    - by dound
    (I think) I understand why session IDs should be rotated when the user logs in - this is one important step to prevent session fixation. However, is there any advantage to randomly/periodically rotating session IDs? In my opinion this seems to provide only a false sense of security. Assuming session IDs are not vulnerable to brute-force guessing and you only transmit the session ID in a cookie (not as part of URLs), an attacker will have to access your cookie (most likely by snooping on your traffic) to get your session ID. So if the attacker gets one session ID, they'll probably be able to sniff the rotated session ID too - and randomly rotating has not enhanced security.
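
    For context, the login-time rotation mentioned above is cheap to do in a servlet container. A minimal illustrative sketch (the attribute carry-over and the login flow around it are assumptions, not from the post):

        import java.util.Enumeration;
        import java.util.HashMap;
        import java.util.Map;

        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpSession;

        // Rotate the session ID at login to defeat session fixation: any session ID an
        // attacker may have planted before authentication is discarded and a fresh ID
        // is issued to the now-authenticated user.
        public final class SessionRotation {

            public static HttpSession rotateAtLogin(HttpServletRequest request) {
                HttpSession oldSession = request.getSession(false);
                Map<String, Object> attributes = new HashMap<String, Object>();
                if (oldSession != null) {
                    // carry over any pre-login state worth keeping (e.g. a shopping cart)
                    Enumeration<String> names = oldSession.getAttributeNames();
                    while (names.hasMoreElements()) {
                        String name = names.nextElement();
                        attributes.put(name, oldSession.getAttribute(name));
                    }
                    oldSession.invalidate();
                }
                // On Servlet 3.1+ containers, request.changeSessionId() swaps the ID
                // in place instead of recreating the session.
                HttpSession newSession = request.getSession(true); // new ID issued here
                for (Map.Entry<String, Object> entry : attributes.entrySet()) {
                    newSession.setAttribute(entry.getKey(), entry.getValue());
                }
                return newSession;
            }
        }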

    Read the article

  • Spring 3 - Custom Security

    - by Eqbal
    I am in the process of converting a legacy application from proprietary technology to a Spring-based web app, leaving the backend system as is. The login service is provided by the backend system through a function call that takes in some parameters (username, password plus some others) and provides an output that includes the authorizations for the user and other properties like firstname, lastname etc. What do I need to do to weave this into the Spring 3.0 security module? It looks like I need to provide a custom AuthenticationProvider implementation (is this where I call the backend function?). Do I also need a custom User and UserDetailsService implementation, which needs loadUserByUsername(String username)? Any pointers to good documentation for this? The reference that came with the download is okay, but doesn't help too much in terms of implementing custom security.
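
    For illustration, a custom AuthenticationProvider is indeed the usual hook, and it is where the backend login function would be called. A rough sketch against the Spring Security 3 API - LegacyBackendClient, LegacyLoginResult and LegacyUserPrincipal are hypothetical stand-ins for the proprietary backend call and its result:

        import org.springframework.security.authentication.AuthenticationProvider;
        import org.springframework.security.authentication.BadCredentialsException;
        import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
        import org.springframework.security.core.Authentication;
        import org.springframework.security.core.AuthenticationException;
        import org.springframework.security.core.authority.AuthorityUtils;

        // Delegates authentication to the legacy backend's login function and maps the
        // result (authorizations, firstname/lastname, ...) onto a Spring Security token.
        public class LegacyAuthenticationProvider implements AuthenticationProvider {

            private final LegacyBackendClient backend; // hypothetical wrapper around the backend call

            public LegacyAuthenticationProvider(LegacyBackendClient backend) {
                this.backend = backend;
            }

            @Override
            public Authentication authenticate(Authentication authentication) throws AuthenticationException {
                String username = authentication.getName();
                String password = (String) authentication.getCredentials();

                LegacyLoginResult result = backend.login(username, password); // the existing backend function
                if (result == null || !result.isSuccessful()) {
                    throw new BadCredentialsException("Backend rejected the credentials");
                }

                // a custom principal object can carry firstname, lastname, etc.
                LegacyUserPrincipal principal =
                        new LegacyUserPrincipal(username, result.getFirstName(), result.getLastName());

                return new UsernamePasswordAuthenticationToken(
                        principal,
                        null, // don't keep the password around
                        AuthorityUtils.createAuthorityList(result.getAuthorizations())); // role names, e.g. "ROLE_TRADER"
            }

            @Override
            public boolean supports(Class<?> authentication) {
                return UsernamePasswordAuthenticationToken.class.isAssignableFrom(authentication);
            }
        }

    Roughly speaking, the provider is then registered via <authentication-manager><authentication-provider ref="..."/></authentication-manager>; a separate UserDetailsService is only needed if other features (remember-me, for example) must load users by name on their own.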

    Read the article

  • .NET Security Part 4

    - by Simon Cooper
    Finally, in this series, I am going to cover some of the security issues that can trip you up when using sandboxed appdomains. DISCLAIMER: I am not a security expert, and this is by no means an exhaustive list. If you actually are writing security-critical code, then get a proper security audit of your code by a professional. The examples below are just illustrations of the sort of things that can go wrong.

    1. AppDomainSetup.ApplicationBase

    The most obvious one is the issue covered in the MSDN documentation on creating a sandbox, in step 3 – the sandboxed appdomain has the same ApplicationBase as the controlling appdomain. So let's explore what happens when they are the same, and an exception is thrown. In the sandboxed assembly, Sandboxed.dll (IPlugin is an interface in a partially-trusted assembly, with a single MethodToDoThings on it):

        public class UntrustedPlugin : MarshalByRefObject, IPlugin
        {
            // implements IPlugin.MethodToDoThings()
            public void MethodToDoThings()
            {
                throw new EvilException();
            }
        }

        [Serializable]
        internal class EvilException : Exception
        {
            public override string ToString()
            {
                // show we have read access to C:\Windows
                // read the first 5 directories
                Console.WriteLine("Pwned! Mwuahahah!");
                foreach (var d in Directory.EnumerateDirectories(@"C:\Windows").Take(5))
                {
                    Console.WriteLine(d);
                }
                return base.ToString();
            }
        }

    And in the controlling assembly:

        // what can possibly go wrong?
        AppDomainSetup appDomainSetup = new AppDomainSetup
        {
            ApplicationBase = AppDomain.CurrentDomain.SetupInformation.ApplicationBase
        };

        // only grant permissions to execute
        // and to read the application base, nothing else
        PermissionSet restrictedPerms = new PermissionSet(PermissionState.None);
        restrictedPerms.AddPermission(
            new SecurityPermission(SecurityPermissionFlag.Execution));
        restrictedPerms.AddPermission(
            new FileIOPermission(FileIOPermissionAccess.Read, appDomainSetup.ApplicationBase));
        restrictedPerms.AddPermission(
            new FileIOPermission(FileIOPermissionAccess.PathDiscovery, appDomainSetup.ApplicationBase));

        // create the sandbox
        AppDomain sandbox = AppDomain.CreateDomain("Sandbox", null, appDomainSetup, restrictedPerms);

        // execute UntrustedPlugin in the sandbox
        // don't crash the application if the sandbox throws an exception
        IPlugin o = (IPlugin)sandbox.CreateInstanceFromAndUnwrap("Sandboxed.dll", "UntrustedPlugin");
        try
        {
            o.MethodToDoThings();
        }
        catch (Exception e)
        {
            Console.WriteLine(e.ToString());
        }

    And the result? Oops. We've allowed a class that should be sandboxed to execute code with fully-trusted permissions! How did this happen? Well, the key is the exact meaning of the ApplicationBase property: the application base directory is where the assembly manager begins probing for assemblies. When EvilException is thrown, it propagates from the sandboxed appdomain into the controlling assembly's appdomain (as it's marked as Serializable). To deserialize the exception, the CLR has to load Sandboxed.dll - and since the controlling appdomain's ApplicationBase directory contains the sandboxed assembly, the CLR finds and loads it into the full-trust appdomain, and the evil code is executed. So the problem isn't exactly that the sandboxed appdomain's ApplicationBase is the same as the controlling appdomain's; it's that the sandboxed dll was in such a place that the controlling appdomain could find it as part of the standard assembly resolution mechanism. The sandbox then forced the assembly to load in the controlling appdomain by throwing a serializable exception that propagated outside the sandbox.

    The easiest fix for this is to keep the sandbox ApplicationBase well away from the ApplicationBase of the controlling appdomain, and don't allow the sandbox permissions to access the controlling appdomain's ApplicationBase directory. If you do this, then the sandboxed assembly can't be accidentally loaded into the fully-trusted appdomain, and the code can't be executed. If the plugin does try to induce the controlling appdomain to load an assembly it shouldn't, a SerializationException will be thrown when it tries to load the assembly to deserialize the exception, and no damage will be done.

    2. Loading the sandboxed dll into the application appdomain

    As an extension of the previous point, you shouldn't directly reference types or methods in the sandboxed dll from your application code. That loads the assembly into the fully-trusted appdomain, and from there code in the assembly could be executed. Instead, pull out the methods you want the sandboxed dll to have into an interface or class in a partially-trusted assembly you control, and execute methods via that instead (similar to the example above with the IPlugin interface). If you need to have a look at the assembly before executing it in the sandbox, either examine the assembly using reflection from within the sandbox, or load the assembly into the reflection-only context in the application's appdomain. The code in assemblies in the reflection-only context can't be executed; it can only be reflected upon, thus protecting your appdomain from malicious code.

    3. Incorrectly asserting permissions

    You should only assert permissions when you are absolutely sure they're safe. For example, this method allows a caller read-access to any file they call this method with, including your documents, any network shares, the C:\Windows directory, etc:

        [SecuritySafeCritical]
        public static string GetFileText(string filePath)
        {
            new FileIOPermission(FileIOPermissionAccess.Read, filePath).Assert();
            return File.ReadAllText(filePath);
        }

    Be careful when asserting permissions, and ensure you're not providing a loophole sandboxed dlls can use to gain access to things they shouldn't be able to.

    Conclusion

    Hopefully, that's given you an idea of some of the ways it's possible to get past the .NET security system. As I said before, this post is not exhaustive, and you certainly shouldn't base any security-critical applications on the contents of this blog post. What this series should help with is understanding the possibilities of the security system, and what all the security attributes and classes mean and what they are used for, if you were to use the security system in the future.

    Read the article

  • Security precautions and techniques for a User-submitted Code Demo Area

    - by Jack W-H
    Hey folks. Maybe this isn't really feasible, but basically, I've been developing a snippet-sharing website and I would like it to have a 'live demo area'. For example, you're browsing some snippets and click the Demo button; a new window pops up which executes the web code. I understand there are a gazillion security risks involved in doing this - XSS, tags, nasty malware/drive-by downloads, pr0n, etc. The community would be able to flag submissions that are blatantly naughty, but obviously some would go undetected (and, in many cases, someone would have to fall victim to discover whatever nasty thing was submitted). So I need to know: what should I do - security-wise - to make sure that users can submit code, but that nothing malicious can be run, or executed offsite, etc.? For your information, my site is powered by PHP using CodeIgniter. Jack

    Read the article

  • TFS Security and Documents Folder

    - by pm_2
    I'm getting an issue with TFS where the documents folder is marked with a red cross. As far as I can tell, this seems to be a security issue; however, I am set up as project admin on the relevant projects. I've come to the conclusion that it's a security issue from running the TFS Project Admin tool (available here). When I run this, it tells me that I don't have sufficient access rights to open the project. I've checked, and I'm not included in any groups that are denied access. Please can anyone shed any light on why I may not have sufficient access to these projects?

    Read the article

  • Spring security oauth2 provider to secure non-spring api

    - by user1241320
    I'm trying to set up an OAuth 2.0 provider that should "secure" our RESTful API using spring-security-oauth. Being a 'Spring fan', I thought it could be the quicker solution. The main point is that this RESTful thingie is not a Spring-based webapp. The boss says the OAuth provider should be a separate application, but I'm starting to doubt that (I got this impression by reading spring-security-oauth). I'm also new here, so I haven't really got my hands into this other (Jersey-powered) RESTful API (the core of our business). Any help/hint will be much appreciated.

    Read the article

  • GWT HTML widget security risks

    - by h2g2java
    In the GWT javadoc, we are advised: "If you only need a simple label (text, but not HTML), then the Label widget is more appropriate, as it disallows the use of HTML, which can lead to potential security issues if not used properly." I would like to be educated/reminded about those security susceptibilities - it would be nice to have a description of the mechanisms behind those risks. Are the susceptibilities equally potent on GAE vs Amazon vs my home Linux server? Are they equally potent across browser brands? Thank you.
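
    For what it's worth, the mechanism behind that warning is script injection (XSS): whatever string you hand to the HTML widget is parsed as markup by the browser, so attacker-controlled input can carry script that runs in your page (and, for example, reads session cookies). It is a client-side issue, so it behaves the same regardless of the hosting platform. A small illustrative sketch - userInput stands for any untrusted string:

        import com.google.gwt.safehtml.shared.SafeHtmlUtils;
        import com.google.gwt.user.client.ui.HTML;
        import com.google.gwt.user.client.ui.Label;
        import com.google.gwt.user.client.ui.RootPanel;

        public class XssExample {

            public void show(String userInput) {
                // Dangerous if userInput is attacker-controlled, e.g.
                // "<img src=x onerror=alert(document.cookie)>" - the browser parses
                // the markup and executes it in the context of your page.
                RootPanel.get().add(new HTML(userInput));

                // Safe: Label sets text, never HTML, so markup is displayed literally.
                RootPanel.get().add(new Label(userInput));

                // Also safe: escape first if you really need the HTML widget.
                RootPanel.get().add(new HTML(SafeHtmlUtils.htmlEscape(userInput)));
            }
        }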

    Read the article

  • Control Menu Items based on Privileges of Logged In User with spring security

    - by Nirmal
    Hi All... Based on this link I have incorporated the spring security core module with my grails project... I am using the Requestmap concept, storing each role, user and requestmap inside the database only... Now my requirement is to provide the menu items based on the user's assigned roles... For e.g.: if my "User" main menu has the following items: Dashboard, Import User, Manage User - and if I have assigned the roles of Dashboard and Import User to the user with the username "auditor", then only the following menu items should be displayed on the screen: User (Main Menu) - Dashboard (sub menu) - Import User (sub menu). I have explored the Spring Security ACL plugin for the same, but it's using the Domain classes to get it working... So I wanted to know a convenient way to do this... Thanks in advance...
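
    For illustration only: since the Grails plugin sits on Spring Security core, one approach is to filter the menu model against the current Authentication's granted authorities before rendering (the plugin's GSP tags can do the same in the view layer). A rough Java-flavoured sketch - MenuItem, getRequiredRole and the role names are hypothetical:

        import java.util.ArrayList;
        import java.util.Collection;
        import java.util.List;

        import org.springframework.security.core.Authentication;
        import org.springframework.security.core.GrantedAuthority;
        import org.springframework.security.core.context.SecurityContextHolder;

        // Builds the visible menu from the full menu definition by checking the roles
        // granted to the logged-in user. MenuItem is a hypothetical value object that
        // knows which role is required to see it (e.g. "ROLE_DASHBOARD").
        public class MenuFilter {

            public List<MenuItem> visibleItems(List<MenuItem> allItems) {
                Authentication auth = SecurityContextHolder.getContext().getAuthentication();
                List<MenuItem> visible = new ArrayList<MenuItem>();
                if (auth == null) {
                    return visible;
                }
                for (MenuItem item : allItems) {
                    if (hasAuthority(auth.getAuthorities(), item.getRequiredRole())) {
                        visible.add(item);
                    }
                }
                return visible;
            }

            private boolean hasAuthority(Collection<? extends GrantedAuthority> granted, String required) {
                for (GrantedAuthority authority : granted) {
                    if (authority.getAuthority().equals(required)) {
                        return true;
                    }
                }
                return false;
            }
        }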

    Read the article

  • Using OAuth along with spring security, grails

    - by GroovyUser
    I have a grails app which runs on the spring security plugin, and it works with no problem. I wish to give users a way to connect with Facebook and other social networking sites, so I decided to use the Spring Security OAuth plugin, which I have now configured. I want users to be able to access the app both via a normal local account and also via OAuth authentication. More precisely, I have a controller like this:

        @Secured(['IS_AUTHENTICATED_FULLY'])
        def test() {
            render "Home page!!!"
        }

    Now I want this controller action to be accessible with OAuth authentication too. Is it possible to do so?

    Read the article

  • Understanding CGI and SQL security from the ground up

    - by Steve
    This question is for learning purposes. Suppose I am writing a simple SQL admin console using CGI and Python. At http://something.com/admin, this admin console should allow me to modify a SQL database (i.e., create and modify tables, and create and modify records) using an ordinary form. In the least secure case, anybody can access http://something.com/admin and modify the database. You can password protect http://something.com/admin. But once you start using the admin console, information is still transmitted in plain text. So then you use HTTPS to secure the transmitted data. Questions: To describe to a learner, how would you incrementally add security to the least secure environment in order to make it most secure? How would you modify/augment my three (possibly erroneous) steps above? What basic tools in Python make your steps possible? Optional: Now that I understand the process, how do sophisticated libraries and frameworks inherently achieve this level of security?

    Read the article

  • Making files generally available on Linux system (when security is relatively unimportant)?

    - by Ole Thomsen Buus
    Hi, I am using Ubuntu 9.10 on a stationary PC. I have a secondary 1 TB harddrive with a single big logical partition (currently formatted as ext4). It is mounted as /usr3 with options user, exec in /etc/fstab. I am doing high-speed imaging experiments - well, only 260 fps, but that still creates many individual files since each frame is saved as one PNG file. The PC is not used by anyone other than me, which is why the default security model imposed by Ubuntu is not necessary. What is the best way to make the entire contents of /usr3 generally available on all systems, in case I need to move the harddrive to another Ubuntu 9.x or 10.x machine? When grabbing images with the FireWire camera I use a self-made, console-based grabbing utility run via sudo. This creates all files with root as owner and group. I am logged in as user otb, and usually I do the following when having to make files generally available to otb:

        sudo chown otb -R *
        sudo chgrp otb -R *
        sudo chmod a=rwx -R *

    This takes some time since the disk now contains ~200,000 individual files. After this, how would Linux behave if I moved the harddrive to another system where the user otb is also available? Would the files still be accessible without using sudo?

    Read the article

  • [GEEK SCHOOL] Network Security 2: Preventing Disaster with User Account Control

    - by Ciprian Rusen
    In this second lesson in our How-To Geek School about securing the Windows devices in your network, we will talk about User Account Control (UAC). Users encounter this feature each time they need to install desktop applications in Windows, when some applications need administrator permissions in order to work, and when they have to change different system settings and files. UAC was introduced in Windows Vista as part of Microsoft’s “Trustworthy Computing” initiative. Basically, UAC is meant to act as a wedge between you and installing applications or making system changes. When you attempt to do either of these actions, UAC will pop up and interrupt you. You may either have to confirm you know what you’re doing, or even enter an administrator password if you don’t have those rights. Some users find UAC annoying and choose to disable it, but it is a very important security feature of Windows (and we strongly caution against doing that). That’s why in this lesson, we will carefully explain what UAC is and everything it does. As you will see, this feature has an important role in keeping Windows safe from all kinds of security problems. In this lesson you will learn which activities may trigger a UAC prompt asking for permissions and how UAC can be set so that it strikes the best balance between usability and security. You will also learn what kind of information you can find in each UAC prompt. Last but not least, you will learn why you should never turn off this feature of Windows. By the time we’re done today, we think you will have a newly found appreciation for UAC, and will be able to find a happy medium between turning it off completely and letting it annoy you to distraction.

    What is UAC and How Does it Work?

    UAC, or User Account Control, is a security feature that helps prevent unauthorized system changes to your Windows computer or device. These changes can be made by users, applications, and sadly, malware (which is the biggest reason why UAC exists in the first place). When an important system change is initiated, Windows displays a UAC prompt asking for your permission to make the change. If you don’t give your approval, the change is not made. In Windows, you will encounter UAC prompts mostly when working with desktop applications that require administrative permissions. For example, in order to install an application, the installer (generally a setup.exe file) asks Windows for administrative permissions. UAC initiates an elevation prompt like the one shown earlier asking you whether it is okay to elevate permissions or not. If you say “Yes”, the installer starts as administrator and is able to make the necessary system changes in order to install the application correctly. When the installer is closed, its administrator privileges are gone. If you run it again, the UAC prompt is shown again because your previous approval is not remembered. If you say “No”, the installer is not allowed to run and no system changes are made. If a system change is initiated from a user account that is not an administrator, e.g. the Guest account, the UAC prompt will also ask for the administrator password in order to give the necessary permissions. Without this password, the change won’t be made.

    Which Activities Trigger a UAC Prompt?

    There are many types of activities that may trigger a UAC prompt:

    - Running a desktop application as an administrator
    - Making changes to settings and files in the Windows and Program Files folders
    - Installing or removing drivers and desktop applications
    - Installing ActiveX controls
    - Changing settings of Windows features like the Windows Firewall, UAC, Windows Update, Windows Defender, and others
    - Adding, modifying, or removing user accounts
    - Configuring Parental Controls in Windows 7 or Family Safety in Windows 8.x
    - Running the Task Scheduler
    - Restoring backed-up system files
    - Viewing or changing the folders and files of another user account
    - Changing the system date and time

    You will encounter UAC prompts during some or all of these activities, depending on how UAC is set on your Windows device. If this security feature is turned off, any user account or desktop application can make any of these changes without a prompt asking for permissions. In this scenario, the different forms of malware existing on the Internet will also have a higher chance of infecting and taking control of your system. In Windows 8.x operating systems you will never see a UAC prompt when working with apps from the Windows Store. That’s because these apps, by design, are not allowed to modify any system settings or files. You will encounter UAC prompts only when working with desktop programs.

    What Can You Learn from a UAC Prompt?

    When you see a UAC prompt on the screen, take time to read the information displayed so that you get a better understanding of what is going on. Each prompt first tells you the name of the program that wants to make system changes to your device; then you can see the verified publisher of that program. Dodgy software tends not to display this information, and instead of a real company name you will see an entry that says “Unknown”. If you have downloaded that program from a less than trustworthy source, then it might be better to select “No” in the UAC prompt. The prompt also shares the origin of the file that’s trying to make these changes. In most cases the file origin is “Hard drive on this computer”. You can learn more by pressing “Show details”. You will see an additional entry named “Program location” where you can see the physical location on your hard drive of the file that’s trying to perform system changes. Make your choice based on the trust you have in the program you are trying to run and its publisher. If a less-known file from a suspicious location is requesting a UAC prompt, then you should seriously consider pressing “No”.

    What’s Different About Each UAC Level?

    Windows 7 and Windows 8.x have four UAC levels:

    - Always notify – when this level is used, you are notified before desktop applications make changes that require administrator permissions or before you or another user account changes Windows settings like the ones mentioned earlier. When the UAC prompt is shown, the desktop is dimmed and you must choose “Yes” or “No” before you can do anything else. This is the most secure and also the most annoying way to set UAC because it triggers the most UAC prompts.
    - Notify me only when programs/apps try to make changes to my computer (default) – Windows uses this as the default for UAC. When this level is used, you are notified before desktop applications make changes that require administrator permissions. If you are making system changes yourself, UAC doesn’t show any prompts and automatically gives you the necessary permissions for making the changes you desire. When a UAC prompt is shown, the desktop is dimmed and you must choose “Yes” or “No” before you can do anything else. This level is slightly less secure than the previous one because malicious programs can be created to simulate the keystrokes or mouse moves of a user and change system settings for you. If you have a good security solution in place, this scenario should never occur.
    - Notify me only when programs/apps try to make changes to my computer (do not dim my desktop) – this level is different from the previous one in that, when the UAC prompt is shown, the desktop is not dimmed. This decreases the security of your system because different kinds of desktop applications (including malware) might be able to interfere with the UAC prompt and approve changes that you might not want to be performed.
    - Never notify – this level is the equivalent of turning off UAC. When using it, you have no protection against unauthorized system changes. Any desktop application and any user account can make system changes without your permission.

    How to Configure UAC

    If you would like to change the UAC level used by Windows, open the Control Panel, then go to “System and Security” and select “Action Center”. In the column on the left you will see an entry that says “Change User Account Control settings”. The “User Account Control Settings” window is now opened. Change the position of the UAC slider to the level you want applied, then press “OK”. Depending on how UAC was initially set, you may receive a UAC prompt requiring you to confirm this change.

    Why You Should Never Turn Off UAC

    If you want to keep the security of your system at decent levels, you should never turn off UAC. When you disable it, everything and everyone can make system changes without your consent. This makes it easier for all kinds of malware to infect and take control of your system. It doesn’t matter whether you have a security suite or third-party antivirus installed; basic common-sense measures like having UAC turned on make a big difference in keeping your devices safe from harm. We have noticed that some users disable UAC prior to setting up their Windows devices and installing third-party software on them. They keep it disabled while installing all the software they will use and enable it when done installing everything, so that they don’t have to deal with so many UAC prompts. Unfortunately this causes problems with some desktop applications; they may fail to work after you enable UAC. This happens because, when UAC is disabled, the virtualization techniques UAC uses for your applications are inactive. This means that certain user settings and files are installed in a different place, and when you turn on UAC, applications stop working because they should be placed elsewhere. Therefore, whatever you do, do not turn off UAC completely!

    Coming up next …

    In the next lesson you will learn about Windows Defender, what this tool can do in Windows 7 and Windows 8.x, what’s different about it in these operating systems, and how it can be used to increase the security of your system.

    Read the article

  • New Big Data Appliance Security Features

    - by mgubar
    The Oracle Big Data Appliance (BDA) is an engineered system for big data processing. It greatly simplifies the deployment of an optimized Hadoop cluster – whether that cluster is used for batch or real-time processing. The vast majority of BDA customers are integrating the appliance with their Oracle Databases and they have certain expectations – especially around security. Oracle Database customers have benefited from a rich set of security features: encryption, redaction, data masking, database firewall, label-based access control – and much, much more. They want similar capabilities with their Hadoop cluster. Unfortunately, Hadoop wasn’t developed with security in mind. By default, a Hadoop cluster is insecure – the antithesis of an Oracle Database. Some critical security features have been implemented – but even those capabilities are arduous to set up and configure. Oracle believes that a key element of an optimized appliance is that its data should be secure. Therefore, by default the BDA delivers the “AAA of security”: authentication, authorization and auditing.

    Security Starts at Authentication

    A successful security strategy is predicated on strong authentication – for both users and software services. Consider the default configuration for a newly installed Oracle Database; it’s been a long time since you had a legitimate chance at accessing the database using the credentials “system/manager” or “scott/tiger”. The default Oracle Database policy is to lock accounts, thereby restricting access; administrators must consciously grant access to users.

    Default Authentication in Hadoop

    By default, a Hadoop cluster fails the authentication test. For example, it is easy for a malicious user to masquerade as any other user on the system. Consider the following scenario that illustrates how a user can access any data on a Hadoop cluster by masquerading as a more privileged user. In our scenario, the Hadoop cluster contains sensitive salary information in the file /user/hrdata/salaries.txt. When logged in as the hr user, you can see the following files. Notice, we’re using the Hadoop command line utilities for accessing the data:

        $ hadoop fs -ls /user/hrdata
        Found 1 items
        -rw-r--r--   1 oracle supergroup         70 2013-10-31 10:38 /user/hrdata/salaries.txt
        $ hadoop fs -cat /user/hrdata/salaries.txt
        Tom Brady,11000000
        Tom Hanks,5000000
        Bob Smith,250000
        Oprah,300000000

    User DrEvil has access to the cluster – and can see that there is an interesting folder called “hrdata”.

        $ hadoop fs -ls /user
        Found 1 items
        drwx------   - hr supergroup          0 2013-10-31 10:38 /user/hrdata

    However, DrEvil cannot view the contents of the folder due to lack of access privileges:

        $ hadoop fs -ls /user/hrdata
        ls: Permission denied: user=drevil, access=READ_EXECUTE, inode="/user/hrdata":oracle:supergroup:drwx------

    Accessing this data will not be a problem for DrEvil. He knows that the hr user owns the data by looking at the folder’s ACLs. To overcome this challenge, he will simply masquerade as the hr user. On his local machine, he adds the hr user, assigns that user a password, and then accesses the data on the Hadoop cluster:

        $ sudo useradd hr
        $ sudo passwd
        $ su hr
        $ hadoop fs -cat /user/hrdata/salaries.txt
        Tom Brady,11000000
        Tom Hanks,5000000
        Bob Smith,250000
        Oprah,300000000

    Hadoop has not authenticated the user; it trusts that the identity that has been presented is indeed the hr user. Therefore, sensitive data has been easily compromised. Clearly, the default security policy is inappropriate and dangerous to many organizations storing critical data in HDFS.

    Big Data Appliance Provides Secure Authentication

    The BDA provides secure authentication to the Hadoop cluster by default – preventing the type of masquerading described above. It accomplishes this through Kerberos integration.

    Figure 1: Kerberos Integration

    The Key Distribution Center (KDC) is a server that has two components: an authentication server and a ticket granting service. The authentication server validates the identity of the user and service. Once authenticated, a client must request a ticket from the ticket granting service – allowing it to access the BDA’s NameNode, JobTracker, etc. At installation, you simply point the BDA to an external KDC or automatically install a highly available KDC on the BDA itself. Kerberos will then provide strong authentication for not just the end user – but also for important Hadoop services running on the appliance. You can now guarantee that users are who they claim to be – and rogue services (like fake data nodes) are not added to the system. It is common for organizations to want to leverage existing LDAP servers for common user and group management. Kerberos integrates with LDAP servers – allowing the principals and encryption keys to be stored in the common repository. This simplifies the deployment and administration of the secure environment.

    Authorize Access to Sensitive Data

    Kerberos-based authentication ensures secure access to the system and the establishment of a trusted identity – a prerequisite for any authorization scheme. Once this identity is established, you need to authorize access to the data. HDFS will authorize access to files using ACLs, with the authorization specification applied using classic Linux-style commands like chmod and chown (e.g. hadoop fs -chown oracle:oracle /user/hrdata changes the ownership of the /user/hrdata folder to oracle). Authorization is applied at the user or group level – utilizing group membership found in the Linux environment (i.e. /etc/group) or in the LDAP server. For SQL-based data stores – like Hive and Impala – finer grained access control is required. Access to databases, tables, columns, etc. must be controlled. And, you want to leverage roles to facilitate administration. Apache Sentry is a new project that delivers fine grained access control; both Cloudera and Oracle are the project’s founding members. Sentry satisfies the following three authorization requirements:

    - Secure Authorization: the ability to control access to data and/or privileges on data for authenticated users.
    - Fine-Grained Authorization: the ability to give users access to a subset of the data (e.g. a column) in a database.
    - Role-Based Authorization: the ability to create/apply template-based privileges based on functional roles.

    With Sentry, “all”, “select” or “insert” privileges are granted to an object. The descendants of that object automatically inherit that privilege. A collection of privileges across many objects may be aggregated into a role – and users/groups are then assigned that role. This leads to simplified administration of security across the system.

    Figure 2: Object Hierarchy – granting a privilege on the database object will be inherited by its tables and views.

    Sentry is currently used by both Hive and Impala – but it is a framework that other data sources can leverage when offering fine-grained authorization. For example, one can expect Sentry to deliver authorization capabilities to Cloudera Search in the near future.

    Audit Hadoop Cluster Activity

    Auditing is a critical component of a secure system and is oftentimes required for SOX, PCI and other regulations. The BDA integrates with Oracle Audit Vault and Database Firewall – tracking different types of activity taking place on the cluster:

    Figure 3: Monitored Hadoop services.

    At the lowest level, every operation that accesses data in HDFS is captured. The HDFS audit log identifies the user who accessed the file, the time that file was accessed, the type of access (read, write, delete, list, etc.) and whether or not that file access was successful. The other auditing features include:

    - MapReduce: correlate the MapReduce job that accessed the file
    - Oozie: describes who ran what as part of a workflow
    - Hive: captures the changes that were made to the Hive metadata

    The audit data is captured in the Audit Vault Server – which integrates audit activity from a variety of sources, adding databases (Oracle, DB2, SQL Server) and operating systems to activity from the BDA.

    Figure 4: Consolidated audit data across the enterprise.

    Once the data is in the Audit Vault server, you can leverage a rich set of prebuilt and custom reports to monitor all the activity in the enterprise. In addition, alerts may be defined to trigger on violations of audit policies.

    Conclusion

    Security cannot be considered an afterthought in big data deployments. Across most organizations, Hadoop is managing sensitive data that must be protected; it is not simply crunching publicly available information used for search applications. The BDA provides a strong security foundation – ensuring users are only allowed to view authorized data and that data access is audited in a consolidated framework.
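
    As a rough companion to the "Security Starts at Authentication" section above, this is approximately what a Java client has to do against a Kerberized cluster before any HDFS call succeeds; the principal, keytab path and directory are placeholders, and the exact configuration depends on the cluster:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileStatus;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.security.UserGroupInformation;

        public class KerberizedHdfsClient {

            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                // Tell the Hadoop client that the cluster requires Kerberos.
                conf.set("hadoop.security.authentication", "kerberos");
                UserGroupInformation.setConfiguration(conf);

                // Authenticate as a named principal using its keytab (placeholders).
                // Without a valid ticket, subsequent HDFS calls are rejected - unlike
                // the DrEvil scenario above, a locally created "hr" account is useless.
                UserGroupInformation.loginUserFromKeytab(
                        "hr@EXAMPLE.COM", "/etc/security/keytabs/hr.keytab");

                FileSystem fs = FileSystem.get(conf);
                for (FileStatus status : fs.listStatus(new Path("/user/hrdata"))) {
                    System.out.println(status.getPath());
                }
            }
        }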

    Read the article

  • Connection to Google, Yahoo, Bing, Ask, etc. compromised via all devices on my home network - How?

    - by jt0dd
    I'm a very computer savvy guy (although not very networking savvy), and I may still be wrong about this, but I think my home network may be compromised somehow. I'd like to know if it's possible for someone to have hijacked my network's connection to Google.com and other popular websites. Update: The issue seems to take effect with all popular websites. I can connect to small (non-popular) websites without issue, but Facebook, Google, Yahoo, and Bing cannot be accessed by any device on my home network. On all devices using my home network, I'm being shown "http://www.google.com WARNING! Internet Explorer is currently out of date. Please update to continue." when I attempt to connect to google.com. I wouldn't be surprised by this at all if it were just the laptop. It's the fact that this is happening on all devices on my network that confuses me. Here's the screenshot from my iPhone, for reference. Can my home network be compromised? Is that even possible? How can something like this happen across all platforms on all devices in the same way? I wouldn't imagine every device/platform on the network would get the same virus. Should I assume that my network's security is totally compromised? Update: All mobile devices and laptops on my home network are experiencing the same alert when attempting to connect to google.com.

    Read the article

  • What steps should I take to secure Tomcat 6.x?

    - by PAS
    I am in the process of setting up a new Tomcat deployment, and want it to be as secure as possible. I have created a 'jakarta' user and have jsvc running Tomcat as a daemon. Any tips on directory permissions and such to limit access to Tomcat's files? I know I will need to remove the default webapps - docs, examples, etc... are there any best practices I should be using here? What about all the config XML files? Any tips there? Is it worth enabling the security manager so that webapps run in a sandbox? Has anyone had experience setting this up? I have seen examples of people running two instances of Tomcat behind Apache. It seems this can be done using mod_jk or with mod_proxy... any pros/cons of either? Is it worth the trouble? In case it matters, the OS is Debian lenny. I am not using apt-get because lenny only offers Tomcat 5.5 and we require 6.x. Thanks!
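
    On the security manager question: with the security manager enabled (catalina.sh start -security, or the equivalent -Djava.security.manager / -Djava.security.policy JVM arguments when launching via jsvc), each webapp only gets the permissions granted to its code base in conf/catalina.policy. A hedged illustration of the kind of per-application grant involved - the paths and permissions below are examples only, not a vetted baseline:

        // Example addition to conf/catalina.policy: the "myapp" webapp may read
        // its own directory tree and connect out to a single backend host only.
        grant codeBase "file:${catalina.base}/webapps/myapp/-" {
            permission java.io.FilePermission "${catalina.base}/webapps/myapp/-", "read";
            permission java.net.SocketPermission "backend.example.com:443", "connect";
        };

    Expect to iterate: run the app with the manager on, watch for AccessControlException entries in the logs, and add only the permissions the application genuinely needs.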

    Read the article

  • OWSM custom security policy for JAX-WS, GenericFault

    - by sachin
    Hi, I tried creating a custom security policy as given here: http://download.oracle.com/docs/cd/E15523_01/relnotes.1111/e10132/owsm.htm#CIADFGGC When I run the service client, the custom assertion is executed and returns successfully:

        public IResult execute(IContext context) throws WSMException {
            try {
                System.out.println("public execute");
                IAssertionBindings bindings = ((SimpleAssertion) (this.assertion)).getBindings();
                IConfig config = bindings.getConfigs().get(0);
                IPropertySet propertyset = config.getPropertySets().get(0);
                String valid_ips = propertyset.getPropertyByName("valid_ips").getValue();
                String ipAddr = ((IMessageContext) context).getRemoteAddr();
                IResult result = new Result();
                System.out.println("valid_ips " + valid_ips);
                if (valid_ips != null && valid_ips.trim().length() > 0) {
                    String[] valid_ips_array = valid_ips.split(",");
                    boolean isPresent = false;
                    for (String valid_ip : valid_ips_array) {
                        if (ipAddr.equals(valid_ip.trim())) {
                            isPresent = true;
                        }
                    }
                    System.out.println("isPresent " + isPresent);
                    if (isPresent) {
                        result.setStatus(IResult.SUCCEEDED);
                    } else {
                        result.setStatus(IResult.FAILED);
                        result.setFault(new WSMException(WSMException.FAULT_FAILED_CHECK));
                    }
                } else {
                    result.setStatus(IResult.SUCCEEDED);
                }
                System.out.println("result " + result);
                System.out.println("public execute complete");
                return result;
            } catch (Exception e) {
                System.out.println("Exception e");
                e.printStackTrace();
                throw new WSMException(WSMException.FAULT_FAILED_CHECK, e);
            }
        }

    Console output is:

        public execute
        valid_ips 127.0.0.1,192.168.1.1
        isPresent true
        result Succeeded
        public execute complete

    But the webservice throws GenericFault. Arguments: [void] Fault: GenericFault : generic error. I have no clue what could be wrong, any ideas? Here is the full stack trace:

        Exception in thread "main" javax.xml.ws.soap.SOAPFaultException: GenericFault : generic error
            at com.sun.xml.internal.ws.fault.SOAP12Fault.getProtocolException(SOAP12Fault.java:210)
            at com.sun.xml.internal.ws.fault.SOAPFaultBuilder.createException(SOAPFaultBuilder.java:119)
            at com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:108)
            at com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:78)
            at com.sun.xml.internal.ws.client.sei.SEIStub.invoke(SEIStub.java:107)
            at $Proxy30.sayHello(Unknown Source)
            at creditproxy.CreditRatingSoap12HttpPortClient.main(CreditRatingSoap12HttpPortClient.java:21)
        Caused by: javax.xml.ws.soap.SOAPFaultException: GenericFault : generic error
            at weblogic.wsee.jaxws.framework.jaxrpc.TubeFactory$JAXRPCTube.processRequest(TubeFactory.java:203)
            at weblogic.wsee.jaxws.tubeline.FlowControlTube.processRequest(FlowControlTube.java:99)
            at com.sun.xml.ws.api.pipe.Fiber.__doRun(Fiber.java:604)
            at com.sun.xml.ws.api.pipe.Fiber._doRun(Fiber.java:563)
            at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:548)
            at com.sun.xml.ws.api.pipe.Fiber.runSync(Fiber.java:445)
            at com.sun.xml.ws.server.WSEndpointImpl$2.process(WSEndpointImpl.java:275)
            at com.sun.xml.ws.transport.http.HttpAdapter$HttpToolkit.handle(HttpAdapter.java:454)
            at com.sun.xml.ws.transport.http.HttpAdapter.handle(HttpAdapter.java:250)
            at com.sun.xml.ws.transport.http.servlet.ServletAdapter.handle(ServletAdapter.java:140)
            at weblogic.wsee.jaxws.HttpServletAdapter$AuthorizedInvoke.run(HttpServletAdapter.java:319)
            at weblogic.wsee.jaxws.HttpServletAdapter.post(HttpServletAdapter.java:232)
            at weblogic.wsee.jaxws.JAXWSServlet.doPost(JAXWSServlet.java:310)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
            at weblogic.wsee.jaxws.JAXWSServlet.service(JAXWSServlet.java:87)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
            at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
            at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
            at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
            at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at oracle.dms.wls.DMSServletFilter.doFilter(DMSServletFilter.java:326)
            at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
            at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3592)
            at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
            at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
            at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
            at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
            at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
            at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
            at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
        Process exited with exit code 1.

    Read the article

  • Spring Security session-management setting and IllegalStateException

    - by JayL
    I'm trying to add <session-management> in my Spring Security namespace configuration so that I can provide a different message than the login page when the session times out. As soon as I add it to my configuration it starts throwing "IllegalStateException: Cannot create a session after the response has been committed" when I access the app. I'm using Spring Security 3 and Tomcat 6. Here's my configuration:

        <http>
            <intercept-url pattern="/go.htm" access="ROLE_RESPONDENT" />
            <intercept-url pattern="/complete.htm" access="ROLE_RESPONDENT" />
            <intercept-url pattern="/**" access="IS_AUTHENTICATED_ANONYMOUSLY" />
            <form-login login-processing-url="/j_spring_security_check"
                        login-page="/login.htm"
                        authentication-failure-url="/login.htm?error=true"
                        default-target-url="/go.htm" />
            <anonymous/>
            <logout logout-success-url="/logout_message.htm"/>
            <session-management invalid-session-url="/login.htm" />
        </http>

    Everything works great until I add in the <session-management> line. What am I missing?

    Read the article

  • security policy error iphone ipod touch issue

    - by Joey
    I'm getting an "Error from Debugger: Error launching remote program: security policy error" when I try to run my app on my ipod touch. The provisions look in order, and the app builds to my iphone 3gs just fine. The app used to build just fine to my ipod touch, so I'm flustered what could have changed and wondering if anyone has any thoughts on what might be causing this issue. The build logs are below. Mon Mar 15 14:25:54 unknown com.apple.debugserver-43[449] : Connecting to com.apple.debugserver service... Mon Mar 15 14:25:55 unknown SpringBoard[24] : Unable to launch com.yourcompany.Unearthed because it has an invalid code signature, inadequate entitlements or its profile has not been explicitly trusted by the user. Mon Mar 15 14:25:55 unknown com.apple.debugserver-43[449] : error: unable to launch the application with CFBundleIdentifier 'com.yourcompany.Unearthed' sbs_error = 9 Mon Mar 15 14:25:55 unknown com.apple.debugserver-43[449] : 1 [01c1/0903]: RNBRunLoopLaunchInferior DNBProcessLaunch() returned error: '' Mon Mar 15 14:25:55 unknown com.apple.debugserver-43[449] : error: failed to launch process (null): security policy error Mon Mar 15 14:26:03 unknown MobileSafari[72] : void SendDelegateMessage(NSInvocation*): delegate (webView:decidePolicyForNavigationAction:request:frame:decisionListener:) failed to return after waiting 10 seconds. main run loop mode: UITrackingRunLoopMode

    Read the article

  • How to access/use custom attribute in spring security based CAS client

    - by Bill Li
    I need to send certain attributes (say, a human-readable user name) from the server to the client after a successful authentication. The server part is done, and the attribute is now sent to the client. From the log, I can see:

        2010-03-28 23:48:56,669 DEBUG Cas20ServiceTicketValidator:185 - Server response: [email protected]
        <cas:proxyGrantingTicket>PGTIOU-1-QZgcN61oAZcunsC9aKxj-cas</cas:proxyGrantingTicket>
        <cas:attributes>
            <cas:FullName>Test account 1</cas:FullName>
        </cas:attributes>
        </cas:authenticationSuccess>
        </cas:serviceResponse>

    Yet I don't know how to access the attribute in the client (I am using Spring Security 2.0.5). In the authenticationProvider, a userDetailsService is configured to read the db for the authenticated principal:

        <bean id="casAuthenticationProvider" class="org.springframework.security.providers.cas.CasAuthenticationProvider">
            <sec:custom-authentication-provider />
            <property name="userDetailsService" ref="clerkManager"/>
            <!-- other stuff goes here -->
        </bean>

    Now in my controller, I can easily do this:

        Clerk currentClerk = (Clerk) SecurityContextHolder.getContext().getAuthentication().getPrincipal();

    Ideally, I could fill the attribute into this Clerk object as another property in some way. How can I do this? Or what is the recommended approach to share attributes across all apps, given CAS's centralized nature?
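
    For reference, the attribute ends up on the CAS Assertion that the ticket validator returns, so one approach is to populate the principal from the assertion in addition to the database lookup. A rough sketch using the hooks from the newer spring-security-cas module (the 2.0.x plumbing differs); Clerk is the question's own principal class and setFullName is a hypothetical setter on it:

        import java.util.Map;

        import org.jasig.cas.client.validation.Assertion;
        import org.springframework.security.cas.userdetails.AbstractCasAssertionUserDetailsService;
        import org.springframework.security.core.userdetails.UserDetails;
        import org.springframework.security.core.userdetails.UserDetailsService;

        // Builds the principal from the CAS assertion: load the Clerk as before via the
        // existing clerkManager, then copy the FullName attribute released by CAS onto it.
        public class AttributeAwareClerkService extends AbstractCasAssertionUserDetailsService {

            private final UserDetailsService clerkManager; // the existing clerkManager bean

            public AttributeAwareClerkService(UserDetailsService clerkManager) {
                this.clerkManager = clerkManager;
            }

            @Override
            protected UserDetails loadUserDetails(Assertion assertion) {
                String username = assertion.getPrincipal().getName();
                Map attributes = assertion.getPrincipal().getAttributes();

                Clerk clerk = (Clerk) clerkManager.loadUserByUsername(username);
                clerk.setFullName((String) attributes.get("FullName")); // matches <cas:FullName> above
                return clerk;
            }
        }

    In that setup the CAS authentication provider is wired with this service (its authenticationUserDetailsService property) rather than the plain userDetailsService, so the assertion attributes are available when the principal is built.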

    Read the article
