Search Results

Search found 25324 results on 1013 pages for 'folder security'.

Page 93/1013 | < Previous Page | 89 90 91 92 93 94 95 96 97 98 99 100  | Next Page >

  • Are programming languages perfect?

    - by mohabitar
    I'm not sure if I'm being naive, as I'm still a student, but a curious question came to mind. In another thread here, a user stated that in order to protect against piracy of your software, you must have perfect software. So is it possible to have perfect software? It's an extremely silly hypothetical, but if you were to gather the most talented and gifted programmers in the world and have them spend years trying to create 'perfect' software, could they succeed? Could they produce software without a single exploitable bug? Or are there flaws in programming languages themselves that, no matter how hard you try, still cause bugs that allow your program to be hijacked? As you can tell, I know nothing about security, but essentially what I'm asking is: is software easily exploitable because imperfect human beings create it, or because imperfect programming languages are being used?

    Read the article

  • Why is this bypassing the sudo password?

    - by John Isaacks
    I have a bash script I am using to automate an SVN checkout. The contents of the file were:

        #!/bin/bash
        cd /var/www-cake
        sudo svn checkout file:///usr/local/svn/bash_repo/repo/

    When I double-click the file it asks me what to do; I click "Run In Terminal", a terminal pops up and asks me for the sudo password, I enter it, the script executes, and the terminal closes. I wanted some sort of indication that the script ran successfully, so I edited my file to look like:

        #!/bin/bash
        cd /var/www-cake
        sudo svn checkout file:///usr/local/svn/bash_repo/repo/
        echo "Head revision has been pushed to live server"

    I expected the terminal to now stay open and show me the message afterwards. To my surprise it now opens and immediately closes. The script does execute, and I no longer have to put in the sudo password. Is this right? I do not understand why this is happening; it seems like a security issue.
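
    A minimal sketch of one way to get both behaviours, report the result and keep the window open, using the same paths as the script above; the final read exists only to hold the terminal open:

        #!/bin/bash
        # Sketch: run the checkout, report the outcome, pause before the terminal closes.
        cd /var/www-cake || exit 1
        if sudo svn checkout file:///usr/local/svn/bash_repo/repo/; then
            echo "Head revision has been pushed to live server"
        else
            echo "svn checkout failed" >&2
        fi
        read -r -p "Press Enter to close this window..."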

    Read the article

  • Open source package for securely allowing users to log in and provide information

    - by JTS
    I have a site written mostly in PHP and HTML. I also have a SQL database of personal information like names and addresses. I would like my users to be able to log in to my website with a login I can email or snail-mail to them, and view and edit their information in my database. Users can currently enter information online and I store it in my database, but they can't view or edit stored information. I can add the code to do this, but once users can view information I suddenly have a lot more security concerns. Is there an open source package for allowing users to do something like this? Or is there an established convention for it? I know this is a pretty basic question, and there might be some good literature about it that I have yet to find, so if someone could just point me in the direction of that information, or better yet share some firsthand knowledge, that would be great.

    Read the article

  • Is there any good reason I would want my website to be framed?

    - by minitech
    I'm building a website that's not security-critical in any way at all, so having somebody put a page in an <iframe> is not particularly dangerous to its users. However, as my website doesn't have script plugins that will be used anywhere else, is there any reason why I shouldn't just apply: X-Frame-Options: Deny to every page on my website? Is there any valid reason for any other website to embed mine? I've seen plenty of content-stealing ones and attempts to hijack user accounts, but never an actual good usage of frames that's not an explicit feature of the website.
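
    For an Apache-hosted site, a hedged sketch of sending that header on every response from the server config or an .htaccess file (assumes mod_headers is enabled; other servers have equivalent directives):

        # Send X-Frame-Options on every response (requires mod_headers).
        Header always set X-Frame-Options "DENY"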

    Read the article

  • How to get rid of crawling errors due to the URL Encoded Slashes (%2F) problem in Apache

    - by user14198
    The Google web crawler has indexed a whole set of URLs with encoded slashes (%2F) for our site. I assume it picked the pages up from our XML sitemap file. The problem is that the live pages actually fail because of the URL-encoded-slashes problem in Apache. Some solutions are mentioned here. We are implementing a 301 redirect scheme for all the error pages, which should make the Googlebot drop the pages from the crawling errors (no more failing pages). Does implementing the 301s require the pages to be "live"? In that case we may be forced to implement solution 1 from the article, but the problem is that solution 1 poses a security vulnerability.
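
    As an untested sketch of what the 301 route might look like on Apache 2.2.18 or later, where NoDecode accepts %2F without the security trade-off of On, and assuming mod_rewrite still sees the encoded slash in the path:

        # Accept %2F in request URLs without decoding them (Apache >= 2.2.18).
        AllowEncodedSlashes NoDecode

        # Permanently redirect any path still containing an encoded slash
        # to its decoded form so Googlebot drops the old URL.
        RewriteEngine On
        RewriteCond %{REQUEST_URI} ^(.*)%2F(.*)$ [NC]
        RewriteRule . %1/%2 [R=301,L]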

    Read the article

  • Is the escaping provided by the Google-Gson library enough to ensure a safe JSON payload?

    - by Lifetime_Learner
    I am currently using the Google-Gson library to convert Java objects into JSON inside a web service. Once the object has been converted to JSON, it is returned to the client to be converted into a JSON object using the JavaScript eval() function. Is the character escaping provided by the Gson library enough to ensure that nothing nasty will happen when I run the eval() function on the JSON payload? Do I need to HTML Encode the Strings in the Java Objects before passing them to the Gson library? Are there any other security concerns that I should be aware of?
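
    Whatever Gson escapes, the usual belt-and-braces step on the client is to parse the payload as data instead of executing it with eval(); a small JavaScript sketch, where xhr stands for an already-completed XMLHttpRequest and someField is a made-up property:

        // Treat the response as data, not code: JSON.parse throws on malformed
        // input instead of executing it the way eval() would.
        var payload = xhr.responseText;
        var obj = JSON.parse(payload);
        console.log(obj.someField);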

    Read the article

  • Why don't smart phones have an auto-forget password feature? [closed]

    - by Kelvin
    Storing passwords to external services (e.g. corporate email servers) on smart phones is very insecure, since phones are more easily stolen. Has any vendor implemented a feature to only cache a password in memory for a limited amount of time? After the time period has elapsed, the app would ask for the password again. EDIT: I should've clarified - I'm aware that many (most?) users are lazy and want to just "set it and forget it". The always-remember feature will probably always be present. I was curious about an option to enable auto-forget for the security-conscious.

    Read the article

  • Is this fix for Avast Antivirus crashing safe to use?

    - by TmRn
    Well, I have installed Avast antivirus on Ubuntu 12.04, but after updating it crashes. So I have made the tweak below:

    Press Ctrl+Alt+T to open the Terminal. When it opens, run the command below:

        sudo gedit /etc/init.d/rcS

    Type your password and hit Enter. When the text file opens, add the line:

        sysctl -w kernel.shmmax=128000000

    Make sure the line you added comes before:

        exec /etc/init.d/rc S

    This is what it should look like:

        #! /bin/sh
        # rcS
        #
        # Call all S??* scripts in /etc/rcS.d/ in numerical/alphabetical order
        #
        sysctl -w kernel.shmmax=128000000
        exec /etc/init.d/rc S

    Save the file and reboot. My question is: did I do anything wrong? I mean, having made this tweak, will it weaken Avast's protection the way a virus would? If you are a programmer, please check whether this contains bugs or harmful intent. Thanks.
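
    As a side note, a less intrusive way to make the same kernel setting stick is usually a sysctl configuration file rather than editing /etc/init.d/rcS; a sketch (the file name is arbitrary):

        # /etc/sysctl.d/60-avast.conf
        # Raise the maximum shared memory segment size, as the Avast workaround requires.
        kernel.shmmax = 128000000

        # Apply immediately without a reboot:
        #   sudo sysctl -p /etc/sysctl.d/60-avast.conf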

    Read the article

  • Drive By Download Issue

    - by mprototype
    I'm getting a drive-by download issue reported on www.cottonsandwichquiltshop.com/catalog/index.php?manufacturers_id=19&sort=2a&filterid=61 by safeweb.norton.com when I scan the root URL. I have dug through the entire site architecture and code base and removed a few files that were malicious. I upgraded the site's framework and fixed the security holes (mostly SQL injection concerns). However, this one threat still exists and I can't locate it for the life of me, or find any valid research or information on removing this type of threat at the server level; mostly I just find anti-virus vendors selling their ability to manage it on the client end. Please help. Thanks.

    Read the article

  • Combining a content management system with ASP.NET

    - by Ek0nomik
    I am going to be creating a site that seems like it requires a blend of a content management system (CMS) and some custom web development (which is done in ASP.NET MVC). I have plenty of web development experience to understand the ASP.NET MVC side of the fence, but, I don't have a lot of CMS knowledge aside from getting one stood up. Right now my biggest question is around integrating security from ASP.NET with the CMS. I currently have an ASP.NET MVC site that handles the authentication for multiple production sites and creates an authentication cookie under our domain (*.example.com). The page acts like a single sign on page since the cookie is a wildcard and can be used in any other applications of the same domain. I'd really like to avoid having users put in their credentials twice. Is there a CMS that will play well with the ASP.NET Forms Authentication given how I have these existing applications structured? As an aside, right now I am leaning towards Drupal, but, that isn't finalized.
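
    For what it's worth, sharing a Forms Authentication cookie across ASP.NET applications generally comes down to the apps agreeing on the cookie name, the parent domain and the machineKey; a hedged web.config sketch with shortened, placeholder keys (a CMS would still need its own way to consume the ticket):

        <system.web>
          <!-- Same cookie name and parent domain in every application. -->
          <authentication mode="Forms">
            <forms name=".EXAMPLEAUTH" domain="example.com"
                   loginUrl="https://login.example.com/" timeout="60" />
          </authentication>
          <!-- Identical machineKey everywhere so each app can decrypt the ticket. -->
          <machineKey validationKey="0123...ABCD" decryptionKey="4567...EF01"
                      validation="SHA1" decryption="AES" />
        </system.web>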

    Read the article

  • Apache: DoS with mod_deflate & range requests, Tomcat also? [migrated]

    - by VextoR
    I know that Apache has a security bug: http://seclists.org/fulldisclosure/2011/Aug/175. If you run this command:

        curl -I -H "Range: bytes=0-1,0-2" -s www.yandex.ru/robots.txt

    and it answers HTTP/1.1 206 Partial Content, the problem exists. The fact is that for Apache Tomcat (our server), curl also says 206 Partial Content, so we need to fix it. I found a solution for Apache HTTP Server (.htaccess, mod_headers) but not for Tomcat. I'm very new to server administration and can't understand most of this, so please help.
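
    For reference, one of the mitigations that circulated for the httpd side of CVE-2011-3192 simply strips the abusive headers with mod_headers; it is shown here only as a sketch and does not apply to Tomcat, which is a separate codebase (a 206 reply to a multi-range request is normal HTTP behaviour, not proof of the bug by itself):

        # Apache httpd only (requires mod_headers); not a Tomcat fix.
        # Note: this also disables legitimate partial/resumable downloads.
        RequestHeader unset Range
        RequestHeader unset Request-Range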

    Read the article

  • Access Token Verification

    - by DecafCoder
    I have spent quite a few days reading up on OAuth and token-based security measures for REST APIs, and I am currently looking at implementing an OAuth-based authentication approach almost exactly like the one described in this post (OAuth alternative for a 2 party system). From what I understand, the token is to be verified upon each request to the resource server. This means the resource server would need to retrieve the token from a datastore to verify the client's token. Given this would have to happen on every request, I am concerned about the speed implications of hitting a datastore like MySQL or NoSQL each time just to verify the token. Is this the standard way to verify tokens, having them stored in an RDBMS or NoSQL database and retrieved upon each request? Or is it a suitable solution to cache them (bearing in mind that we are talking millions of users)?
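
    A minimal sketch of the cache-first pattern in question, in Java with a hypothetical TokenStore interface standing in for the MySQL/NoSQL lookup; nothing here is tied to a particular OAuth library:

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        /** Sketch: verify access tokens from an in-process cache first, hit the datastore only on a miss. */
        class TokenVerifier {
            /** Hypothetical datastore lookup; returns the token's expiry (epoch millis) or null if unknown. */
            interface TokenStore { Long findExpiry(String token); }

            private final ConcurrentMap<String, Long> cache = new ConcurrentHashMap<>();
            private final TokenStore store;

            TokenVerifier(TokenStore store) { this.store = store; }

            boolean isValid(String token) {
                Long expiry = cache.get(token);
                if (expiry == null) {
                    expiry = store.findExpiry(token);   // datastore hit only on a cache miss
                    if (expiry == null) return false;   // token was never issued
                    cache.put(token, expiry);           // warm the cache for later requests
                }
                return expiry > System.currentTimeMillis();
            }
        }

    With millions of users the cache would normally be shared and size-bounded (memcached, Redis, or a bounded LRU) rather than an unbounded map, and revoked tokens need to be evicted, but the shape of the check stays the same.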

    Read the article

  • Need private personal access to ~three PHP pages

    - by Roger
    I would like secure access to the text output by three PHP scripts (the output is JavaScript and HTML). The security level is much less than for financial data, but it is important nonetheless. I have considered purchasing and studying HTTPS and SSL certificates, but Hostgator charges an extra $2/month for a private IP plus $50+ annually for a certificate, which is more than I want to spend on this project (time + money). Is there a simpler solution that is less expensive and easier to implement? I'm open to different approaches.
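
    One cheap option in this spirit is HTTP Basic authentication at the web server; a hedged .htaccess sketch with made-up paths (bear in mind that without HTTPS the password and the page contents still travel in cleartext):

        # .htaccess in the directory holding the three scripts
        AuthType Basic
        AuthName "Private pages"
        # Created beforehand with: htpasswd -c /home/username/.htpasswd roger
        AuthUserFile /home/username/.htpasswd
        Require valid-user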

    Read the article

  • Where can I hire a trustworthy professional PHP programmer?

    - by JJ22
    I wrote a PHP application for my website that really needs to work well and be as secure as possible. I'm a novice PHP programmer, so while my application seems to work well, there may be inefficiencies or security vulnerabilities. I feel that I should have someone look over my code before making the application publicly available, but I'm hesitant to just post it online because it handles some rather sensitive things. Where can I find a competent, trustworthy, and relatively inexpensive PHP programmer who would be willing to review a few thousand lines of well-commented, easy-to-read PHP code? Thank you!

    Read the article

  • What to do about this gnome-keyring message?

    - by arroy_0209
    I upgraded from Ubuntu 10.04 to 12.04 and installed LXDE. Since then, whenever I try to print a file (or use the command lpstat), I get this message in the terminal: "WARNING: gnome-keyring:: couldn't connect to: /tmp/keyring-SZ59jJ/pkcs11: No such file or directory". This is beyond my knowledge, and from searching I only gather that it may be related to security (as learned from gnome-keyring on Wikipedia). I have no idea what to do about this warning. Can anybody please suggest something? Evidently, as stated, I am not using the GNOME desktop; I choose an LXDE session at the time of logging in.

    Read the article

  • A Safe Way to Allow Upload of All File Types?

    - by user34682
    By default WordPress restricts the file types that can be uploaded to /uploads using the default Media Manager. I know it is possible to manually extend the allowed file types. I also know it is possible to change functions.php to allow ALL file types to be uploaded. This restriction obviously exists for security concerns - e.g. someone could upload a harmful .exe Would it not be possible to allow secure upload of all filetypes by setting the permissions of the /uploads directory to prevent execution of any of its contents? Thus it wouldn't matter if someone uploaded a harmful file because it would not be executable on the server...
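
    On an Apache host, a hedged sketch of the usual companion step: an .htaccess inside wp-content/uploads that refuses to run server-side scripts, so an uploaded .php file is served as inert content (syntax shown for Apache 2.2; 2.4 would use Require all denied, and none of this stops a visitor from downloading and running an .exe locally):

        # wp-content/uploads/.htaccess  (sketch)
        # Never hand files in this tree to the PHP engine.
        <FilesMatch "\.(php|php5|phtml)$">
            Order allow,deny
            Deny from all
        </FilesMatch>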

    Read the article

  • Network authentication + roaming home directory - which technology should I look into using?

    - by Brian
    I'm looking into software which provides a user with a single identity across multiple computers. That is, a user should have the same permissions on each computer, and the user should have access to all of his or her files (roaming home directory) on each computer. There seem to be many solutions for this general idea, but I'm trying to determine the best one for me. Here are some details along with requirements:

    The machines are Amazon EC2 instances running Ubuntu, accessed over SSH. Some machines on this LAN may have different uses, but I am only discussing machines for a certain use (running a multi-tenancy platform). The system will not necessarily have a constant number of machines; we may have to permanently or temporarily change how many are running, which is the reason I'm looking into centralized authentication/storage. The implementation needs to be secure. We're unsure whether users will have direct shell access, but their software will potentially be running (under restricted Linux user names, of course) on our systems, which is as good as direct shell access, so let's assume their software could be malicious.

    I have heard of several technologies/combinations to achieve my goal, but I'm unsure of the ramifications of each. An older Server Fault post recommended NFS & NIS, though that combination has security problems according to an old article by Symantec. The article suggests moving to NIS+, but, as it is old, a Wikipedia article cites statements suggesting Sun has been trending away from NIS+. The recommended replacement is another thing I have heard of: LDAP. It looks like LDAP can be used to store user information in a centralized location on the network. NFS would still be needed to cover the roaming home folder requirement, but I see references to the two being used together. Since the Symantec article pointed out security problems in both NIS and NFS, is there software to replace NFS, or should I heed that article's suggestions for locking it down? I'm leaning toward LDAP because another fundamental piece of our architecture, RabbitMQ, has an authentication/authorization plugin for LDAP. RabbitMQ will be accessible in a restricted manner to users on the system, so I would like to tie the security systems together if possible.

    Kerberos is another secure authentication protocol I have heard of. I learned a bit about it some years ago in a cryptography class but don't remember much. I have seen suggestions online that it can be combined with LDAP in several ways. Is this necessary? What are the security risks of LDAP without Kerberos? I also remember Kerberos being used in another piece of software developed by Carnegie Mellon University: the Andrew File System (AFS). OpenAFS is available for use, though its setup seems a bit complicated. At my university, AFS provides both requirements: I can log in to any machine, and my "AFS folder" is always available (at least when I acquire an AFS token).

    Along with suggestions for which path I should look into, does anybody have any guides which were particularly helpful? LDAP looks to be the best choice so far, but I'm particularly interested in the implementation details (Kerberos? NFS?) with respect to security.
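
    Purely as an illustration of the LDAP-plus-NFS shape being discussed, a hedged sketch of what each Ubuntu node might carry; every hostname, base DN and path is a placeholder:

        # /etc/sssd/sssd.conf -- LDAP-backed accounts on every node (sketch)
        [sssd]
        config_file_version = 2
        services = nss, pam
        domains = example

        [domain/example]
        id_provider = ldap
        auth_provider = ldap
        ldap_uri = ldaps://ldap.internal.example.com
        ldap_search_base = dc=example,dc=com
        ldap_tls_reqcert = demand

        # /etc/fstab -- roaming home directories over Kerberized NFSv4 (sketch)
        # nfs.internal.example.com:/export/home  /home  nfs4  sec=krb5p,rw  0  0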

    Read the article

  • WCF client: encrypt messages to a Java WS using username_token with a message protection client policy

    - by Alex
    I am trying to create a WCF client app that consumes a Java WS which uses a username_token with message protection client policy. There is a private key installed on the server, and a public certificate file was exported from the JKS keystore file. I have installed the public key into the certificate store via MMC under Personal certificates. I am trying to create a binding that will encrypt the message and pass the username as part of the payload. I have been researching and trying different configurations for about a day now. I found a similar situation on the MSDN forum: http://social.msdn.microsoft.com/Forums/en/wcf/thread/ce4b1bf5-8357-4e15-beb7-2e71b27d7415

    This is the configuration that I am using in my app.config:

        <customBinding>
          <binding name="certbinding">
            <security authenticationMode="UserNameOverTransport">
              <secureConversationBootstrap />
            </security>
            <httpsTransport requireClientCertificate="true" />
          </binding>
        </customBinding>
        <endpoint address="https://localhost:8443/ZZZService?wsdl" binding="customBinding"
                  bindingConfiguration="cbinding" contract="XXX.YYYPortType" name="ServiceEndPointCfg" />

    And this is the client code that I am using:

        EndpointAddress endpointAddress = new EndpointAddress(url + "?wsdl");
        P6.WCF.Project.ProjectPortTypeClient proxy =
            new P6.WCF.Project.ProjectPortTypeClient("ServiceEndPointCfg", endpointAddress);
        proxy.ClientCredentials.UserName.UserName = UserName;
        proxy.ClientCredentials.ClientCertificate.SetCertificate(StoreLocation.CurrentUser,
            StoreName.My, X509FindType.FindByThumbprint,
            "67 87 ba 28 80 a6 27 f8 01 a6 53 2f 4a 43 3b 47 3e 88 5a c1");
        var projects = proxy.ReadProjects(readProjects);

    This is the .NET client error I get:

        Error Log: Invalid security information.

    On the Java WS side the log shows:

        SEVERE: Encryption is enabled but there is no encrypted key in the request.

    I traced the SOAP headers and payload and confirmed that the encrypted key is not there.

    Headers:

        {expect=[100-continue], content-type=[text/xml; charset=utf-8], connection=[Keep-Alive], host=[localhost:8443], Content-Length=[731], vsdebuggercausalitydata=[uIDPo6hC1kng3ehImoceZNpAjXsAAAAAUBpXWdHrtkSTXPWB7oOvGZwi7MLEYUZKuRTz1XkJ3soACQAA], SOAPAction=[""], Content-Type=[text/xml; charset=utf-8]}

    Payload:

        <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"><s:Header><o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"><o:UsernameToken u:Id="uuid-5809743b-d6e1-41a3-bc7c-66eba0a00998-1"><o:Username>admin</o:Username><o:Password>admin</o:Password></o:UsernameToken></o:Security></s:Header><s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><ReadProjects xmlns="http://xmlns.dev.com/WS/Project/V1"><Field>ObjectId</Field><Filter>Id='WS-Demo'</Filter></ReadProjects></s:Body></s:Envelope>

    I have also tried some other bindings, but with no success:

        <basicHttpBinding>
          <binding name="basicHttp">
            <security mode="TransportWithMessageCredential">
              <message clientCredentialType="Certificate"/>
            </security>
          </binding>
        </basicHttpBinding>

        <wsHttpBinding>
          <binding name="wsBinding">
            <security mode="Message">
              <message clientCredentialType="UserName" negotiateServiceCredential="false" />
            </security>
          </binding>
        </wsHttpBinding>

    Your help will be greatly appreciated! Thanks!

    Read the article

  • Outlook 2007 Backup to D:\Outlook Fails - Access Denied, Write-Protected or File In Use

    - by nicorellius
    I can successfully save the Outlook PST file to the default location on the C drive (C:\Documents and Settings\user\ ... \Outlook) but when I change the backup save to directory to Outlook on the D drive I get the error: Cannot copy Outlook: Access is denied. Make sure the disk is not full or write protected and that the file is not currently in use. I suppose it is not that crucial that I save this file here, but I have never seen this problem before and I have made this same change in the past. I did some searching in this knowledge exchange as well as elsewhere on changing permissions, etc, but this didn't help. I discovered that the folder on my D drive (called Outlook) is not write-protected and nor is it read-only, as I can save to and modify files in that directory, as well as rename and delete the directory itself. At the time when I installed this version of Outlook, I used a previously saved Personal Folder (a backup PST file) and I thought having this still open in Outlook was causing the trouble. But I closed it and still have the same problem. I know this is probably a silly error on my part but I would like to figure it out. I'm new to superuser, but the answers I see are usually very good, so I thought I would post my first question. Thanks in advance.

    Read the article

  • GnuPG PHP gnupg Folder & Files Permission

    - by Michael Robinson
    Situation: we plan on using PHP's GnuPG extension to encrypt/decrypt files. Currently we've set up some test cases using keys generated with GPG. The generated files reside in /Users/username/.gnupg/. I am able to get keyinfo for the key I want to use to encrypt/decrypt, but when I attempt to use addencryptkey, I get:

        (E_WARNING: 2): gnupg::addencryptkey() [gnupg.addencryptkey]: get_key failed

    I think this is due to the permissions on the ~/.gnupg folder and the enclosed files. The files are owned by me (username), but Apache runs as www. A few days ago I did have this working, but it seems each time I use GPG Keychain Access to import or export a key, the folder's permissions are changed. Question: what are the exact permissions required to allow PHP's GnuPG extension to add encrypt and decrypt keys?
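
    GnuPG itself is picky about the keyring directory being mode 700 (files 600) and owned by the user running the process, so one hedged approach is to give the web-server user its own keyring rather than pointing PHP at ~username/.gnupg; the path and the www user name below are assumptions:

        <?php
        // Keyring owned by the web-server user, e.g. created with:
        //   sudo install -d -m 700 -o www /var/www/.gnupg
        // and keys imported into it with: gpg --homedir /var/www/.gnupg --import ...
        putenv('GNUPGHOME=/var/www/.gnupg');

        $gpg = new gnupg();
        $gpg->seterrormode(gnupg::ERROR_EXCEPTION);      // fail loudly instead of E_WARNING
        $gpg->addencryptkey('FINGERPRINT_OF_RECIPIENT'); // placeholder key fingerprint
        $ciphertext = $gpg->encrypt('secret contents');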

    Read the article

  • Reset UAC on a folder after permitting access

    - by Roel Vlemmings
    When I access a folder in Program Files I get a UAC prompt confirming with me that I want to permit access to it. I say "yes" and get permanent access. My question is: how do I reset this folder so that it prompts me again. The reason I ask is that I am testing a .NET program I wrote that deals with directory access permissions and I need to test it with the UAC protection on. But now that I have permitted access manually, I can't figure out how to turn the UAC protection back on to retest my program.
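
    If the earlier click-through worked by adding your account to the folder's ACL, then one hedged way to put the folder back the way it was is to remove that entry again, for example from an elevated command prompt (the folder path and account name are placeholders):

        rem Show the current ACL to confirm your account was added
        icacls "C:\Program Files\MyTestFolder"

        rem Remove the entry that the UAC prompt added for your account
        icacls "C:\Program Files\MyTestFolder" /remove "MYPC\Roel"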

    Read the article

  • Set up IIS 7.5 to deny connections from outside the LAN for a certain folder

    - by Darkcat Studios
    I'm setting up a combined website and extranet at the moment; they both read from the same database on the same server that hosts the site, because the website is fed from the data that staff enter through the extranet interface. The extranet also links to AD to authorise access. I have the extranet in a folder within the website folder. What I want to do is allow the extranet to be accessed only from computers within our LAN, while the main website stays freely accessible to internet users. It is currently set up as a generic web server, so anyone can view anything (well, up to the point where the user is asked to log into the extranet, of course). I have read a lot on this, but nothing I read applies to, or works in, IIS 7.5.
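
    One hedged way to do this in IIS 7.5 is the "IP and Domain Restrictions" feature: install the role service, allow the ipSecurity section to be overridden for the site, and drop a web.config into the extranet folder; the subnet below is a placeholder for the real LAN range:

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <system.webServer>
            <security>
              <!-- Deny everything that is not explicitly listed, then allow the LAN. -->
              <ipSecurity allowUnlisted="false">
                <add ipAddress="192.168.0.0" subnetMask="255.255.255.0" allowed="true" />
              </ipSecurity>
            </security>
          </system.webServer>
        </configuration>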

    Read the article

  • How to access a shared folder between CentOS 6 and WinXP under VMware Player 4

    - by q0987
    Following "Sharing Files between CentOS 6 and Windows Systems with Samba", I am able to get as far as the section "Accessing Samba Shares"; however, my system is WinXP and I don't see a CentOS 6 icon at all. In the left panel of Explorer I only see "My Network Places". For the section "Accessing Windows Shares from CentOS 6", after I click the workgroup I only see a localhost icon, and when I click that icon I get an "unable to mount location" error from CentOS. BTW: I have set up the shared folder within VMware Player 6.0. What else do I have to do in order to make the shared folder work? Thank you
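
    Separately from the Samba route, if the aim is just the folder shared through VMware Player, a hedged sketch of checking it from inside the CentOS guest (assumes VMware Tools is installed in the guest; "shared" is a placeholder share name):

        # List the shares the host is exposing to this guest
        vmware-hgfsclient

        # Mount all host shares under /mnt/hgfs (vmhgfs ships with VMware Tools)
        sudo mkdir -p /mnt/hgfs
        sudo mount -t vmhgfs .host:/ /mnt/hgfs
        ls /mnt/hgfs/shared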

    Read the article

  • Minidump folder cannot be found

    - by Saxtus
    Although I have enabled the creation of minidump files on my system, it appears that Windows either doesn't create them where the Startup and Recovery dialog points to (%SystemRoot%\Minidump) or at least I can't find them. Even the Minidump folder under the Windows directory is missing, and I have had numerous BSODs until now. I've searched all my hard drives for mini*.dmp files, only to find some old ones in a backup folder from my Vista x86 installation, from before I installed Windows 7 x64. Any thoughts on why this is happening and how to fix it? Pre-thanx!

    Read the article

  • Outlook folder structure Template

    - by Filip Ekberg
    Having a lot of different customers and a lot of different areas to work with makes it essential to have your mail folders in order. Every time I get a new project / customer, I want to add a certain folder structure in my "Customer" / "Project" subdirectory. It might look like this:

        Customer_name/
            Bugs
            Documents
            Important
            Support/
                Done

    As it is today I have to add these manually, which is a pain when you have a lot going on, and each subdirectory under the Customer_name directory needs to have "display all items" set, since it's important to me to see all items in Bugs / Support / Important. Makes my life easier. So, is it possible to automate the process somehow? A macro? Folder templates? What are my options?
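
    Since the question mentions macros: a hedged Outlook VBA sketch that stamps out the structure for a new customer under an existing parent folder called "Customers" (the parent name is an assumption, and the "display all items" view setting still has to be applied by hand):

        ' Sketch: create Bugs/Documents/Important/Support/Done under a new customer folder.
        Sub CreateCustomerFolders()
            Dim customerName As String
            customerName = InputBox("Customer / project name:")
            If customerName = "" Then Exit Sub

            Dim root As Outlook.Folder
            Set root = Application.Session.GetDefaultFolder(olFolderInbox).Folders("Customers")

            Dim cust As Outlook.Folder
            Set cust = root.Folders.Add(customerName)
            cust.Folders.Add "Bugs"
            cust.Folders.Add "Documents"
            cust.Folders.Add "Important"
            cust.Folders.Add("Support").Folders.Add "Done"
        End Sub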

    Read the article

< Previous Page | 89 90 91 92 93 94 95 96 97 98 99 100  | Next Page >