Search Results

Search found 6152 results on 247 pages for 'known'.

Page 182/247 | < Previous Page | 178 179 180 181 182 183 184 185 186 187 188 189  | Next Page >

  • Scanning website for vulnerabilities

    - by Kristen
    I have found that the local school's website has a Perl calendar installed - this was years ago, it has not been used for ages, but Google has it indexed (which is how I found it) and it is full of Viagra links and the like... The program was by Matt Kruse; here are the details of the exploit: http://www.securiteam.com/exploits/5IP040A1QI.html

    I've got the school to remove that, but I think they also have MySQL installed, and I'm aware that out of the box there have been some exploits of the admin tools / login in old versions. For all I know they also have phpBB and the like installed... The school is just using some cheap, shared hosting; the HTTP response header I get is:

        Apache/1.3.29 (Unix) (Red-Hat/Linux) Chili!Soft-ASP/3.6.2 mod_ssl/2.8.14 OpenSSL/0.9.6b PHP/4.4.9 FrontPage/5.0.2.2510

    I'm looking for some means of checking whether they have other junk installed (quite possibly from way back, and now unused) that might put the site at risk. I'm more interested in something that can scan for things like the MySQL admin exploit rather than open ports etc. My guess is that they have little control over the hosting space they have - but I'm a Windows dev, so this *nix stuff is all Greek to me.

    I found http://www.beyondsecurity.com/ which looks like it might do what I want (within their evaluation :) ), but I worry about how to find out whether they are well known / honest - otherwise I will be tipping them a wink with a domain name that may be at risk! Many thanks.
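    Not part of the original question, but one way to do the kind of check being asked about is a web-application scanner such as Nikto, which looks for known-vulnerable scripts, admin consoles and outdated server components rather than just open ports. A minimal sketch (the hostname is a placeholder):

        # Scan the site for known vulnerable CGI scripts, admin consoles and
        # outdated server software (the hostname is a placeholder).
        nikto -h school-website.example.org -p 80

        # Repeat against HTTPS if the host serves it.
        nikto -h school-website.example.org -p 443 -ssl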

    Read the article

  • How to use the correct SSH private key?

    - by Dail
    I have a private key at /home/myuser/.ssh/privateKey. I have a problem connecting to the SSH server, because I always get: Permission denied (publickey). I tried to debug the problem and found that ssh is reading the wrong file. Take a look at the output:

        [damiano@Damiano-PC .ssh]$ ssh -v root@vps1
        OpenSSH_5.8p2, OpenSSL 1.0.0g-fips 18 Jan 2012
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for vps1
        debug1: Applying options for *
        debug1: Connecting to 111.111.111.111 [111.111.111.111] port 2000.
        debug1: Connection established.
        debug1: identity file /home/damiano/.ssh/id_rsa type -1
        debug1: identity file /home/damiano/.ssh/id_rsa-cert type -1
        debug1: identity file /home/damiano/.ssh/id_dsa type -1
        debug1: identity file /home/damiano/.ssh/id_dsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p1 Debian-7ubuntu1
        debug1: match: OpenSSH_5.8p1 Debian-7ubuntu1 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.8
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA 74:8f:87:fe:b8:25:85:02:d4:b6:5e:03:08:d0:9f:4e
        debug1: Host '[111.111.111.111]:2000' is known and matches the RSA host key.
        debug1: Found key in /home/damiano/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /home/damiano/.ssh/id_rsa
        debug1: Trying private key: /home/damiano/.ssh/id_dsa
        debug1: No more authentication methods to try.

    As you can see, ssh is trying to read /home/damiano/.ssh/id_rsa, but I don't have this file - I named it differently. How can I tell SSH to use the correct private key file? Thanks!
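    For reference, two standard ways to point OpenSSH at a non-default key; a short sketch using the paths and port from the post (the config keywords are stock OpenSSH):

        # One-off: pass the key and port explicitly on the command line.
        ssh -i /home/myuser/.ssh/privateKey -p 2000 root@vps1

        # Permanent: a host block in ~/.ssh/config makes plain "ssh vps1" work.
        #
        #     Host vps1
        #         HostName 111.111.111.111
        #         Port 2000
        #         User root
        #         IdentityFile /home/myuser/.ssh/privateKey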

    Read the article

  • How to edit known_hosts when several hosts share the same IP and DNS name?

    - by Frédéric Grosshans
    I regularly ssh into a computer which is a dual-boot OS X / Linux machine. The two OS instances do not share the same host key, so they can be seen as two hosts sharing the same IP and DNS name. Let's say the IP is 192.168.0.9, and the names are hostname and hostname.domainname.

    As far as I understand, the solution that lets me connect to both hosts is to add both keys to the ~/.ssh/known_hosts file. However, that is easier said than done, because the file is hashed and probably has several entries per host (192.168.0.9, hostname, hostname.domainname). As a consequence, I get the following warning:

        Warning: the ECDSA host key for 'hostname' differs from the key for the IP address '192.168.0.9'

    Is there an easy way to edit the known_hosts file while keeping the hashes? For example, how can I find the lines corresponding to a given hostname? How can I generate the hashes for some known hosts?

    The ideal solution would allow me to connect seamlessly to this computer with ssh, no matter whether I call it 192.168.0.9, hostname or hostname.domainname, and no matter whether it uses its Linux host key or its OS X host key. However, I still want to receive a warning if there is a real man-in-the-middle attack, i.e. if a key other than these two is used.
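    For reference, OpenSSH ships tools that can search and edit a hashed known_hosts file directly; a short sketch using the names from the question:

        # Find the hashed entries that match a given name or address.
        ssh-keygen -F hostname
        ssh-keygen -F 192.168.0.9

        # Remove every entry for a name/address (a backup goes to known_hosts.old).
        ssh-keygen -R hostname

        # Fetch a host's current keys and append them as hashed entries.
        ssh-keyscan -H 192.168.0.9 hostname hostname.domainname >> ~/.ssh/known_hosts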

    Read the article

  • (Zywall USG 300) NAT bypassed when accessing in-house server from LAN via domain name

    - by mschr
    My situation is like this: I host a number of websites from within our joint network solution. The network basically has three categories:

        - the known public machines, registered via MAC, given a static DHCP lease
        - the anonymous LAN connections, given a lease from a specific DHCP range
        - switches, Unix hosts, firewall

    Now, consider the following hosts, which are the ones of interest:

        111.111.111.111 (Zywall USG 300 WAN)
        192.168.1.1     (Zywall USG 300 LAN) load balances and monitors bandwidth, plus handles NAT
        192.168.1.2     (Linux www) serves mydomain1.tld and mydomain2.tld
        192.168.123.123 (random LAN client) accesses mydomain1.tld from the LAN
        23.234.12.253   (random external client) accesses mydomain1.tld via the WAN

    DNS A records are set up so that both mydomain1.tld and mydomain2.tld point to 111.111.111.111, and the Linux www host serves the HTTP part with VirtualHost configurations, setting up the document roots per ServerName - this is not so interesting, though. A NAT rule translates 111.111.111.111:80 to 192.168.1.2:80 (1:1 NAT).

    Our problem follows: when accessing http://mydomain1.tld from outside the joint network (the 23.234.12.253 example host), everything is fine - the Zywall receives the request on port 80 and maps it to the Linux host's httpd. However, when trying to go through the NAT from the LAN side (in-house, the 192.168.123.123 example host), the request gets filtered by the Zywall's port 80 firewall. I know this only because port 443 is open for the administration interface, and https://mydomain1.tld prompts for the Zywall login. So my conclusion is that LAN clients accessing 111.111.111.111 are in fact routed to 192.168.1.1 while bypassing the NAT table. I need to know how to set up NAT / policy routes so that LAN -> WAN -> LAN works with proper network translation, instead of doing the 'quick nameserver lookup' or whatever this might be.
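    Not from the original post: this reads like the usual NAT loopback (hairpin NAT) situation, and a common workaround when the firewall can't be configured for it is split DNS - let LAN clients resolve the public names straight to the web server's LAN address. A minimal sketch assuming an internal resolver running dnsmasq (the file path is the stock one; the addresses are those from the post):

        # On an internal dnsmasq resolver: answer the two public names with the
        # web server's LAN address and forward everything else upstream as usual.
        echo 'address=/mydomain1.tld/192.168.1.2' | sudo tee -a /etc/dnsmasq.conf
        echo 'address=/mydomain2.tld/192.168.1.2' | sudo tee -a /etc/dnsmasq.conf
        sudo service dnsmasq restart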

    Read the article

  • How would I setup iMail to forward a user's mail to another service w/o leaving a copy locally?

    - by Scott Mayfield
    I have an iMail 2006 server installation in which I have a particular user that has several aliases that all point to a single user (me, for the record). I've been copying all of my mail to GMail and reading it there, but it annoys me that I have to go back weekly and log into my mail account on iMail and delete between 6 and 10 thousand copies of messages I've already received, in order to keep my mailbox from filling up (yes, I have it set with no quota, but I consider it bad form to just let the box grow indefinitely). I've got the copying setup via an inbound user rule, but I'm wondering how to accomplish a "copy and delete" rule. The manual isn't clear on what happens with multiple matching rules (will they be processed in order, or is it a first match situation?) and there isn't a means to combine multiple actions into a single rule. If I use the "forward" action, I THINK that it's going to screw up all the sender information once the mail reaches my GMail account and show it as coming from me instead of the original senders (can anyone confirm that this is accurate?) An easy answer would be to delete my user account entirely, replace it with an alias that maps to my GMail account, but then I would lose my ability to log into the system for admin duties. So that leads me to creating a second, lesser known account for admin use, but since it's a real account, sooner or later I'm going to get mail sent to it and I'll be back to the same situation of having a user account that doesn't get emptied periodically. I imagine I can set the quota to 0 MB to cause all incoming mail to my admin account to bounce, or setup an inbound rule to bounce everything, but this is starting to sound kludgy to me. Does anyone know of a more direct work around to copying a user's incoming mail to an outside server and then deleting the local copy w/o removing their account entirely? Or is this just wishful thinking?

    Read the article

  • How to move my data from my old MacBook Pro to my new one?

    - by Tim Büthe
    I just purchased a new MacBook Pro and already have a 2008 model. I wonder how to move all my data over to the new one.

    My first idea was to use my Time Machine backup and restore from it, which seems like a good idea and should work just fine according to this link: http://blog.duncandavidson.com/2008/01/restoring-from-time-machine.html. But since my current MacBook has older software on it, like iLife '08 instead of iLife '09, I would have to upgrade that afterwards. Is this correct, or does Time Machine do some magic to exclude well-known software? And is it possible to reinstall or upgrade iLife with the included installation DVDs?

    My second idea is to just swap the hard drives instead of using the Time Machine backup. If it is not too complicated to remove the HDD, this should be the fastest way. It also has the benefit that the 2008 MacBook then contains a brand-new installation, and I don't have to remove all my stuff or reinstall Mac OS before I give it away. My question on that second idea is: does Snow Leopard handle this correctly - can I reboot with the new hardware and everything just works?

    So in a nutshell: what would you do, restore from backup or swap drives? And what about the new software?

    Read the article

  • Jenkins CI - Cannot allocate memory

    - by Programmieraffe
    I tested jenkins-ci successfully on Ubuntu 10.04 (with VMware Fusion) on my local computer. Now I want to install and use it on my virtual server at Hosteurope. The basic installation was no problem, but now I have problems with my build project. After pulling a Mercurial update from a repository, Ant is invoked and throws the following error in my build project:

        Buildfile: /var/lib/jenkins/workspace/concrete5-seed-clean/build.xml
        [property] java.io.IOException: Cannot run program "/usr/bin/env": java.io.IOException: error=12, Cannot allocate memory

    There is a known problem with heap size on virtual servers at Hosteurope (http://faq.hosteurope.de/index.php?cpid=13918), so I tried to set the heap size manually:

        # for ant
        export ANT_OPTS="-Xms512m -Xmx512m"

        # jenkins: edited /etc/default/jenkins, added the line
        JAVA_ARGS="-Xms512m -Xmx512m"
        # restarted jenkins via /etc/init.d/jenkins restart

    After setting this for Ant, the command "ant -diagnostics" runs through and does not cause an error, but the error still occurs when I try to build the project.

    Server details (http://www.hosteurope.de/produkt/Virtual-Server-Linux-L): Ubuntu 10.04 LTS, RAM: 1 GB / dynamic 2 GB.

    My questions: Is 1 GB enough for Jenkins, or do I have to upgrade the server? Is this error caused by Ant or by Jenkins?

    Update: I got it running with the Ant options -Xmx128m -Xms128m, but sometimes the error occurs again (this freaks me out, because I cannot reproduce it for now :/ ). Help much appreciated! Cheers, Matthias
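    For context (not from the thread): error=12 on "Cannot run program" usually means fork() failed because the child process could not be granted memory, not that the Ant heap is too small. A sketch of the usual checks and mitigations, assuming the virtualisation on this VPS allows swap and sysctl changes at all:

        # How much memory and swap does the VPS actually have free?
        free -m

        # If there is no swap, adding some usually stops fork()/error=12 failures
        # when the Jenkins/Ant JVM spawns children such as /usr/bin/env.
        sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
        sudo chmod 600 /swapfile
        sudo mkswap /swapfile
        sudo swapon /swapfile

        # Alternatively, let the kernel overcommit so fork() from a large JVM
        # does not need the whole heap backed at once.
        sudo sysctl vm.overcommit_memory=1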

    Read the article

  • "Server Unavailable" and removed permissions on .NET sites after Windows Update

    - by tags2k
    Our company has five almost identical Windows 2003 servers with the same host, and all but one performed an automatic Windows Update last night without issue. The one that had problems, of course, was the one which hosts the majority of our sites. What the update appeared to do was cause the NETWORK user to stop having access to the .NET Framework 2.0 files, as the event log was complaining about not being able to open System.Web. This resulted in every .NET site on the server returning "Server Unavailable" as the App Domains failed to initialise.

    I ran aspnet_regiis, which didn't appear to fix the problem, so I ran FileMon, which revealed that nobody but the Administrators group had access to any files in any of the website folders! After resetting the permissions, things appear to be fine.

    I was wondering if anyone had an idea of what could have caused this to go wrong? As I say, the four other servers updated without a problem. Are there any known issues involved with any of the following updates? My main suspect at the moment is the 3.5 update, as all of the sites on the server are running on 3.5.

        Windows Server 2003 Update Rollup for ActiveX Killbits for Windows Server 2003 (KB960715)
        Windows Server 2003 Security Update for Internet Explorer 7 for Windows Server 2003 (KB960714)
        Windows Server 2003 Microsoft .NET Framework 3.5 Family Update (KB959209) x86
        Windows Server 2003 Security Update for Windows Server 2003 (KB958687)

    Thanks for any light you can shed on this.

    Read the article

  • How to fix Quicktime video color problems on nVidia chipset and Windows XP?

    - by Matthew Glidden
    My laptop frequently plays video as if in very low-color mode. Though the sound remains clear, it looks terrible, showing only a few shades of red, blue, or yellow. (It's even worse than 8-bit color.) The problem doesn't happen consistently, so I'm looking for troubleshooting advice or known solutions.

    I use a Dell Latitude D620 laptop with Windows XP, an on-board nVidia video chipset, Quicktime, and multiple monitors (laptop screen + VGA-connected LCD). Color problems happen in every application I tried: iTunes, a browser, and the Quicktime standalone player. It doesn't happen right after reboot, so it could be from a sleep-wake cycle, or at least from being on for an extended period. Google results suggest reinstalling the nVidia drivers, which I've done several times with no change.

    I have found two workarounds:

        1. Reboot, sacrificing significant time and disrupting work.
        2. In the nVidia control panel, change color to 16-bit, and then back to 32-bit.

    This happens with all video playback, so it's definitely not one corrupt file. I use workaround #2 consistently, but would love a longer-term solution.

    Read the article

  • wget is working only when used with sudo

    - by Yusuf
    I've been seeing quite strange behavior from wget since yesterday. I can download files using sudo wget, but when I try the same file with plain wget, I get this error:

        yusufh@ubuntu-yuh:~$ wget http://www.kegel.com/wine/winetricks
        --2010-12-17 09:34:11--  http://www.kegel.com/wine/winetricks
        Resolving www.kegel.com... failed: Name or service not known.
        wget: unable to resolve host address `www.kegel.com'

    and with sudo wget:

        yusufh@ubuntu-yuh:~$ sudo wget http://www.kegel.com/wine/winetricks
        --2010-12-17 09:35:37--  http://www.kegel.com/wine/winetricks
        Connecting to 127.0.0.1:5865... connected.
        Proxy request sent, awaiting response... 200 OK
        Length: 190672 (186K) [text/plain]
        Saving to: `winetricks'
        100%[==========================================>] 190,672  --.-K/s   in 0.03s
        2010-12-17 09:35:37 (6.92 MB/s) - `winetricks' saved [190672/190672]

    After the comments below, here is an update: I can use Google Chrome or Firefox perfectly without running them as root. I use ntlmaps to connect to the office proxy, so I need to use 127.0.0.1:5865 as the proxy for clients. The result of env | grep -i proxy is:

        NO_PROXY=localhost,127.0.0.0/8,*.local,
        http_proxy=127.0.0.1:5865
        ftp_proxy=127.0.0.1:5865
        all_proxy=socks://127.0.0.1:5865/
        ALL_PROXY=socks://127.0.0.1:5865/
        https_proxy=127.0.0.1:5865
        no_proxy=localhost,127.0.0.0/8,*.local

    while sudo env | grep -i proxy is empty! HELP!
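    A small sketch of one thing worth checking here (not from the original thread): wget only honours the *_proxy variables when they parse cleanly, so spelling the scheme out, or forcing the proxy per invocation, quickly shows whether the environment is the culprit. The proxy address is the one from the post:

        # Spell the scheme out; wget can be picky about *_proxy values without it.
        export http_proxy="http://127.0.0.1:5865"
        export https_proxy="http://127.0.0.1:5865"
        wget http://www.kegel.com/wine/winetricks

        # Or force the proxy for a single run without touching the environment.
        wget -e use_proxy=yes -e http_proxy=127.0.0.1:5865 http://www.kegel.com/wine/winetricks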

    Read the article

  • Setting up Live @ EDU

    - by user73721
    [PROBLEM] Hello everyone. I have a small issue here. We are trying to get our Exchange accounts (students only) ported over from an Exchange Server 2003 installation to the Microsoft cloud service known as Live @ EDU. The problem is that in order to do this we need to install two pieces of software:

        1. OLSync
        2. Microsoft Identity Lifecycle Manager

    "Download the Galsync.msi here" - the "here" link takes you to a page that needs a login for a Live @ EDU admin account. That part works. However, once logged in, it redirects to a page (https://connect.microsoft.com/site185/Downloads/DownloadDetails.aspx?DownloadID=26407) that states:

        Page Not Found
        The content that you requested cannot be found or you do not have permission to view it. If you believe you have reached this page in error, click the Help link at the top of the page to report the issue and include this ID in your e-mail: afa16bf4-3df0-437c-893a-8005f978c96c

    [WHAT I NEED] I need to download that file. Does anyone know of an alternative location for that installation file? I also need to obtain Identity Lifecycle Manager (ILM) 2007, Feature Pack 1 (FP1). If anyone has any helpful information, that would be fantastic! Also, if anyone has completed a migration of accounts from an on-site Exchange 2003 server to the Microsoft Live @ EDU servers, any general guidance would be helpful! Thanks in advance.

    Read the article

  • zsh auto-complete event designator

    - by simont
    (See my previous question for additional context.) I'm migrating to zsh from bash, and using oh-my-zsh. When my zsh history looks something like the following:

        git status
        git add -A
        git commit

    I want to be able to re-run git add -A. To do that, I could use !?git add, which should:

        !?str[?]
        Refer to the most recent command containing str. The trailing '?' is necessary if this reference is to be followed by a modifier or followed by any text that is not to be considered part of str.

    The link for zsh event designators is here. Unfortunately, I can't do this: as I'm typing !?git add, when I hit the space it auto-completes the command to the most recent command matching git (i.e. it auto-completes with git commit). I can't use the event designator properly because of this auto-completion on space. I assume this is an oh-my-zsh feature, but I have no idea where to look - grepping for 'complet' in the oh-my-zsh source doesn't get me anywhere.

    My question: how do I turn off this feature? Or, if that's not known, where should I be looking? If I were going to implement this auto-complete-on-whitespace behaviour, where would be a logical place to do so in the oh-my-zsh framework?
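    A hedged sketch of where to look (not from the original question): oh-my-zsh typically binds the space key to zsh's magic-space widget, which performs history expansion as you type; if that turns out to be the binding here, rebinding space restores the stock behaviour:

        # What is the space key bound to?  oh-my-zsh usually binds it to the
        # magic-space widget, which does history expansion as you type.
        bindkey ' '

        # In ~/.zshrc, after oh-my-zsh is sourced: make space insert a plain
        # space again, so !?git add is only expanded when the line is run.
        bindkey ' ' self-insert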

    Read the article

  • Samba4/Ubuntu Shares Incorrectly Available to All Users

    - by Dan
    I've got my Ubuntu server working with Samba4 and set up as the primary domain controller on my network, with AD and all that goodness. However, I'm trying to get my Samba configuration to work with the users and groups I've defined with the Active Directory tools from Windows.

    For instance, I've got a share X which I want users A and B (as part of the 'management' group, known as LLGrpManager in my setup) to see, but nobody else. However, after making changes to the configuration and restarting Samba, I test by connecting to the share with my Mac over Samba as user 'C', which isn't part of the management group, and I can, incorrectly, see the X share. I've tried all sorts of combinations of specifying the group with no luck at all. I've got a feeling that my global config might be too lenient, or that it is something to do with file permissions, but being a bit green, I'm without a clue.

    My /etc/samba/smb.conf:

        # Global parameters
        [global]
        server role = domain controller
        server string = Office Server
        workgroup = LLDOMAIN
        realm = lldomain.local
        netbios name = DUMBO
        passdb backend = samba4
        logon path = \\%L\profiles\%U
        logon drive = L:
        log file = /var/log/samba/%m.log
        max log size = 50
        security = ads
        domain logons = yes
        domain master = auto
        usershare allow guests = no
        valid users = %S

        [netlogon]
        path = /var/lib/samba/sysvol/lldomain.local/scripts
        read only = no
        guest ok = no

        [sysvol]
        path = /var/lib/samba/sysvol
        read only = No
        guest ok = no
        valid users = @LLDOMAIN\LLGrpManager

        [ShareX]
        path = /data
        comment = Entire Data Volume
        guest ok = no
        comment = Entire Data Volume
        guest ok = no
        valid users = @LLDOMAIN\LLGrpManager
        admin users = @LLDOMAIN\LLGrpManager
        browsable = no
        inherit acls = yes
        inherit permissions = yes
        ...

    My /etc/nsswitch.conf - I've also instructed the system to use the nss winbind library when searching for users or groups by adding the passwd and group stanzas in /etc/nsswitch.conf:

        passwd: compat winbind
        group:  compat winbind
        shadow: compat

    Permissions on the folder in question:

        drwxrwxrwt 8 root root 4.0K Oct 28 19:11 data
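    A few sanity checks that are often useful in this situation (not from the original post; the share, group and user names are the ones quoted above):

        # Does winbind actually see the AD group and its members?
        wbinfo -g | grep -i LLGrpManager
        getent group 'LLDOMAIN\LLGrpManager'   # name format depends on the winbind separator settings

        # Which groups does the test user end up with on the Samba side?
        id 'LLDOMAIN\C'

        # Try the share as that user from the server itself.
        smbclient //localhost/ShareX -U 'LLDOMAIN\C'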

    Read the article

  • 13" MacBook Pro with Win 7 and External VGA gets 640x480

    - by Jim McKeeth
    I have a brand new 13" MacBook Pro - 2.26 GHz with the NVIDIA 9400M video card. I installed Windows 7 (final) in Boot Camp and booted up into Windows 7. I installed all the drivers from the Apple disk and it was working great. Then I attached the external VGA adapter (from Apple) to connect to a projector and it dropped down to 640x480 resolution. No matter what I did, it wouldn't let me change to a higher resolution while the external VGA was connected. Once it is disconnected, it goes back to the normal resolution. If I am booted into Snow Leopard, it works fine. I tried updating the NVIDIA drivers and it behaved exactly the same.

    Ultimately I want to get 1024x768 or better resolution when connected to an external display. If it isn't fixable, then I am curious whether anyone else has seen this, whether it is a known issue, and who to contact for support (Apple, Microsoft or NVIDIA?).

    Update: Just attaching the Mini-DVI to VGA adapter kicks it into 640x480; no projector is required. I tried forcing the display driver from Generic PnP Monitor to one that supported 1024x768, and that didn't work either.

    Read the article

  • How do I setup routing for 2 companies with different Internet connections on the same LAN?

    - by Clint Miller
    Here's the setup: two companies (A & B) share office space and a LAN. A second ISP is brought in, and company A wants its own Internet connection (ISP A) while company B wants its own Internet connection (ISP B). VLANs are deployed internally to separate the two companies' networks (company A: VLAN 1, company B: VLAN 2, shared VoIP: VLAN 3).

    With separate VLANs it's simple enough to use separate DHCP servers (or separate scopes on the same server) to hand each company's gateway for its Internet connection out as the default gateway. Static routes can be created on each gateway to point traffic destined for the other company's VLAN or the voice VLAN, so that all nodes are reachable as expected. However, I think this is a form of asymmetrical routing, right? (The path from node A1 to node B1 is not the same as the path back from node B1 to node A1.)

    Can I set up policy-based routing to correct this? In that case, can I assign the same default gateway to every device on all VLANs and create a routing policy on an L3 switch that looks at the source address and forwards traffic to the appropriate next hop? I want the routing logic to go like this:

        - If the destination address is known, forward the traffic (traffic destined for a different VLAN).
        - If the destination address is unknown, forward the traffic to ISP A's gateway if the source address is on VLAN 1, or to ISP B's gateway if the source address is on VLAN 2.

    Am I thinking about this problem in the correct way? Is there another way to solve this that I am overlooking?
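    The question is about an L3 switch, but the same rule ordering is easy to show with Linux policy routing; a sketch with illustrative table names, placeholder gateway addresses and assumed per-VLAN subnets:

        # Name two extra routing tables, one per company (names are illustrative).
        echo '101 ispa' | sudo tee -a /etc/iproute2/rt_tables
        echo '102 ispb' | sudo tee -a /etc/iproute2/rt_tables

        # Each table gets its own default gateway (placeholder addresses).
        sudo ip route add default via 203.0.113.1 table ispa
        sudo ip route add default via 198.51.100.1 table ispb

        # Rule order matters: keep inter-VLAN and VoIP destinations on the normal
        # (main) table, then send the remaining Internet-bound traffic out by
        # source subnet (assumed subnets for VLAN 1 and VLAN 2).
        sudo ip rule add to 192.168.0.0/16 lookup main priority 100
        sudo ip rule add from 192.168.1.0/24 lookup ispa priority 200
        sudo ip rule add from 192.168.2.0/24 lookup ispb priority 210

        # Inspect the resulting rule list.
        ip rule show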

    Read the article

  • Sandra reports my CPU as "Engineering Sample", how can I be sure this is correct?

    - by stevenvh
    I ran SiSoftware's Sandra on my new PC, and for my CPU it reports: Generation : G8 / T29 Name : TN0 (Trinity) FX/Opteron 32nm (ES) Revision/Stepping : 0 : 10 / 1 Stepping Mask : TN-A1 Microcode : MU6F10010F The (ES) is a well-known code in product development, meaning "Engineering Sample". Those are beta versions of the CPU, which still may contain some bugs, or even have features switches off. I contacted both the PC's manufacturer Medion as well as AMD about this. I had to downvote the Medion helpdesk here. The person I talked to boldly said Sandra was wrong (without knowing how Sandra got this information; he didn't even know the software), and used the word "impossible". His conclusion was "We’re not taking this in consideration for service”. Right. So, if you like Medion for their good prices, but like good support even better, you may consider buying your PC elsewhere. AMD was more helpful, but wanted to be sure before replacing the part (which I find reasonable). They suggested that I dismount the cooler from the CPU to check what was printed on it to be sure. I'm a bit reluctant here: I would have to wipe the thermal paste from the CPU, and won't know for sure my cooling will still be OK afterwards. Questions Has anybody actually found a confirmed ES CPU in her PC? Is anybody aware of Sandra erroneously reporting CPUs as Engineering Samples? How can you tell an ES, apart from the print on the package? Shouldn't Stepping Mask identify the CPU uniquely?

    Read the article

  • Is there a way in Windows 7 to disable "journaling"?

    - by Psycogeek
    C:\$extend\$Usn.Jrnl:$J:$data

    Here is a picture, finally. The large strip in the center of the top band is the largest chunk; the other grey areas are the various clusters that go with it. On the right, the big long grey line is $logfile (not paging), and it is 63 MB. Paging (500 MB) is the dark cyan chunk, next to the yellow MFTres in the inner rings. The disk was defragged so they could be seen more easily. Not all clusters of this type of file are tagged, but the idea is there. The disk has 4k clusters and is now about 12 GB in size. Each cute little block in the picture is 0.81 MB and represents 207 clusters. The dark green section is mostly the whole WinSxS pile - also interesting when they keep telling us it doesn't take much disk space.

    Wikipedia suggests that in previous NT systems "USN journaling" would be turned on when enabled (which assumes it could also be turned off?). What aspect, service, or program is putting that stuff all over the disk in clusters tagged $jrnl$, even if it is not actual USN journaling? Is it possible on a Windows 7 system to completely disable the journaling, and what would be the ramifications of that?

    On a Windows XP NTFS system, I do not recall seeing this quantity of disk clusters with these $jrnl$ names, so I do not recall this being necessary in this quantity for an NTFS file system itself. I understand that it would not be there if it did not have a useful function :-) Information about how wonderful it is, is fine, if that information will help track down which parts of the system create and use it.

    The Change Journals documentation states: "Change journals are also needed to recover file system indexing." Hmm, that might explain some of them, or why it was left on the disk. A crash while background indexing?

    Read the article

  • Have an unprivileged non-account user ssh into another box?

    - by Daniel Quinn
    I know how to get a user to ssh into another box with a key:

        ssh -l targetuser -i path/to/key targethost

    But what about non-account users like apache? As this user doesn't have a home directory it can write a .ssh directory to, the whole thing keeps failing with:

        $ sudo -u apache ssh -o StrictHostKeyChecking=no -l targetuser -i path/to/key targethost
        Could not create directory '/var/www/.ssh'.
        Warning: Permanently added '<hostname>' (RSA) to the list of known hosts.
        Permission denied (publickey).

    I've tried variations using -o UserKnownHostsFile=/dev/null and setting $HOME to /dev/null, and none of these have done the trick. I understand that sudo could probably fix this for me, but I'm trying to avoid requiring manual server config, since this code will be deployed in a number of different environments. Any ideas? Here are a few examples of what I've tried that don't work:

        $ sudo -u apache export HOME=path/to/apache/writable/dir/ ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost

        $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost

        $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l deploy -i path/to/key targethost

    Eventually, I'll be using this solution to run rsync as the apache user.
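    One pattern that usually works for this (a sketch, not the poster's final solution): give the ssh process a writable HOME for the duration of the command with env, and point the known-hosts bookkeeping at the same place. The /var/lib/apache-ssh path is an arbitrary example:

        # One-time setup: a directory the apache user can write its ssh state to.
        # (/var/lib/apache-ssh is an arbitrary example path.)
        sudo mkdir -p /var/lib/apache-ssh && sudo chown apache /var/lib/apache-ssh

        # Run ssh as apache with HOME pointed at that directory, and keep the
        # host-key bookkeeping there instead of /var/www/.ssh.
        sudo -u apache env HOME=/var/lib/apache-ssh \
            ssh -o UserKnownHostsFile=/var/lib/apache-ssh/known_hosts \
                -o StrictHostKeyChecking=no \
                -l targetuser -i path/to/key targethost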

    Read the article

  • "Server Unavailable" and removed permissions on .NET sites after Windows Update

    - by tags2k
    Our company has five almost identical Windows 2003 servers with the same host, and all but one performed an automatic Windows Update last night without issue. The one that had problems, of course, was the one which hosts the majority of our sites. What the update appeared to do was cause the NETWORK user to stop having access to the .NET Framework 2.0 files, as the event log was complaining about not being able to open System.Web. This resulted in every .NET site on the server returning "Server Unavailable" as the App Domains failed to be initialise. I ran aspnet_regiis which didn't appear to fix the problem, so I ran FileMon which revealed that nobody but the Administrators group had access to any files in any of the website folders! After resetting the permissions, things appear to be fine. I was wondering if anyone had an idea of what could have caused this to go wrong? As I say, the four other servers updated without a problem. Are there any known issues involved with any of the following updates? My major suspect at the moment is the 3.5 update as all of the sites on the server are running in 3.5. Windows Server 2003 Update Rollup for ActiveX Killbits for Windows Server 2003 (KB960715) Windows Server 2003 Security Update for Internet Explorer 7 for Windows Server 2003 (KB960714) Windows Server 2003 Microsoft .NET Framework 3.5 Family Update (KB959209) x86 Windows Server 2003 Security Update for Windows Server 2003 (KB958687) Thanks for any light you can shed on this.

    Read the article

  • I receive email not addressed to me - virus?

    - by Anne
    Every once in a while I receive email (on Gmail) that isn't addressed to me. Gmail puts it in the spam box, because it 'can't verify that it has been sent by [sender]'. The emails, when opened, contain confidential information about deliveries and paid bills (it does look an awful lot like 'real' mail from well-known companies, and it doesn't look like a scam, since the mail is informative - they give information instead of asking for credit card numbers ;-)), and I even got an email from "Facebook" that I requested a password change and that I have to 'click here' to change the password for [email address that isn't mine]. I am not the only addressee, there seems to be a whole list of Gmail addresses beginning with 'a'. The original addressee obviously has some sort of virus, and now I wonder if this could be a risk for me too. Is my email being sent around without my knowing too? I am not the kind of person who randomly clicks on shady links - I am very careful on the internet - but maybe there are other ways of catching viruses? Is there something I should do/check? Thank you for your help!

    Read the article

  • Windows 7 won't boot from any bootloader except for Windows Boot Manager after partition resize

    - by user2468327
    I have a triple boot system on a single SSD: OSX, Windows 7, and Ubuntu. I use Chimera (basically another version of Chameleon) as my bootloader. Usually I can boot all 3 OSs without any issue, but after using GParted to make my Ubuntu partition 2 Gigs larger, Windows 7 throws me an error when trying to boot to it from either Chimera or Grub. The error is consistently: `0xc000000e can't find \Boot\BCD" (slightly paraphrased). However, I can still get into Windows by selecting Windows Boot Manager from the boot options in my BIOS. I've already tried several known fixes for similar issues, including bootrec /rebuildbcd (and variations), and BootRec.exe/fixMBR + BootRec.exe/fixBoot. I've also tried Chkdsk. At best this has made it so Windows 7 boots on its own by default (making me have to reinstall Chimera and change back my boot settings in the BIOS). At worst this made it so Windows won't boot period. Now I'm back full circle where I started. A detail that might be useful is that bootrec /rebuildbcd says that the number of found Windows installations is 0. I'm fairly certain that I don't have a hybrid MBR. Mainly because I have a UEFI BIOS, and with that, it appears each OS can support a GPT. So it would kind of pointless to have and deal with. I may be wrong though, I couldn't find any way of finding out for sure online. However, I know for sure that the version of Windows I have installed is the UEFI version, as well as every partition tool I've used to look at my boot drive tells me it's GPT. How do I get it back so I can boot Windows 7 through another bootloader so I don't have to manually select it in the BIOS? Preferably without a reinstall.

    Read the article

  • get-eventlog issue

    - by Jim B
    I wanted to get a quick report of some log entries I saw on a server, so I ran:

        Get-Eventlog -logname system -newest 10 -computer fs1 | fl

    I got events back, however the descriptions were all wrong. Here's an example:

        Index              : 1260055
        EntryType          : Warning
        InstanceId         : 2186936367
        Message            : The description for Event ID '-2108030929' in Source 'W32Time' cannot be found. The local computer may not have the necessary registry information or message DLL files to display the message, or you may not have permission to access them. The following information is part of the event: 'time.windows.com,0x1'
        Category           : (0)
        CategoryNumber     : 0
        ReplacementStrings : {time.windows.com,0x1}
        Source             : W32Time
        TimeGenerated      : 1/25/2010 10:43:31 AM
        TimeWritten        : 1/25/2010 10:43:31 AM
        UserName           :

    Note that if I pull the event ID property, it's correct (in this case 38). Is this a known issue, or is something wrong? The messages resolve fine via Event Viewer, both locally and remotely.

    Here is the PowerShell version info:

        Name             : ConsoleHost
        Version          : 2.0
        InstanceId       : bc58fcf8-bba3-4ca8-8972-17dbd5d9ff08
        UI               : System.Management.Automation.Internal.Host.InternalHostUserInterface
        CurrentCulture   : en-US
        CurrentUICulture : en-US
        PrivateData      : Microsoft.PowerShell.ConsoleHost+ConsoleColorProxy
        IsRunspacePushed : False
        Runspace         : System.Management.Automation.Runspaces.LocalRunspace

    Here is the revised version info:

        Name                       Value
        ----                       -----
        CLRVersion                 2.0.50727.3603
        BuildVersion               6.0.6002.18111
        PSVersion                  2.0
        WSManStackVersion          2.0
        PSCompatibleVersions       {1.0, 2.0}
        SerializationVersion       1.1.0.1
        PSRemotingProtocolVersion  2.1

    Read the article

  • IIS 7.0 - responses throttled to 500ms blocks?

    - by Julia Hayward
    Scenario: an ASP.NET MVC web app sitting on my local machine (Vista Ultimate, IIS 7.0), nothing going on except one user (me) logged in and viewing an index page. The page includes 9 dynamic images drawn from the underlying DB and returned from a controller action. I have got the actual processing time for these images down to 15ms each.

    Turn on Firebug and watch the page load. What I see is 9 requests for images firing off together – no surprise – but four come back to me almost immediately; two more after 0.5s; another after 1s; then at 1.5s and 2s. Logging on the server side suggests the individual responses are still only taking 15ms. So it appears IIS is queueing things up into 500ms chunks. (Repeating the experiment produces different results, but each time the images return in similar blocks – you might get three in the first group, then three at 0.5s, two at 1s etc., for example – and it's always at 500ms intervals, not anything else.) It's also repeatable cross-browser, and it's not repeatable with other forms of content.

    I haven't found any particular mention of this problem out there, so I'm sort of assuming it's not an IIS bug. So is it:

        i) IIS on desktop OSs deliberately doing this, to make you use server OSs in production?
        ii) some magical setting that has eluded me for as long as I've known IIS?
        iii) something peculiar to MVC or SQL Server 2008? Or something else?

    Read the article

  • Splitting build across the network?

    - by Dandikas
    Is there a known solution for splitting the build process across networked machines?

    Use case: we are an average software development company. We own around 50 development workstations (quad core 2.66 GHz, 4 GB RAM, 200 GB RAID). No need to say that at any given moment not every machine is loaded to the max. There are 5 to 15 projects running simultaneously at any given moment. Obviously, all of them are continuously built on a server, then deployed to the proper environment. A single project build takes from 3 to 15 minutes.

    The problem: whenever we build 5 projects in a row, the last project is going to be ready after around 25-50 minutes. Building in parallel does not solve the problem (the build is only part of the game; then you need to deploy, run tests, etc.). YES, the correct solution is to add another build server, but "that involves buying new, expensive hardware, and we already spent a lot!" Yeah, right (damn them)!

    Anyway, what about splitting the build among developer workstations? Let's say whenever we need to build project "A", we check 5 workstations and start the build on all that are not overloaded. The build can be cancelled by a developer if he really needs all the power of his machine, as long as there is at least one machine still building. After the build is finished, deployment can be performed to the proper environment (hosted on some server, not on a workstation :) ). The bigger the company, the more this makes sense to me. Has anyone tried something like this? Are there any good practices? Any helpful software?
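    If the slow part of these builds is C/C++ compilation, distcc is one known way to borrow idle workstation cores; a minimal sketch with placeholder addresses (it does not help with deployment or test phases):

        # On each developer workstation: run the distcc daemon, restricted to the
        # office subnet (placeholder range), with a modest job limit.
        distccd --daemon --allow 192.168.0.0/24 --jobs 4

        # On the build server: list the helpers and fan the compile out to them.
        export DISTCC_HOSTS="localhost 192.168.0.11 192.168.0.12 192.168.0.13"
        make -j16 CC="distcc gcc" CXX="distcc g++"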

    Read the article

  • TFTP Timing Out on Ubuntu VM

    - by valsidalv
    I'm running a Windows 7 PC with VMware installed, which hosts my Ubuntu VM (10.04 Lucid Lynx). I recently installed a DHCP server and TFTP (xinetd tftpd) using these instructions. I've mapped a network drive so that Windows has access to all the files in my VM through a 192.x.x.x IP address.

    I'm trying to put some custom firmware onto a router. The router has its own built-in TFTP utility that will download the image. It successfully manages to do everything, but it is slow because it writes the image to flash memory. There is another method that is much quicker because it writes to RAM directly, but it must use the TFTP server in Ubuntu. The issue I'm facing is that the Ubuntu TFTP transfer seems to be timing out. The transfer starts but never goes past ~60%.

    Here's my /etc/xinetd.d/tftp file (similar to a known working config):

        service tftp
        {
            protocol    = udp
            port        = 69
            socket_type = dgram
            wait        = yes
            user        = nobody
            server      = /usr/sbin/in.tftpd
            server_args = -s /home/user/tftp/
            disable     = no
            cps         = 300 2
            per_source  = 60
        }

    I've done some searching but can't find any parameters in this file to control the timeout or the number of retries. The last two arguments (cps, per_source) are completely alien to me (can anyone explain them?). I have a few possible solutions, but the easiest would be to get this TFTP server working. Can anyone help - either with a timeout configuration, or maybe even by recommending a different TFTP server? Thanks!
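    For what it's worth, cps and per_source are xinetd rate limits (connections per second, and concurrent instances per source IP), not TFTP timeouts. A couple of things worth trying (a sketch, not from the original thread): test the transfer locally to rule out the network path, and - if in.tftpd here is the tftpd-hpa build - disable blksize option negotiation, which some router TFTP clients handle badly. The firmware filename is a placeholder:

        # Pull the image on the server itself first (tftp-hpa client syntax;
        # firmware.bin is a placeholder filename).
        tftp 127.0.0.1 -c get firmware.bin

        # If /usr/sbin/in.tftpd is tftpd-hpa, try disabling blksize negotiation
        # by changing server_args in /etc/xinetd.d/tftp to:
        #     server_args = -s /home/user/tftp/ -r blksize
        # then restart xinetd so the change takes effect:
        sudo /etc/init.d/xinetd restart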

    Read the article
