Search Results

Search found 16218 results on 649 pages for 'compiler errors'.


  • How to handle sh: fetch: command not found

    - by Tyler Johnson
    Okay, I'm a newbie. I know how to build and compose a website, but I have no idea what I'm doing when it comes to servers, server commands, etc. I've recently had a problem with all of my sites on our servers going down all at once, and then I have to go in and reboot the server for them to come up again. At first this was annoying, but now it is becoming agonizing, as it now takes 3-4 reboots for the websites to come back up.

    I contacted support for my hosting, but they are not being very helpful. They just keep telling me what the issue might be and basically telling me that I'm going to have to look into it and figure it out, which really isn't possible since I know nothing. Anyway, here are the things they said were possible reasons:

      - They said I have "strange logs" in my Apache webserver log: error: sh: fetch: command not found
      - My php.ini memory limit is 256M, which is very high. It should be 32M or 64M.
      - The server is reaching MaxClients, meaning we have more than 150 visitors at a time. (They supposedly "fixed" this, but the sites/server are still going down.)
      - I have some WordPress sites with plugins getting errors like:

            PHP Warning: pack(): Type H: illegal hex digit G in...
            PHP Fatal error: Cannot use object of type stdClass as array in...
            PHP Fatal error: Maximum execution time of 30 seconds exceeded in...
            PHP Fatal error: Call to undefined function file_exists() in...
            PHP Parse error: syntax error, unexpected '<'

    I know that's a lot, but I really am at wit's end and have no idea what to do now. If anyone could give me some advice or point me in the right direction I would greatly appreciate it! Thanks!

    Oh, and here are the specs for my server:

        RAM: 2048MB
        CPU Shares: 40
        Primary Disk: 50GB
        Data Transfer: 75GB
        Port Speed: 5Mbps
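
    For context: fetch is the standard download utility on FreeBSD, so on a Linux box "sh: fetch: command not found" usually means some script (possibly injected) is shelling out to it. A minimal way to hunt for the culprit, assuming the sites live under /var/www (adjust the path for your host):

        # look for PHP code that shells out and mentions 'fetch'
        grep -rni --include='*.php' 'fetch ' /var/www | grep -Ei 'exec|system|passthru|popen'

        # also check scheduled jobs that might call it
        crontab -l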


  • 2011 i7 Macbook Pro unable to boot from any Windows CD?

    - by Craig Otis
    I'm encountering issues installing Windows alongside my Lion install. I'm attempting to install from the internal SuperDrive, after using Boot Camp to partition what was a single HFS+ volume. When holding down Option at boot, the CD appears in the startup list, but upon selecting it, I get a gray screen for 5 minutes, then a flashing white folder.

    I tried installing rEFIt and using it to boot the CD, but I receive an error about "Not Found" being returned from "LocateDevicePath", and a mention of the firmware not supporting booting using legacy methods. In the Console, when opening the Startup Disk preference pane (which never presents the CD as a selectable option), I see:

        11/25/11 4:39:31.159 PM System Preferences: isCDROM: 0 isDVDROM:1
        11/25/11 4:39:31.159 PM System Preferences: mountable disk appeared: /Volumes/GRMCPRFRER_EN_DVD
        11/25/11 4:39:33.214 PM System Preferences: - So far so good, passing disk to System Searcher.
        11/25/11 4:39:33.218 PM System Preferences: OSXCheck: No boot.efi in System Folder or volume root.
        11/25/11 4:39:33.220 PM System Preferences: WinCheck: Not a valid windows filesystem: /Volumes/GRMCPRFRER_EN_DVD
        11/25/11 4:39:33.220 PM System Preferences: WinCheck: Not a valid windows filesystem: /Volumes/GRMCPRFRER_EN_DVD

    I'm at a loss here. I've done my research, but it sounds like most rEFIt errors of this nature are caused by installing from a thumbdrive or an external drive; I'm using the internal SuperDrive. Also, I've tried this with two different disks:

      - A Windows XP SP2 CD
      - A Windows 7 x86 DVD

    Both are disks I've had around for years, and I've used them reliably in the past. The system is an early 2011 15" MacBook Pro with all firmware updates installed.


  • Port 5357 TCP on Windows 7 professional 64 bit?

    - by Registered
    Is there a reason this port is open? A quick Nmap scan and a Nessus scan both reveal it's open — why? Are there any ramifications if I close this port via the firewall rule set? Or does anyone here know more about this port beyond what Google turns up?

    1) http://www.symantec.com/connect/blogs/who-left-tunnel-door-open-windows-firewall-vista-0 — I know the talk is about Vista, but I am pretty sure it's the same port on 7 also.

    2) Port 5357 common errors: the port is vulnerable to info-leak problems, allowing it to be accessed remotely by malicious authors. (Web Services for Devices)

    I am blocking this; if I have issues I will just re-enable it. Damn Windows. The rule in question is "Inbound rule for Network Discovery to allow WSDAPI Events via Function Discovery. [TCP 5357]". You just got blocked — until I break something, we'll see. Time to re-run Nmap and Nessus. The Nmap scan shows 0 open ports after closing port 5357, and Win7 still works for now; one more scan with Nessus just to make sure all is well.
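
    Port 5357 is WSDAPI (Web Services on Devices), used by Network Discovery. If you'd rather script the block than click through the firewall UI, a sketch of the equivalent rule (the rule name is my own label):

        netsh advfirewall firewall add rule name="Block WSD TCP 5357" dir=in action=block protocol=TCP localport=5357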


  • Access to File being restricted after Ubuntu crashed

    - by Tim
    My Ubuntu 8.10 crashed due to a CPU overheating problem while I had a directory open and was about to do some file transfers in Nautilus. After rebooting, under GNOME, none of the files can be removed, their properties cannot be viewed, and they can only be opened — although everything is still fine from a terminal. I was wondering why that is and how I can fix it? Thanks and regards.

    Update:

        $ cat /etc/mtab
        /dev/sda7 / ext3 rw,relatime,errors=remount-ro 0 0
        tmpfs /lib/init/rw tmpfs rw,nosuid,mode=0755 0 0
        /proc /proc proc rw,noexec,nosuid,nodev 0 0
        sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
        varrun /var/run tmpfs rw,nosuid,mode=0755 0 0
        varlock /var/lock tmpfs rw,noexec,nosuid,nodev,mode=1777 0 0
        udev /dev tmpfs rw,mode=0755 0 0
        tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
        devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=620 0 0
        fusectl /sys/fs/fuse/connections fusectl rw 0 0
        lrm /lib/modules/2.6.27-15-generic/volatile tmpfs rw,mode=755 0 0
        /dev/sda8 /home ext3 rw,relatime 0 0
        /dev/sda2 /windows-c vfat rw,utf8,umask=007,gid=46 0 0
        /dev/sda5 /windows-d fuseblk rw,allow_other,blksize=4096 0 0
        securityfs /sys/kernel/security securityfs rw 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0
        gvfs-fuse-daemon /home/tim/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=tim 0 0


  • What is the ideal way to set up multiple FTP enabled web accounts on Fedora?

    - by Nicholas Flynt
    I'm setting up a test server for use as a web development platform, and I'd like to mimic as closely as I can a typical shared hosting setup. That is, I'd like my server to have multiple user FTP accounts, each of which links to a directory containing the webroot of a site, and I'd like Apache to be able to easily see and manipulate these files.

    I'll admit I'm not as familiar with Fedora as I'd like — I run Ubuntu on my home box — and SELinux is giving me some grief. My initial plan was to have each user FTP into their home directory and put the web directory there as well, but SELinux throws a hissy fit when Apache tries to access anything outside of its web directory, so that plan was a no-go. Would it be wise to continue down this route, and perhaps mount web directories in user home folders so that FTP could still be used to access them, even though Apache sees them in /var/www like it expects? Would it make more sense to set up custom FTP accounts and use a single FTP user on the server box? What's the general course of action on something like this?

    I'm using vsftpd right now to host web directories, which is why I like the home directory approach (it's simple and secure), but of course there's bound to be a better way to go about it. Thanks. (I'll leave other things, like restricted DB access and such, to another post. I'm interested right now in just getting FTP and Apache to play nice in a multi-user environment.)

    PS: For the record, an issue I ran into when doing all of this was that if Apache isn't running as the same user the FTP account is saving as, there are permission errors when FTP creates files, requiring the remote user to chmod the files to fix it. A logical fix would be to run Apache in a special group, put all web users in this group, and have FTP access default to giving this group read/write access to everything, like Apache would expect — but I never could figure out how to accomplish this. Bonus points and cake if you know a solution (see the sketch below).
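
    One plausible way to wire up the shared-group idea from the PS — a sketch, assuming the webroots live under /var/www; the group name webdev and user someftpuser are mine:

        # shared group so Apache and the FTP users can both write
        groupadd webdev
        usermod -aG webdev apache            # Apache's user on Fedora
        usermod -aG webdev someftpuser
        chgrp -R webdev /var/www
        chmod -R g+rwX /var/www
        find /var/www -type d -exec chmod g+s {} \;   # new files inherit the group

        # in /etc/vsftpd/vsftpd.conf, make uploads group-writable:
        #   local_umask=002

        # and label the content so SELinux lets httpd at it
        chcon -R -t httpd_sys_content_t /var/www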


  • Users removing Administrator from files/folders permissions

    - by user64204
    We're running Windows Server 2003 R2 with Active Directory and are having an issue with network shares whereby users, in an attempt to secure their documents, remove everybody (including the Administrator account) from their files'/folders' permissions. Since the Administrator no longer has read permission on them, we can't even back up files manually, as we get permission errors.

    One solution that we've found is to change the owner of the files and directories to the Administrator account. We can then change the permissions as we wish. The problem is that this has to be done manually, so it can't really be applied to an entire share. Another solution that we've tried is to use cacls as follows:

        cacls d:\path\to\share /C /T /E /G Administrator:F

    The problem with this is that we're still getting an ACCESS DENIED error on files/folders from which Administrator was removed.

    Q1: Is there a way to restore at least read access to all files/folders to the Administrator account in a recursive fashion? That would be for the short term.

    For the long term we're looking for a solution to prevent users from removing Administrator from files/folders permissions. Since we're going to migrate to Windows Server 2008 R2 soon, we could wait until we've migrated to implement such a solution if need be.

    Q2: Is there a way to prevent users from removing Administrator from files/folders permissions on Windows Server 2003/2008?
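
    For Q1: cacls fails because you can't edit an ACL you aren't allowed to read, but an administrator can always seize ownership first. A sketch (takeown and icacls ship with Server 2003 SP2 and later; the path is illustrative):

        takeown /F d:\path\to\share /R /D Y
        icacls d:\path\to\share /grant Administrators:(OI)(CI)F /T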


  • Windows 7 BSOD when changing power plan

    - by dd5
    I have a strange problem. When I want to change the power plan on my laptop from High performance to Balanced, Windows freezes and I get a BSOD. The power plan settings are all default. Laptop specs:

      - Intel Core i3 330M/350M
      - Intel® HM55 Express Chipset
      - DDR3 1066 MHz SDRAM 8GB
      - ATI Mobility™ Radeon HD5730 1GB DDR3 VRAM
      - Intel SSD 330 128GB
      - Windows 7 Home Premium

    I've searched the internets but couldn't find a similar issue. The BSODs first started when I installed this SSD and stopped when I updated the chipset controller driver, then started again yesterday when I wanted to change the power settings plan. Minidump file here.

    Edit: I've run the Memory Diagnostic tool and the Intel SSD diagnostics, and updated the firmware to 3.2.1. None of these steps worked or showed signs of errors, but I still got a BSOD when changing power plan settings. After analyzing the dump file via osronline.com, here are the first few lines:

        CRITICAL_OBJECT_TERMINATION (f4)
        A process or thread crucial to system operation has unexpectedly exited or been
        terminated. Several processes and threads are necessary for the operation of the
        system; when they are terminated (for any reason), the system can no longer function.
        Arguments:
        Arg1: 0000000000000003, Process
        Arg2: fffffa8008661b30, Terminating object
        Arg3: fffffa8008661e10, Process image file name
        Arg4: fffff800033de270, Explanatory message (ascii)

    -- Solution --
    Provided by Vinayak: after installing Intel Rapid Storage Technology from MajorGeeks, I haven't experienced a BSOD since. Thank you :)


  • Windows 7 boots to black screen with blinking cursor

    - by murgatroid99
    I have an Alienware M17x that dual-boots Ubuntu 11.04 and Windows 7 Home Premium. Currently, the computer starts at the GRUB loader and will boot into Ubuntu, but if I try to boot into Windows, I immediately get a black screen with a blinking cursor in the upper left corner. The output of fdisk -l is:

        Device Boot      Start    End       Blocks     Id  System
        /dev/dm-0p1          1       5         40131    de  Dell Utility
        /dev/dm-0p2          6    1918      15360000     7  HPFS/NTFS
        /dev/dm-0p3  *    1918   64772     504878877+    7  HPFS/NTFS
        /dev/dm-0p4      64772   77827     104858625     5  Extended
        /dev/dm-0p5      64772   67204      19531008    83  Linux
        /dev/dm-0p6      67204   74498      58593536    83  Linux
        /dev/dm-0p7      74498   77577      24731648    83  Linux
        /dev/dm-0p8      77578   77827       2000128    82  Linux swap / Solaris

    (fdisk also warns, for each of partitions 1-4, that the partition does not start on a physical sector boundary.)

    I have used the Windows rescue CD and run the automatic error fixer until it finds no errors. I have run chkdsk /R on both the main Windows 7 partition (/dev/dm-0p3) and the recovery partition (/dev/dm-0p2). I set the main Windows 7 partition to be active. I also tried running these commands in the recovery console:

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd

    None of these helped, and the last set of commands deletes GRUB, which I then have to reinstall from Ubuntu. I think the last thing I did in Windows before this started was install the newest ATI driver for my video card. This would suggest using System Restore, and I actually had a restore point earlier (after the problem started), but after whatever I did, that restore point no longer appears in the list on the recovery disk, so I cannot do a System Restore. Is there anything else I can try to make Windows boot properly again?

    Edit: Running the suggested commands

        bootsect /nt60 c:
        bcdboot c:\windows /s c:

    was also ineffective.


  • IE6 does not follow 302 redirect - displays 404 instead

    - by Dexter
    One of our clients has reported that she is experiencing 404 (file not found) errors when attempting to navigate a website that we support. The behaviour only appears to affect her — other users on the same machine can navigate the website fine, but the problem follows her from one PC to another.

    I've had a good look through the IIS server logs and have identified the requests in question. The normal request pattern is as follows:

        POST /page.aspx - 80 - ... 401 1 0
        POST /page.aspx - 80 DOMAIN/user ... 302 0 0
        GET /anotherPage.aspx Request=833f80a5-f34c-4b0e-addb-d73e1ee1663a 80 - ... 401 1 0
        GET /anotherPage.aspx Request=833f80a5-f34c-4b0e-addb-d73e1ee1663a 80 DOMAIN/user ... 200 0

    However, requests for the affected user do not include a request for the redirected page, nor an entry for the 404, i.e.:

        POST /page.aspx - 80 - ... 401 1 0
        POST /page.aspx - 80 DOMAIN/user ... 302 0 0
        ... other unrelated requests

    Can anyone suggest what might trigger this behaviour, and how I might investigate the cause or prevent it from occurring? I read here that the "Allow META refresh" option in IE6 might trigger this behaviour, but I have not been able to replicate it by modifying that setting alone.
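
    One way to see exactly what the server hands back for that 302, independent of IE6's handling, is to replay the request from the command line — a sketch, assuming curl with NTLM support and with the form field and host names as placeholders:

        # replay the POST as the affected user and dump the response headers
        curl -i --ntlm -u 'DOMAIN\user' -d 'field=value' http://yourserver/page.aspx

    Comparing the Location header and cookies she receives against a working user's response should narrow down whether the server or the client is misbehaving.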


  • Postfix cannot deliver mail to Cyrus mailbox on Ubuntu 11.10 server

    - by user105804
    I have installed and configured Postfix and Cyrus IMAP server with web-cyradm according to this document: http://www.delouw.ch/linux/Postfix-Cyrus-Web-cyradm-HOWTO/html/index.html . I can access the webcyradm interface, I can create new domains and new users, and I can log in via IMAP after creating a user account. However, Postfix fails to deliver mail to the Cyrus mailboxes; the mail log contains the errors shown below.

    Installing any IMAP server other than Cyrus is not an option because it is needed by the web application. Please advise me how to make Postfix deliver email to Cyrus mailboxes. The solution need not involve web-cyradm, but there should be a web interface for managing mail domains and mailboxes that is as user-friendly as possible.

        Dec 30 22:46:17 acer-tower cyrus/lmtpunix[4865]: accepted connection
        Dec 30 22:46:17 acer-tower cyrus/lmtpunix[4865]: lmtp connection preauth'd as postman
        Dec 30 22:46:17 acer-tower postfix/cleanup[4868]: 065D5240035: message-id=<[email protected]>
        Dec 30 22:46:17 acer-tower cyrus/lmtpunix[4865]: verify_user(user.imap0001) failed: Mailbox does not exist
        Dec 30 22:46:17 acer-tower postfix/bounce[4867]: 6C6CA24185C: sender non-delivery notification: 065D5240035
        Dec 30 22:46:17 acer-tower postfix/qmgr[4833]: 065D5240035: from=<>, size=3372, nrcpt=1 (queue active)
        Dec 30 22:46:17 acer-tower postfix/qmgr[4833]: 6C6CA24185C: removed
        Dec 30 22:46:17 acer-tower postfix/lmtp[4866]: 53421240372: to=<[email protected]>, orig_to=<[email protected]>, relay=home.webshop-software.ch[/tmp/lmtp], delay=165, delays=165/0.02/0.17/0.09, dsn=5.1.1, status=bounced (host home.webshop-software.ch[/tmp/lmtp] said: 550-Mailbox unknown. Either there is no mailbox associated with this 550-name or you do not have authorization to see it. 550 5.1.1 User unknown (in reply to RCPT TO command))
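
    The key line is "verify_user(user.imap0001) failed: Mailbox does not exist": Postfix reaches Cyrus over LMTP fine, but Cyrus has no mailbox by that name. A quick way to check, and if needed create it by hand — a sketch assuming the admin account is cyrus and taking the mailbox name from the log:

        cyradm --user cyrus localhost
        localhost> lm user.*            # list existing mailboxes
        localhost> cm user.imap0001     # create the missing one

    If mail then delivers, the real bug is that webcyradm created the account without creating the corresponding Cyrus mailbox.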


  • Prevent SSL certificate being returned for a specific domain

    - by jezmck
    Apologies for a long question: We've taken on a new client whose web hosting was previously on their in-house server, which still has their Exchange/Outlook email. We now host their domain (and many others) on our server. They're complaining that they're getting errors in Outlook. I don't understand the AutoDiscover stuff at the root of the problem, but believe that I just need to stop the SSL certificate on our server being returned when requested for a particular domain:

        Yes it is, the issue lies with "{newclient}.com" being pointed to your server IP and that
        server has Port 443 open with an SSL certificate associated to it. So when Outlook/ActiveSync
        use autodiscover to find the mailbox settings it finds your SSL (because 443 is open) and
        flags it as an error. The solution is to close 443 so it's not discovered. Autodiscover will
        then proceed to mail.{newclient}.com via the MX / service records and discover the correct SSL.

    I'm new here and there was no hand-over, so I don't know whether other currently hosted sites need to accept SSL connections, though I suspect some will, or may in future. This is a live server, so I can't risk trying loads of options in case I take the server offline! I feel like I should be adding something like the following to vhosts.conf:

        <VirtualHost *:443>
            ServerName {newclient}.com
            ServerAlias www.{newclient}.com
            SSLEngine Off
            SSLCertificateFile {NONE}
            SSLCertificateKeyFile {NONE}
        </VirtualHost>

    Apologies for the fact that I don't know enough about this subject to be able to ask the question more clearly!
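
    For what it's worth, a vhost with SSLEngine Off needs no certificate directives at all, so the {NONE} lines can simply be dropped — a minimal sketch of that shape (the 404 Redirect is my own addition; port 443 still answers, but with a plain-HTTP protocol error instead of the wrong certificate, which is what makes Autodiscover's probe fail fast):

        <VirtualHost *:443>
            ServerName {newclient}.com
            ServerAlias www.{newclient}.com
            SSLEngine Off
            Redirect 404 /
        </VirtualHost>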


  • WebHost Manager - Apache's VHost isn't matching the DNS entry

    - by Trans
    I've used CPanel's WebHost Manager to create a new host on my VPS. I then used my HOSTS file to point fake.com to the relevant IP address. The problem I'm having now is that Apache isn't recognizing the VHost, or something, as it's just loading the default entry and 404'ing every document I try to GET. Here's the VHost entry:

        NameVirtualHost 0.0.0.209:80
        NameVirtualHost 0.0.0.211:80

        <VirtualHost 0.0.0.209:80>
            ServerName fake.com
            ServerAlias www.fake.com
            DocumentRoot /home/fakecom/public_html
            ServerAdmin [email protected]
            ## User fakecom # Needed for Cpanel::ApacheConf
            <IfModule mod_suphp.c>
                suPHP_UserGroup fakecom fakecom
            </IfModule>
            <IfModule !mod_disable_suexec.c>
                SuexecUserGroup fakecom fakecom
            </IfModule>
            CustomLog /usr/local/apache/domlogs/fake.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
            CustomLog /usr/local/apache/domlogs/fake.com combined
            Options -ExecCGI -Includes
            RemoveHandler cgi-script .cgi .pl .plx .ppl .perl
            ScriptAlias /cgi-bin/ /home/fakecom/public_html/cgi-bin/
        </VirtualHost>

    I've Googled this profusely, and all that's being returned is 'DNS errors. Wait for it to propagate'. That's obviously not my problem, since I'm using HOSTS. What else could be causing this? :/

    EDIT: Forgot to mention. http://fake.com/~fakecom/test.html loads just fine, so fake.com is pointing to the right IP.
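
    A quick sanity check is to ask Apache how it actually parsed the vhosts — if fake.com doesn't appear under 0.0.0.209:80 in this output, the entry isn't being loaded at all. A sketch, assuming cPanel's usual Apache location:

        /usr/local/apache/bin/apachectl -S    # dumps the parsed VirtualHost table

    It's also worth confirming that 0.0.0.209 is the IP your HOSTS file points at; a request arriving on the .211 address would fall through to the default vhost exactly as described.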


  • Is this "cache administrator" error my server's problem?

    - by Eoin
    Hey, I have a CentOS VPS running Apache with a phpBB installation. One specific user has received errors when posting a message or logging in to the forum. The issue has arisen in parallel with installing nginx, which serves only the static files of my site — not sure if this is only coincidence. Furthermore, my setup uses redirects (in some cases, double redirects) to point the user to a different virtual folder. So, the forum is seen to be at /translation/ but the actual files are found in /phpbb/. I'm at a loss as to what may be the underlying issue. My server? The person's ISP? She has tested both at home and at work, with similar issues.

        While trying to process the request:

        GET /phpbb/index.php?sid=f62c927e7eb8f1d60a92dcc6fd918112 HTTP/1.1
        Host: www.irishgaelictranslator.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-za
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://www.irishgaelictranslator.com/phpbb/ucp.php?mode=login
        Cookie: phpbb3_cipi4_u=96645; phpbb3_cipi4_k=; phpbb3_cipi4_sid=f62c927e7eb8f1d60a92dcc6fd918112; __utma=153470688.1232378553.1294664234.1294664234.1294664234.1; __utmb=153470688.9.10.1294664234; __utmc=153470688; __utmz=153470688.1294664235.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); style_cookie=null

        The following error was encountered:

            Invalid Response

        The HTTP Response message received from the contacted server could not be understood
        or was otherwise malformed. Please contact the site operator. Your cache administrator
        may be able to provide you with more details about the exact nature of the problem if needed.


  • Server getting overloaded

    - by taras
    Hi, I have two setups: Tomcat 5.5.20 on Red Hat, and MySQL 4.1.22 on another Red Hat server. Recently my webserver started getting overloaded, up to 80-90%. After checking, I found errors repeating every second in catalina.out. Could they cause the server overload, or where else might the root of the problem be?

    catalina.out:

        DBCP object created 2010-12-22 13:33:12 by the following code was never closed:
        java.lang.Exception
            at org.apache.tomcat.dbcp.dbcp.AbandonedTrace.init(AbandonedTrace.java:96)

    I have to restart Tomcat once a day, when the server load reaches 80-90%. The catalina.out file is also growing so fast that I need to clear the logs every few hours. My datasource config:

        <bean id="myDataSource" class="org.apache.tomcat.dbcp.dbcp.BasicDataSource" destroy-method="close">
            <property name="driverClassName">
                <value>com.mysql.jdbc.Driver</value>
            </property>
            <property name="url">
                <value>jdbc:mysql://XXX/XXX?autoReconnect=true</value>
            </property>
            <!-- two more properties, both with value 20, whose names were lost in the post -->
            <property name="maxIdle">
                <value>50</value>
            </property>
            <property name="maxActive">
                <value>50</value>
            </property>
            <property name="removeAbandoned">
                <value>false</value>
            </property>
            <property name="removeAbandonedTimeout">
                <value>2400</value>
            </property>
            <property name="username">
                <value>XXX</value>
            </property>
            <property name="password">
                <value>XXX</value>
            </property>
        </bean>

    Thanks for any direction.
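
    The AbandonedTrace warning means connections are being opened and never closed, so the pool keeps filling until the server chokes. The real fix is closing connections in the application code, but DBCP can reclaim leaked connections in the meantime — a sketch of the relevant properties (values are illustrative):

        <property name="removeAbandoned">
            <value>true</value>
        </property>
        <!-- reclaim a connection after 60s of inactivity instead of 2400s -->
        <property name="removeAbandonedTimeout">
            <value>60</value>
        </property>
        <!-- log the stack trace of the code that leaked it, to find the culprit -->
        <property name="logAbandoned">
            <value>true</value>
        </property>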


  • GIT : I keep having to merge my new branch

    - by mnml
    Hi, I have created a new branch and I'm working on it with other devs, but for some reason, when I want to push my new commits, I always have to run git merge origin/mynewbranch first. Otherwise I get these errors:

        ! [rejected]        mynewbranch -> mynewbranch (non-fast-forward)
        error: failed to push some refs to '[email protected]/repo.git'
        To prevent you from losing history, non-fast-forward updates were rejected
        Merge the remote changes before pushing again.  See the 'Note about
        fast-forwards' section of 'git push --help' for details.

    and, when pulling:

        You asked me to pull without telling me which branch you want to merge with,
        and 'branch.mynewbranch.merge' in your configuration file does not tell me, either.
        Please specify which branch you want to use on the command line and try again
        (e.g. 'git pull <repository> <refspec>').  See git-pull(1) for details.

        If you often merge with the same branch, you may want to use something like
        the following in your configuration file:

            [branch "mynewbranch"]
            remote = <nickname>
            merge = <remote-ref>

            [remote "<nickname>"]
            url = <url>
            fetch = <refspec>

        See git-config(1) for details.

    Why is it not automatic? Thanks
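
    The rejected push just means the remote branch has commits you haven't merged yet, and the pull complaint means the branch has no upstream configured, so git doesn't know what to merge by default. A sketch of the usual fix, assuming the remote is called origin:

        git config branch.mynewbranch.remote origin
        git config branch.mynewbranch.merge refs/heads/mynewbranch

        # after which a plain 'git pull' merges origin/mynewbranch automatically,
        # and the push succeeds once you're up to date:
        git pull
        git push origin mynewbranch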


  • Folder redirection GPO doesn't seem to be working

    - by user57999
    I've been trying to set up roaming profiles and folder redirection, but have hit a bit of a snag with the latter. This is exactly what I've done so far (I have OU permissions and GPO permissions over my division's OU):

      1. Created a group called Roaming-Users in the OU 'Groups'.
      2. Added a single user (testuser) to the group.
      3. Using the Group Policy Management tool (via RSAT on Windows 7), right-clicked on the Groups OU and selected 'Create a GPO in this domain, and Link it here'.
      4. Added my 'Roaming-Users' group to the Security Filtering section of the policy.
      5. Added the Folder Redirection option, specifically for Documents. It is set to redirect to \\myserver\Homes$\%USERNAME%\Documents (Homes$ exists and is sharing-enabled).
      6. Right-clicked on the policy under the Groups OU and checked Enforced.
      7. Logged into a machine as testuser successfully.
      8. Created a simple text file, saved some gibberish, logged off.
      9. Remoted into the server with Homes$ on it, and noticed that the directory Homes$\testuser was created, but was empty. No text file to be found.

    From what I've read, I did everything I ought to, but I can't quite figure out the issue. I had no errors when I logged off about syncing issues (offline files is enabled) or anything, so I can only imagine my file should have ended up on the share. Any ideas?

    EDIT: Using gpresult /R, I confirmed the user is in fact part of the Roaming-Users group, but does not have the policy applied, if that helps.

    EDIT 2: Apparently you can't apply GPOs to groups... so I applied it to users and used the same security filter to limit it to my test user. Nothing happens as far as redirection goes, but I now have the following error in the event log:

        Folder redirection policy application has been delayed until the next logon
        because the group policy logon optimization is in effect
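
    That last event is the "fast logon optimization" behaviour: the client doesn't wait for the network at logon, and folder redirection only applies during a synchronous logon, so it can take two or three logons to kick in. The usual fix is the policy "Always wait for the network at computer startup and logon" (Computer Configuration > Administrative Templates > System > Logon). A sketch of the equivalent registry value on a test client, if you'd rather verify it directly before rolling out a GPO:

        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\CurrentVersion\Winlogon" /v SyncForegroundPolicy /t REG_DWORD /d 1 /f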


  • Should tripwire be entering /proc?

    - by dsadinoff
    When initializing the db with tripwire --init, it spat out a bunch of errors pertaining to /proc:

        ### Warning: File system error.
        ### Filename: /proc/16982/fd/4
        ### No such file or directory
        ### Continuing...
        ### Warning: File system error.
        ### Filename: /proc/16982/fdinfo/4
        ### No such file or directory
        ### Continuing...
        ### Warning: File system error.
        ### Filename: /proc/16982/task/16982/fd/4
        ### No such file or directory
        ### Continuing...
        ### Warning: File system error.
        ### Filename: /proc/16982/task/16982/fdinfo/4
        ### No such file or directory
        ### Continuing...
        ### Warning: Duplicate object encountered.
        ### /proc/sys/net/ipv6/neigh

    This feels like noise. The twpol.txt file has the following clause:

        #
        # Critical devices
        #
        (
          rulename = "Devices & Kernel information",
          severity = $(SIG_HI),
        )
        {
            /dev  -> $(Device) ;
            /proc -> $(Device) ;
        }

    Which, if I understand it right, is going to cause tripwire to care deeply about the entire contents of /proc. Shouldn't it just care about the static parts of /proc, like the drivers and such, and not the per-pid stuff? Why does it ship like this?
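
    If you'd rather not scan the per-pid entries, the stock rule can be tightened so /proc itself is recorded without descending into it — a sketch in tripwire policy syntax (recurse = false keeps the entry but stops the walk):

        {
            /dev  -> $(Device) ;
            /proc -> $(Device) (recurse = false) ;
        }

    After editing twpol.txt, regenerate the policy file (twadmin --create-polfile twpol.txt) and re-run tripwire --init.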


  • How do I uncompress vmlinuz to vmlinux?

    - by Lord Loh.
    I have already tried uncompress, gzip, and all the other solutions that come up as Google results, and they have not worked for me. To get just the image, you search for the gzip signature — 1f 8b 08 00:

        > od -A d -t x1 vmlinuz | grep '1f 8b 08 00'
        0024576 24 26 27 00 ae 21 16 00 1f 8b 08 00 7f 2f 6b 45

    so the image begins at 24576 + 8 => 24584. Then just copy from that point onward and decompress:

        > dd if=vmlinuz bs=1 skip=24584 | zcat > vmlinux
        1450414+0 records in
        1450414+0 records out
        1450414 bytes (1.5 MB) copied, 6.78127 s, 214 kB/s

    I got these instructions verbatim from a forum online: http://www.codeguru.com/forum/showthread.php?t=415186

    This process does not work for me; it ends up giving errors that say "file not found" for 0024576 and all subsequent numbers. How do I proceed with extracting vmlinux from vmlinuz? Thank you.

    EDITED: This is a reverse engineering question. I have no access to the distro to install any RPM or recompile. I start with nothing but vmlinuz.
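
    A sketch that avoids the fragile od-then-grep step entirely, assuming GNU grep (-a treats the binary as text, -b reports byte offsets, -o prints only the match): search for the raw gzip magic bytes and feed the offset straight to dd.

        # find the first gzip header (1f 8b 08) in the image
        off=$(grep -a -b -o -m 1 $'\x1f\x8b\x08' vmlinuz | cut -d: -f1)

        # strip everything before it and decompress
        dd if=vmlinuz bs=1 skip="$off" | zcat > vmlinux

    (The "file not found" errors suggest the hex pattern was passed to grep unquoted, so the shell treated the bytes as filenames.)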


  • Archive Manager, SQL 2005 and MaxTokenSize high CPU

    - by Tim Alexander
    So, I posted this question a few days ago: Impact of increasing the MaxTokenSize for Kerberos Tickets. Since then, the plan was to test our settings on two member servers, one with IIS and one without. I set up two GPOs to configure the MaxTokenSize registry setting to 48000 and MaxFieldLength/MaxRequestBytes to 64200 (based on MS KB2020943, these are set at 4/3 * T + 200).

    The plain member server seemed to work OK (a devalued tape backup server). The IIS server, however, has had some strange repercussions. It hosts Quest Software Archive Manager (AM) 4.5, which communicates with SQL Server 2005 Enterprise on Server 2003 R2. After the changes all looked good, until the SQL Server hit 100% CPU.

    I have removed the GPOs, removed the registry values, and even replaced them with defaults (12000 for token size; I can't remember the other one, but it was in a blog post about the issue linked from my other post). No change. Bouncing the IIS server stops the high CPU, and a colleague has looked at the SQL Server — it is definitely the AM connection taking up the time/work on the SQL Server.

    I haven't changed the registry values on the SQL Server or the DCs, but am reluctant to do so without understanding why this has happened. I am guessing it's to do with the overriding auth and group issue we have, but I am not seeing Kerberos errors in either event log. Has anyone seen something similar, or does anyone have some tips? I was definitely blindsided by the Kerberos issue and am swimming against the tide trying to keep things functioning.
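
    For reference, a quick way to confirm what each box is actually using after the GPO removal — a sketch querying the standard locations for these settings:

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters" /v MaxTokenSize
        reg query "HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters" /v MaxFieldLength
        reg query "HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters" /v MaxRequestBytes

    If a value is absent, the system is back on its built-in default; note that the HTTP.sys values only take effect after the HTTP service (or the box) is restarted.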


  • Oracle: Getting ORA-01195 and ORA-01110 when attempting resetlogs

    - by MacAnthony
    I am trying to get our database to start up. When I log in to sqlplus and do a startup, I get:

        Total System Global Area  534462464 bytes
        Fixed Size                  2215064 bytes
        Variable Size             331350888 bytes
        Database Buffers          192937984 bytes
        Redo Buffers                7958528 bytes
        Database mounted.
        ORA-01589: must use RESETLOGS or NORESETLOGS option for database open

    So I do a shutdown, then a startup mount (which works fine), and run:

        SQL> alter database recover using backup controlfile until cancel;
        alter database recover using backup controlfile until cancel
        *
        ERROR at line 1:
        ORA-00283: recovery session canceled due to errors
        ORA-19909: datafile 1 belongs to an orphan incarnation
        ORA-01110: data file 1: '/<path>/system01.dbf'

        SQL> alter database open resetlogs;
        alter database open resetlogs
        *
        ERROR at line 1:
        ORA-01195: online backup of file 1 needs more recovery to be consistent
        ORA-01110: data file 1: '/<path>/system01.dbf'

    I know I've used instructions to get past this error before, but I seem to be having trouble tracking them down.

    A bit of history: We wanted to refresh the data in this instance from another DB, so we attempted an expdp/impdp into it. The impdp did not complete correctly, got an end-of-file error message, and hung (I still have the message in a log if it's important). Since the instance would start at this point, we decided to use the hot backup process we have to restore the DB. The hot backups are from another server/instance; we went through the same process two weeks ago. The issue above appeared at the point of recreating the control file.
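
    ORA-19909 says datafile 1 belongs to a different incarnation of the database than the control file expects, which is consistent with restoring hot backups taken on another server/instance. A first diagnostic step — a sketch to run from the mounted instance — is to compare incarnations and datafile checkpoints:

        SELECT incarnation#, resetlogs_change#, resetlogs_time, status
        FROM   v$database_incarnation;

        -- and see which change# the restored datafiles were checkpointed at
        SELECT file#, checkpoint_change#, resetlogs_change#
        FROM   v$datafile_header;

    If the datafiles' resetlogs_change# doesn't match the control file's current incarnation, no amount of "until cancel" recovery will reconcile them; you'd need a control file and datafiles from the same incarnation (or an RMAN RESET DATABASE TO INCARNATION, where applicable).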


  • windows 95 virtualization and CPU idle

    - by brtlvrs
    We are in the situation that we need to migrate our Windows 95 systems (I know, that is so last century). There is no hardware available for a reasonable price to run our Windows 95 systems, so we are extending their overdue lifetime by making them virtual. Now we see that VMware reports 100% CPU utilization for a Windows 95 VM. This is because Windows 95 doesn't know how to idle a CPU. For this reason, software like Rain, Waterfall, or CPUCool was introduced: they send a HLT instruction to the CPU, which causes it to halt and wait for new work. I've tested the mentioned programs in the VM, but they don't work — they just generate errors. Does anyone have a valid workaround or solution?

    BTW, I know that the best solution is to replace Windows 95 with Windows XP, but in our situation that will take at least 5 years. Our Windows 95 systems are running factory process control software.


  • Cannot Install Phusion Passenger 3.0.13 with Nginx 1.2.1

    - by LightBe Corp
    I installed the Passenger gem, which installed 3.0.13. Then I executed passenger-install-nginx-module, which is what the Nginx instructions on http://www.modrails.com said to do. It installs the latest stable version, which is 1.2.1 according to the official Nginx wiki page. I told it to install Nginx to /usr/local/nginx (which is the default if you go to the nginx wiki). I get the following errors:

        Undefined symbols for architecture x86_64:
          "_pcre_free_study", referenced from:
              _ngx_pcre_free_studies in ngx_regex.o
        ld: symbol(s) not found for architecture x86_64
        collect2: ld returned 1 exit status
        make[1]: *** [objs/nginx] Error 1
        make: *** [build] Error 2
        --------------------------------------------
        It looks like something went wrong

        Please read our Users guide for troubleshooting tips:

          /Users/server1/.rvm/gems/[email protected]/gems/passenger-3.0.13/doc/Users guide Nginx.html

        If that doesn't help, please use our support facilities at:

          http://www.modrails.com/

        We'll do our best to help you.

    I have done searches for several hours trying to find a resolution. I tried the Google Group for Phusion Passenger but did not find anything. I do not know if there is a mismatch in version numbers or not. The documentation says nothing about this error.
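
    The missing _pcre_free_study symbol usually means nginx was configured against PCRE headers newer than the PCRE library the linker actually finds — pcre_free_study() only exists in PCRE 8.20 and later. A hedged guess at a fix on OS X is to install a current PCRE first and rebuild:

        # install a current PCRE (Homebrew shown; MacPorts works too)
        brew install pcre

        # then re-run the installer so nginx's ./configure picks it up
        passenger-install-nginx-module

    If it still links against the stale library, the installer's "customize/advanced" option lets you pass extra configure arguments, e.g. --with-pcre=/path/to/pcre-8.21 pointing at an unpacked PCRE source tree.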


  • Mavericks permission issues with Windows Server deduplicated shares

    - by dmohlmaster
    We have a number of 10.9-10.9.3 (Mavericks) machines installed throughout our facility. Much of the user content is pulled from shares stored on our Windows Server 2012 file servers with deduplication enabled. I have found that files newly written or not yet optimized can be accessed without issue — read, written, modified, etc. Once a file gets optimized/deduplicated and Windows adds the P and L attributes (sparse and reparse point), the Macs running Mavericks begin to have access issues.

    Once the files get deduplicated, users begin receiving read access errors when copying files (see error 1 below). This happens when copying to folders within the current folder tree or when copying somewhere on the local system. If you stop the copy operation and retry a few more times, it may eventually work for the specific instance, but it will fail again later. I am, however, able to copy these files without issue via the terminal. Other systems running 10.7 do not experience the same issues and are able to access file server resources without issue. Many of the systems having issues are newer and thus cannot be downgraded to 10.8 or 10.7. I have tried Finder replacements such as Path Finder, but the results are the same.

    I know this is at least similar to issues many Mac users are already experiencing and posting about, but I haven't seen it directly linked to deduplication and the attributes written by Windows Server. Has anyone seen this issue? Have any solutions been found?

    Error 1, when copying files after the P and L attributes have been set by deduplication:

        One or more items can't be copied to "Folder" because you don't have permissions to read them.

    Via system.log, I am also seeing the following error when accessing these deduplicated shares (the reparse point tag below is IO_REPARSE_TAG_DEDUP):

        smbfs_nget: filename.ext - unknown reparse point tag 0x80000013


  • Gathering buslogic SCSI hardware and virtual machine operating system

    - by Julian
    I'm trying to use PowerShell to get the BusLogic SCSI hardware from several virtual servers and get the operating system of each specific server. I've managed to find the specific SCSI hardware that I want with my code; however, I'm unable to figure out how to properly get the operating system of each of the servers. Also, I'm trying to send all the data that I find into a CSV log file, but I'm unsure how to make a PowerShell script create multiple columns. Here is my code (almost works, but something's wrong):

        $log = "C:\Users\me\Documents\Scripts\ScsiLog.csv"

        Get-VM | Foreach-Object {
            $vm = $_
            Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic" } | Foreach-Object {
                get-VMGuest -VM $vm
            } | Foreach-Object {
                Write-output $vm.Guest.VmName >> $log
            }
        }

    I don't receive any errors when I run this code; however, I'm only getting the names of the servers and not the OS. Also, I'm not sure what I need to do to make the OS appear in a different column from the name of the server in the CSV log that I'm creating. What do I need to change in my code to get the OS version of each virtual machine and output it in a different column in my CSV log file?

    EDIT: Here's a more in-depth look at things I've tried that have all failed:

        Get-VM | Foreach-Object {
            $vm = $_
            $svm = Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic" }
            Foreach-Object { get-VMGuest -VM $svm } | Foreach-Object { Write-output $svm >> $log }
        }

        #Get-VM | Foreach-Object {
        #    $vm = $_
        #    Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic"} #| write-host $vm
        #    | Foreach-Object {
        #        #get-VMGuest -VM $_ |
        #        #write-host $vm
        #        #get-VMGuest -VM $vm } | Foreach-Object{
        #        #write-output $vm.VmName >> $log
        #        #write-output $vm.guest.VmName, get-VmGuest -VM $vm >> $log  NO GOOD
        #        Write-host $vm.Guest.VmName #+ get-vmGuest -vm $VM >> $log
        #    }
        #}

    I'm not sure why Get-VMGuest fails, though. I'm getting the SCSI hardware, filtering it down to BusLogic only, and then wanting to get the operating system of just the filtered VMs. I don't see where my code fails.
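
    A sketch of one way to restructure this with VMware PowerCLI, assuming the guest OS name is what's wanted (Get-VMGuest exposes it as OSFullName) and letting Export-Csv handle the columns instead of >>:

        $log = "C:\Users\me\Documents\Scripts\ScsiLog.csv"

        Get-VM | ForEach-Object {
            $vm = $_
            # keep only VMs that actually have a BusLogic controller
            $buslogic = Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic" }
            if ($buslogic) {
                $guest = Get-VMGuest -VM $vm
                # one object per VM; each property becomes a CSV column
                New-Object PSObject -Property @{
                    Name = $vm.Name
                    OS   = $guest.OSFullName
                }
            }
        } | Export-Csv $log -NoTypeInformation

    Note that OSFullName is only populated when the VM is powered on and VMware Tools is running; otherwise it comes back empty.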


  • Losing Windows Authentication intermittently

    - by Mark Robinson
    I'm running a website on IIS6 / Server 2003 which uses Integrated Windows Authentication on a local intranet. I can browse to the site, but I get intermittent "object null" errors when calling the following C# code, which is called on every request (apologies for the code sample, but I was torn between SF and SO and think this is more to do with server config):

        ....
        GetUserIdFromPrincipal(User)
        ....

        public static string GetUserIdFromPrincipal(IPrincipal principal)
        {
            return principal.Identity is WindowsIdentity
                ? (principal.Identity as WindowsIdentity).User.Value
                : principal.Identity.Name;
        }

    So, as the error is intermittent, Windows auth is clearly working on some level, but after navigating around the site for several clicks I get the null reference error, meaning the IPrincipal is null (I thought this could never be null in ASP.NET). The error only happens on this newly built VM; the code is fine on other machines.

    Does IIS request the Windows auth details on each request? What would cause such an intermittent problem? Any help or suggestions would be much appreciated.
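
    Not an answer to the IIS side, but a hedged defensive rewrite of the helper that at least pins down which reference is null when it happens (the tracing call uses System.Diagnostics and is illustrative):

        public static string GetUserIdFromPrincipal(IPrincipal principal)
        {
            if (principal == null || principal.Identity == null)
            {
                // Log and bail out instead of throwing; which of these is null
                // tells you whether IIS or ASP.NET dropped the authenticated user.
                Trace.TraceWarning("No principal/identity on request");
                return null;
            }

            var windowsIdentity = principal.Identity as WindowsIdentity;
            return windowsIdentity != null && windowsIdentity.User != null
                ? windowsIdentity.User.Value
                : principal.Identity.Name;
        }

    Note that the original can also throw when principal is a WindowsIdentity whose User SID is null (e.g. an anonymous token), not only when the principal itself is missing.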

