Search Results

Search found 19256 results on 771 pages for 'boost log'.

  • Recovery strategy for a single bad sector in moricons.dll

    - by Damon
    This week, my hard disk made me an early Christmas present in the form of a single defective sector. To make up for the puny size of the present, it chose a sector inside moricons.dll. This means that the system now takes about 5 minutes to boot before Windows gives up and moves on, and there are two dozen scary "critical failure" entries in the system log after every boot, which is annoying. OK, admittedly, I shouldn't complain; it could be worse, the bad sector could be in ntldr...

    SMART info more or less indicates (for what SMART can indicate anyway) that the drive is mostly OK. Soft Read Error Rate has a score of 96, and Current Pending Sector Count has a raw value of 8, which translates to a score of 100. Acronis DriveMonitor makes this an issue (lowering the overall rating to 75%), while HDD Health calls it "excellent", giving an overall rating of 95% (which is what this hard disk has had from day one). No single score is below 95 (power-on hours and spin-up count), and most are 100 anyway. Well, whatever; I've seen drives with perfect SMART values fail from one second to the next, and drives with moderate values work for years, so I'm inclined not to put too much weight on that overall.

    TL;DR Now, to the problem: I don't feel like trashing the disk just yet (that's planned with a new OS install upgrading to Win7 early next year, independently of this issue), but in the meantime I would still like to have a smoothly running system again. Therefore, I feel tempted to tamper with it, but before I render my system entirely unusable (since I've never done this before), I'd like to verify that my planned procedure is likely to succeed in getting a working system again:

    1. Copy moricons.dl_ from the Windows install disk, rename it to moricons.zip, and unzip it. This gives an intact 5.1.2600.2180 version (the broken one is 5.1.2600.5512, but I guess this makes not much of a difference, since it's an icon-only DLL, and an outdated copy should work better than one that can't be read).
    2. Run chkdsk /r /f, which will "repair" the file (i.e. delete the file without asking, tell the drive to remap the sector, and toss some unreadable junk into a file with a hexadecimal name).
    3. Hopefully Windows still boots after this (is that a reasonable expectation, or do I need to have something like BartPE ready? But then again, what's that good for in case chkdsk has nuked the entire file system...).
    4. Delete the junk file generated by chkdsk, and copy the new DLL to %windir%\system32.
    5. Reboot. Pray.

    Maybe I just shouldn't touch anything, since it still kind of works... annoying as it is, it works. Unsure... But: is there anything fundamentally wrong with the planned approach? Is this a sensible approach at all?
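
    For reference, the plan expressed as commands from an elevated command prompt, assuming the install disc is D: and a scratch folder C:\temp exists (expand can decompress the .dl_ directly, which would save the rename-to-zip trick):

        rem Pull an intact copy of the DLL off the install disc
        expand D:\i386\moricons.dl_ C:\temp\moricons.dll
        rem Let chkdsk remap the bad sector (the damaged file gets deleted/mangled in the process)
        chkdsk C: /f /r
        rem After the post-chkdsk reboot, put the fresh copy in place
        copy /y C:\temp\moricons.dll %windir%\system32\moricons.dll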

  • Clamdscan scans a file in 0 seconds

    - by SupaCoco
    I have to run clamav on large files. I was wondering which command was faster between clamscan and clamdscan, but it seems that clamdscan is not working properly: it claims to scan a file larger than 1 GB in 0 seconds. Could you guys help me find out why the heck clamdscan isn't working? And between clamscan and clamdscan, which one consumes fewer resources? I run ClamAV 0.97.8/18037 on Ubuntu 12.04.3 LTS. Please find below the execution results of both commands:

        clamscan myfile.zip

        ----------- SCAN SUMMARY -----------
        Known viruses: 2864504
        Engine version: 0.97.8
        Scanned directories: 0
        Scanned files: 1
        Infected files: 0
        Data scanned: 0.00 MB
        Data read: 1024.16 MB (ratio 0.00:1)
        Time: 9.145 sec (0 m 9 s)

        clamdscan myfile.zip

        /home/ubuntu/workspace/benchmark/myfile.zip: OK

        ----------- SCAN SUMMARY -----------
        Infected files: 0
        Time: 0.000 sec (0 m 0 s)

    And here is the clamav log:

        Wed Oct 30 10:26:32 2013 -> Received POLLIN|POLLHUP on fd 4
        Wed Oct 30 10:26:32 2013 -> Got new connection, FD 9
        Wed Oct 30 10:26:32 2013 -> Received POLLIN|POLLHUP on fd 5
        Wed Oct 30 10:26:32 2013 -> fds_poll_recv: timeout after 5 seconds
        Wed Oct 30 10:26:32 2013 -> Received POLLIN|POLLHUP on fd 9
        Wed Oct 30 10:26:32 2013 -> got command CONTSCAN /home/ubuntu/workspace/benchmark/myfile.zip (51, 7), argument: /home/ubuntu/workspace/benchmark/myfile.zip
        Wed Oct 30 10:26:32 2013 -> mode -> MODE_WAITREPLY
        Wed Oct 30 10:26:32 2013 -> Breaking command loop, mode is no longer MODE_COMMAND
        Wed Oct 30 10:26:32 2013 -> Consumed entire command
        Wed Oct 30 10:26:32 2013 -> Number of file descriptors polled: 1 fds
        Wed Oct 30 10:26:32 2013 -> fds_poll_recv: timeout after 3600 seconds
        Wed Oct 30 10:26:32 2013 -> THRMGR: queue (single) crossed low threshold -> signaling
        Wed Oct 30 10:26:32 2013 -> THRMGR: queue (bulk) crossed low threshold -> signaling
        Wed Oct 30 10:26:32 2013 -> /home/ubuntu/workspace/benchmark/myfile.zip: OK
        Wed Oct 30 10:26:32 2013 -> Finished scanthread
        Wed Oct 30 10:26:32 2013 -> Scanthread: connection shut down (FD 9)
        Wed Oct 30 10:26:32 2013 -> THRMGR: queue (single) crossed low threshold -> signaling
        Wed Oct 30 10:26:32 2013 -> THRMGR: queue (bulk) crossed low threshold -> signaling
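
    In case it matters: I've since read that clamd enforces size limits from clamd.conf and silently reports oversized files as OK, which would explain the 0-second result (note that clamscan also shows "Data scanned: 0.00 MB" despite reading 1 GB). If that's the cause, raising the limits and restarting the daemon should fix it; a sketch, with placeholder values:

        # /etc/clamav/clamd.conf
        MaxScanSize 2000M
        MaxFileSize 2000M
        StreamMaxLength 2000M

        # then restart the daemon:
        sudo service clamav-daemon restart
        # clamscan has matching knobs on the command line:
        clamscan --max-filesize=2000M --max-scansize=2000M myfile.zip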

  • PHP `virtual()` with Apache MultiViews not working after upgrade to Ubuntu 12.04

    - by Izzy
    I use PHP's virtual() directive quite a lot on one of my sites, including for central elements. This worked fine for the last ~10 years, but after upgrading (or rather moving, as it is on a new machine) to Ubuntu 12.04 it somehow got broken.

    Example setup (simplified)

    To make it easier to understand, I'll simplify the contents. Say I need an HTML fragment like

        <P>For further instructions, please look <A HREF='foobar'>here</A>.</P>

    in multiple pages. Ten years ago, I used SSI for that, so it is put into a file in a central place; if e.g. the targeted URL changes, I only need to update it in one place. To serve multiple languages, I have Apache's MultiViews enabled, and at $DOCUMENT_ROOT/central/ there are the files foobar.html (the English variant, and the default) and foobar.html.de (the German variant). Now in the PHP code, I simply placed

        <? virtual("/central/foobar"); ?>

    and let Apache take care of delivering the correct language variant.

    The problem

    As said, this worked fine for about 10 years: German visitors got the German variant, all others the English one (depending on their preferred language). But after upgrading to Ubuntu 12.04, it no longer worked: either nothing was delivered by the virtual() call, or (in connection with framesets) it even ended up as binary gibberish. Trying to figure out what happens, I played with a lot of things. I first thought MultiViews was (somehow) no longer available, but calling http://<server>/central/foobar showed the right variant, depending on the configured language preferences. This also proved there was nothing wrong with file permissions. The error.log gave no clues either (no error message thrown). Finally, as a "last resort", I changed the PHP command to

        <? virtual("central/foobar.html"); ?>

    and that very same file was in fact included. So PHP's virtual() function basically works, but the language-dependent stuff obviously no longer works together with it as it did before. Of course I tried to find some change (most likely in PHP's virtual() command), using Google a lot, and also searching the questions here, unfortunately to no avail.

    Finally: The question

    Putting "design questions" aside (surely today I would design things differently, but at the moment I lack the time to change that for quite a huge number of pages): what can be done to make it work again? I surely missed something, but I cannot figure out what...
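
    In case someone spots the problem from it: the two pieces I believe this setup relies on, and which I've been re-checking on the new machine. First, that mod_negotiation is enabled (sudo a2enmod negotiation), and second, that MultiViews is on for the directory the subrequest hits (path assumed; Apache 2.2 syntax, since that is what 12.04 ships):

        <Directory /var/www/central>
            Options +MultiViews
            Order allow,deny
            Allow from all
        </Directory>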

  • Model M Keyboard inputs incorrect characters after logging in to Fedora

    - by mickburkejnr
    I recently bought a 24-year-old IBM Model M keyboard. From what I gather, it had been left on a shelf for the last 5 years, so you can imagine the amount of dust, dirt and crap that was on it. Before cleaning it, I plugged it in to my laptop (running Fedora 17) using a PS/2-to-USB adapter. What I found was that, while it still works, the keys I press don't correspond to what is displayed on the screen. So for example, when I type S on the keyboard, I get ß displayed on the screen instead. At the time, I put this down to the adapter not working properly.

    Since then, I stripped the keys off the keyboard and cleaned the whole thing. It looks like it's just come out of the box! I then plugged it in to my computer (also running Fedora 17) via a standard PS/2 plug. The computer loaded up to the login screen, I typed in my password, pressed enter, and logged straight in to my machine.

    At this point, I opened up a text editor and started typing some stuff. To my horror, the keystrokes I was entering weren't coming up as intended. What came up instead were characters that would map to the pressed keys, but only under a different keyboard language setting. I opened up a program to see what keyboard language had been selected, and the correct one for the keyboard was selected (which is UK in my case). I opened up a window that shows which characters map to which keys, pressed every single key on the keyboard, and every corresponding block representing each key lit up. I went back to the text editor to try again, but I was still getting these random characters. What's more, the backspace key would not work, although in the other utility it would flash when pressed.

    What I know is that at the login screen the keyboard must have entered the correct characters, otherwise I wouldn't have been able to log in. Furthermore, keys that don't respond in the text editor are still sending signals to the computer, as illustrated in that keyboard utility. The question is: why are random characters displayed when they really shouldn't be? Would this be a hardware fault or a software issue?
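
    In case it helps with diagnosis, this is roughly what I've been checking from a terminal (the layout name gb is my assumption for UK):

        # What layout does X think this session uses?
        setxkbmap -print | grep xkb_symbols
        # Force the UK layout for the current session
        setxkbmap -layout gb
        # Watch the raw keycode/keysym produced by each key press
        xev | grep -A2 --line-buffered KeyPress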

  • arp "who-has tell" on cloned machine

    - by mcmorry
    I have an urgent problem to solve today, but I'm lost. Please help. I've cloned a virtual machine hosted on VMware ESXi 4.1. The OS is now Ubuntu Server 12.04 LTS, but at the time of cloning it was 10.04 LTS. I fixed the MAC address manually inside /etc/udev/rules.d/70-persistent-net.rules; this is a known problem on Ubuntu. I had to remove the old MAC address and set the new one as eth0. Everything seems to work fine, except ARP. My provider OVH sent me a warning to resolve it today (this is the second day) or they will block my IP! The log contains many lines like this:

        Tue Jun 5 01:04:29 2012 : arp who-has 178.32.136.212 tell 178.32.136.224

    where .224 is the new clone that is causing problems, and .212 is the machine it was cloned from. arp -na returns:

        ? (178.33.230.254) at 00:07:b4:00:00:02 [ether] on eth0
        ? (178.32.136.212) at 00:50:56:09:8e:f1 [ether] on eth0

    The first IP is the ESXi machine. The second one should not be there. I'm not an expert and I don't know what else to do to fix this problem. Any help will be very appreciated. Thanks.

    EDIT: ifconfig on .224:

        eth0      Link encap:Ethernet  HWaddr 00:50:56:01:32:c6
                  inet addr:178.32.136.224  Bcast:178.32.136.255  Mask:255.255.255.0
                  inet6 addr: fe80::250:56ff:fe01:32c6/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:399924 errors:0 dropped:465 overruns:0 frame:0
                  TX packets:241884 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:58006071 (58.0 MB)  TX bytes:663603166 (663.6 MB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:516216 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:516216 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:236284275 (236.2 MB)  TX bytes:236284275 (236.2 MB)

    ifconfig on .212:

        eth0      Link encap:Ethernet  HWaddr 00:50:56:09:8e:f1
                  inet addr:178.32.136.212  Bcast:178.32.136.255  Mask:255.255.255.0
                  inet6 addr: fe80::250:56ff:fe09:8ef1/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:16014 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:14511 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:15134444 (15.1 MB)  TX bytes:2683025 (2.6 MB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:9944 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:9944 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:1139347 (1.1 MB)  TX bytes:1139347 (1.1 MB)
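
    Since the log says .224 is the one asking "who-has .212", my working theory is that some leftover configuration on the clone still references the old address. What I'm planning to check next, in case anyone can confirm the direction (interface name assumed to be eth0):

        # Does anything still answer for .212? (duplicate address detection)
        arping -D -c 3 -I eth0 178.32.136.212
        # Any leftover reference to the old address in the clone's config?
        grep -r "178.32.136.212" /etc/network /etc/hosts /etc/rc.local
        # Drop the stale neighbour entry on .224
        sudo ip neigh flush dev eth0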

  • SMBfs mounting OK, listing OK, Read KO, smbclient OK

    - by Kwaio
    I've tried to make the title as meaningful as I could, but it still looks ugly.

    The premises: we are using RHEL3-U8 as the OS on most servers here; don't ask me why or suggest upgrading, it's not on today's schedule. That means the kernel used is 2.4.21. I have no access to the remote server, but I know it is a NetApp NAS rack.

        $> smbclient --version
        Version 3.0.9-1.3E.9

    Here is the /etc/fstab line:

        //NASHOSTNAME/share /mnt/mydir smbfs ro,uid=123,gid=123,workgroup=XXXX,credentials=/somefile 0 0

    Here is the corresponding mount output line:

        //NASHOSTNAME/share on /mnt/mydir type smbfs (0)

    The symptoms: I can list the share without problems, and even cd around in there. The issue appears if I try to read any file:

        $> cat /mnt/mydir/fileX.txt
        cat: /mnt/mydir/fileX.txt: Input/output error

    In the system logs (/var/log/kernel for example) the following errors appear:

        Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
        Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
        Jul 30 15:40:02 hostname kernel: smb_open: fileX.txt open failed, result=-5
        Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
        Jul 30 15:40:02 hostname kernel: smb_errno: class ERRHRD, code 31 from command 0x2
        Jul 30 15:40:02 hostname kernel: smb_open: fileX.txt open failed, result=-5
        Jul 30 15:40:02 hostname kernel: smb_readpage_sync: fileX.txt open failed, error=-5

    The ERRHRD code 0x001F error is "general hardware failure", although samba sometimes seems to use it for a different purpose; see http://www.ubiqx.org/cifs/SMB.html [strange behaviour alert].

    Additional information: there is another SMB mountpoint on the system, pointing to a (Linux) host running samba, and that one works.

    What I have tried: I added debug=4 to the mount options and remounted the share, but the logs still look the same. I mounted the share with smbclient, and I am able to fetch files with the get command. Both targets are in the same subnet, so a network problem should be ruled out, even though the LAN goes through a VPN with optimizers; the MTU has already been decreased to 1450. I can also mount the share through NFS, but then the files are all root.root 700 and I need to read them with another user...
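
    One thing I haven't tried yet: forcing a smaller read size on the mount, on the theory that this old smbfs client and the NetApp disagree about large reads (the rsize/wsize values below are guesses):

        # Remount read-only with conservative read/write sizes
        mount -t smbfs -o ro,uid=123,gid=123,rsize=8192,wsize=8192,credentials=/somefile \
            //NASHOSTNAME/share /mnt/mydir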

  • Make mod_wsgi use python2.7.2 instead of python2.6?

    - by guron
    I am running Ubuntu 10.04.1 LTS, and it came with python2.6 pre-packaged, but I need to replace it with python2.7.2. (The reason is simple: 2.7 has a lot of features backported from 3.) I installed python2.7.2 using

        ./configure
        make
        make altinstall

    The altinstall option installed it, without touching the system default version, to /usr/local/lib/python2.7, and placed the interpreter in /usr/local/bin/python2.7. Then, to help mod_wsgi find python2.7, I added the following to /etc/apache2/sites-available/wsgisite:

        WSGIPythonHome /usr/local

    I start apache and run a test wsgi app, BUT I am greeted by Python 2.6.5 and not Python 2.7. Later I replaced the default python symlink to point to python2.7:

        ln -f /usr/local/bin/python2.7 /usr/bin/python

    Now typing 'python' on the console opens python2.7, but somehow mod_wsgi still picks up python2.6. Next I tried

        PATH=/usr/local/bin:$PATH
        export PATH

    then did a quick apache restart, but yet again it's python2.6!! Here is my $PATH:

        /usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games

    Contents of /etc/apache2/sites-available/wsgisite:

        WSGIPythonHome /usr/local
        <VirtualHost *:80>
            ServerName wsgitest.local
            DocumentRoot /home/wwwhost/pydocs/wsgi
            <Directory /home/wwwhost/pydocs/wsgi>
                Order allow,deny
                Allow from all
            </Directory>
            WSGIScriptAlias / /home/wwwhost/pydocs/wsgi/app.wsgi
        </VirtualHost>

    app.wsgi:

        import sys

        def application(environ, start_response):
            status = '200 OK'
            output = sys.version
            response_headers = [('Content-type', 'text/plain'),
                                ('Content-Length', str(len(output)))]
            start_response(status, response_headers)
            return [output]

    Apache error.log:

        'import site' failed; use -v for traceback
        [Sun Jun 19 00:27:21 2011] [info] mod_wsgi (pid=23235): Initializing Python.
        [Sun Jun 19 00:27:21 2011] [notice] Apache/2.2.14 (Ubuntu) mod_wsgi/2.8 Python/2.6.5 configured -- resuming normal operations
        [Sun Jun 19 00:27:21 2011] [info] Server built: Nov 18 2010 21:20:56
        [Sun Jun 19 00:27:21 2011] [info] mod_wsgi (pid=23238): Attach interpreter ''.
        [Sun Jun 19 00:27:21 2011] [info] mod_wsgi (pid=23239): Attach interpreter ''.
        [Sun Jun 19 00:27:31 2011] [info] mod_wsgi (pid=23238): Create interpreter 'wsgitest.local|'.
        [Sun Jun 19 00:27:31 2011] [info] [client 192.168.1.205] mod_wsgi (pid=23238, process='', application='wsgitest.local|'): Loading WSGI script '/home/wwwhost/pydocs/$
        [Sun Jun 19 00:27:50 2011] [info] mod_wsgi (pid=23239): Create interpreter 'wsgitest.local|'.

    Has anybody ever managed to make mod_wsgi run on a non-system-default version of Python?
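
    For what it's worth, the error.log line "mod_wsgi/2.8 Python/2.6.5" suggests the module itself was compiled against 2.6, and from what I've read mod_wsgi is hard-linked to the Python it was built with, so WSGIPythonHome alone cannot switch major versions. My next step is to rebuild it from source against 2.7, roughly like this (version number assumed):

        # Rebuild mod_wsgi against the altinstall'ed interpreter
        tar xzf mod_wsgi-3.3.tar.gz && cd mod_wsgi-3.3
        ./configure --with-python=/usr/local/bin/python2.7
        make
        sudo make install
        sudo service apache2 restart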

  • Complete machine freezes...at a loss

    - by user28818
    Guys, we built around 12 machines a few months ago to run Ubuntu. They each have the following specs:

        ASUS Z8NA-D6 motherboard
        Dual quad-core Intel(R) Xeon(R) CPU E5520 @ 2.27GHz
        OCZ Mod Extreme Pro 500W power supply
        12 GB Kingston RAM
        Nvidia GeForce 9800 GT graphics card

    My machine ran well for a while. However, it started experiencing random lockups. These are not X lockups, they are complete system freezes: the NIC stops responding, the magic SysRq keys won't work. The machine is dead.

    I first suspected RAM. Memtest86 didn't find anything, but I replaced the RAM anyway. Still, lockups. So I replaced the graphics card. Still more lockups. They became more and more frequent and started to happen 2-3 times a day. So I replaced the motherboard and power supply in one fell swoop. Suddenly, no more lockups! Woohoo!

    Except, a week later, in the morning, the machine wouldn't wake up. I reset it, started it up, and the log files showed the last entry at around 11 pm the evening before. This has started occurring with increasing frequency: now just about every morning I come in, the machine is locked up, and has been since the night before. Yesterday, three weeks after replacing the motherboard and power supply, the machine actually locked up on me in mid-work, the first time since replacing the two (MB and PS) that it happened while I was using it.

    I'm at a loss. Nothing in syslog or messages indicates a problem around the time of the lockup. Temps are good: I use lmsensors to monitor them and have a script that writes the output to a file every minute, and they never get that high. The only things I haven't replaced at this point are the case and the hard drives, and I doubt either could be the cause.

    What would you do if you were in my shoes? Is there a troubleshooting approach I'm missing? For the record, all eleven of the other machines don't have any problems, and they're all running the same version of Ubuntu (Lucid) that I am. Thanks!
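
    One more idea I could try, if anyone thinks it's worthwhile: capturing any last kernel messages over the network with netconsole, since a hard lockup never reaches the local logs (the IPs and MAC below are placeholders for one of the healthy machines):

        # On the flaky machine: stream kernel messages to 10.0.0.2, port 6666
        sudo modprobe netconsole netconsole=@10.0.0.1/eth0,6666@10.0.0.2/00:11:22:33:44:55
        # On the healthy machine: listen for them
        nc -u -l 6666    # (openbsd netcat; traditional netcat wants: nc -u -l -p 6666)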

  • How to configure sendmail to relay through a specific server

    - by ErebusBat
    I have a tiny home server set up behind my cable modem (Bresnan Communications). I want this box to be able to send out email (not receive) for notifications and whatnot.

    What I have already done: I have installed and configured sendmail, and I have added mail.bresnan.net as my SMART_HOST directive.

    What I believe the problem is: when I attempt to send an email, I get the following in my mail log:

        Dec 22 10:24:17 batcave sendmail[1530]: oBMHOHrs001530: from=aburns, size=140, class=0, nrcpts=1, msgid=<[email protected]>, relay=aburns@localhost
        Dec 22 10:24:17 batcave sm-mta[1531]: oBMHOHWZ001531: from=<[email protected]>, size=397, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA-v4, relay=localhost [127.0.0.1]
        Dec 22 10:24:17 batcave sendmail[1530]: oBMHOHrs001530: to=<[email protected]>, ctladdr=aburns (1000/1000), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30140, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (oBMHOHWZ001531 Message accepted for delivery)
        Dec 22 10:24:18 batcave sm-mta[1517]: oBMH9mVv001357: to=<[email protected]>, ctladdr=<[email protected]> (1000/1000), delay=00:14:30, xdelay=00:00:42, mailer=relay, pri=300339, relay=pmx0.bresnan.net. [69.145.248.1], dsn=4.0.0, stat=Deferred: Connection timed out with pmx0.bresnan.net.

    You can see where the message is accepted for delivery by my sendmail server, and then where it attempts to hand off to Bresnan's server and times out. This is where my question is. Astute readers will notice that pmx0.bresnan.net is not what I have my SMART_HOST directive set to; it is the (outside?) MX server for the bresnan.com/net domain. Apparently Bresnan has their network configured so that you cannot access this server from within their own network, and you must instead use the mail.bresnan.net server (which I can connect to). The problem is that I don't know how to tell sendmail to use this server and not the domain's MX.

    What I have tried: setting a hosts entry so that the pmx0 name points to the mail server's IP address. This doesn't work, which makes sense: sendmail is obviously doing an MX query to find the server, that query returns the IP directly, so there is never a need for a "normal" DNS resolve, and the hosts file never gets involved.
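
    Since writing the above I've come across the trick of wrapping the smart host in square brackets, which (if I read the sendmail docs right) suppresses the MX lookup and makes sendmail connect to the named host directly. That sounds like exactly what I need, so my next attempt will be something like this in sendmail.mc (untested):

        define(`SMART_HOST', `[mail.bresnan.net]')dnl

        # then rebuild the config and restart:
        m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
        service sendmail restart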

  • Lots of dropped packets when tcpdumping on a busy interface

    - by Frands Hansen
    My challenge: I need to tcpdump a lot of data, from 2 interfaces left in promiscuous mode that see a lot of traffic. To sum it up:

    1. Log all traffic in promiscuous mode from 2 interfaces
    2. Those interfaces are not assigned an IP address
    3. pcap files must be rotated per ~1G
    4. When 10 TB of files are stored, start truncating the oldest

    What I currently do: right now I use tcpdump like this:

        tcpdump -n -C 1000 -z /data/compress.sh -i any -w /data/livedump/capture.pcap $FILTER

    The $FILTER contains src/dst filters so that I can use -i any. The reason for this is that I have two interfaces and I would like to run the dump in a single thread rather than two. compress.sh takes care of assigning tar to another CPU core, compressing the data, giving it a reasonable filename, and moving it to an archive location. I cannot specify two interfaces, thus I have chosen to use filters and dump from any interface. Right now I do not do any housekeeping, but I plan on monitoring disk usage, and when I have 100G left I will start wiping the oldest files; this should be fine.

    And now, my problem: I see dropped packets. This is from a dump that has been running for a few hours and collected roughly 250 gigs of pcap files:

        430083369 packets captured
        430115470 packets received by filter
        32057 packets dropped by kernel   <-- This is my concern

    How can I avoid so many packets being dropped?

    Things I already tried or looked at: I changed the values of /proc/sys/net/core/rmem_max and /proc/sys/net/core/rmem_default, which did indeed help; it took care of around half of the dropped packets. I also looked at gulp. The problem with gulp is that it does not support multiple interfaces in one process, and it gets angry if the interface does not have an IP address. Unfortunately, that is a deal breaker in my case. The next problem is that when the traffic flows through a pipe, I cannot get the automatic rotation going. Getting one huge 10 TB file is not very efficient, and I don't have a machine with 10TB+ of RAM that I can run wireshark on, so that's out.

    Do you have any suggestions? Maybe even a better way of doing my traffic dump altogether.
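
    One knob I haven't mentioned trying: tcpdump's own capture buffer. If I understand the man page, -B sets the OS capture buffer size in KiB, so something like this might absorb the bursts (the size is a guess):

        tcpdump -n -B 65536 -C 1000 -z /data/compress.sh -i any \
            -w /data/livedump/capture.pcap $FILTER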

  • SQL Server suddenly using only a small portion of CPU.

    - by hermiod
    We've got a Windows 2008 R2 server running SQL Server 2008. All of a sudden, the SQLServer process is refusing to go above 20% CPU usage. As of last week, when running a heavy query against the db, it would rise to 100% usage as I would expect. We've had this server for a while, and it seems strange that it would suddenly hit this limit. The limit is causing our queries to take a lot longer than they normally would. No one has (knowingly, at least) made any changes to the server configuration.

    After a bit of investigation, I discovered the sys.dm_os_sys_memory view. This shows "available physical memory is high", but at the same time the available physical memory is 339552 KB whereas the total is 4193848 KB. It is worth noting that this is a virtual server running on VMware.

    Is there a setting somewhere within SQL Server that caps the maximum CPU usage? I've found the settings in Resource Governor, although this is currently off, as it always has been.

    We have recently started using Spotlight for SQL Server by Quest Software. Its playback database was located on this server for a short time this morning, and I first noticed the problem shortly afterwards, although I hadn't been doing any queries prior to this, so I don't know if this is the point at which the problem began; the database was working as expected on Friday afternoon. The Windows log shows that the following settings were applied to the SpotlightPlaybackDatabase when it was created:

        02/21/2011 08:45:02,spid60,Unknown,Setting database option TORN_PAGE_DETECTION to ON for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option MULTI_USER to ON for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option READ_WRITE to ON for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option AUTO_UPDATE_STATISTICS to ON for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option AUTO_CREATE_STATISTICS to ON for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option ANSI_WARNINGS to OFF for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option CONCAT_NULL_YIELDS_NULL to ON for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option RECOVERY to SIMPLE for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option QUOTED_IDENTIFIER to OFF for database SpotlightPlaybackDatabase.
        02/21/2011 08:45:02,spid60,Unknown,Setting database option AUTO_CLOSE to OFF for database SpotlightPlaybackDatabase.

    Could any of these setting changes have modified the settings applied to the whole server?
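
    For completeness, the two server-level knobs I know of (besides Resource Governor) that could cap CPU like this are processor affinity and the degree of parallelism; this is how I've been checking them via sqlcmd (connection details assumed):

        sqlcmd -S localhost -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
        sqlcmd -S localhost -Q "EXEC sp_configure 'affinity mask'; EXEC sp_configure 'max degree of parallelism';"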

  • Hyper-V Ubuntu Networking Problems Copying Large Amounts of Data

    - by Anonymous
    I am trying to copy a large amount (about 50 GB) of data over my network from a Hyper-V-hosted virtual machine running Ubuntu 11.04 (Natty Narwhal) to another (non-virtual) Ubuntu host that I plan to use for testing upgrades to one of our web applications. The problem I am having is with the virtual machine, which I shall refer to in what follows as "source.host". This machine is running 64-bit Ubuntu Server with the 2.6.38-8-server kernel and the Microsoft Linux Integration Components for Hyper-V kernel modules (hv_utils, hv_timesource, hv_netvsc, hv_blkvsc, hv_storvsc, and hv_vmbus) loaded. It uses a Hyper-V "synthetic network adapter" for its networking interface.

    To do the copy, I log on to the machine with the data and run the following commands (call the remote machine "destination.host"):

        $ cd /path/to/data
        $ tar -cvf - datafolder/ | ssh [email protected] "cat > ~/data.tar"

    This runs for a while and then suddenly stops after transferring somewhere from 2-6 GB. The terminal on the source.host machine displays a "Write failed: broken pipe" error. The odd part is this: after this occurs, the source.host machine is no longer able to talk to the rest of the network. I cannot ping any other hosts on the network from it, and I cannot ping it from any other host on the network. I am equally unable to access any of the web services hosted on source.host. Running ifconfig on source.host shows the network adapter to be up and running as usual, with the correct IP address and everything.

    I tried restarting the networking service with

        $ /etc/init.d/networking restart

    but the problem does not go away. Restarting the machine makes it capable of talking to the network again (it can ping and be pinged by other hosts, and the web services are also accessible and usable as normal), but attempting the copy operation again results in the same failure, requiring another restart.

    As an experiment, I tried replacing the tar-over-ssh pipeline above with a straight scp:

        $ scp -r datafolder/ [email protected]:~

    but to no avail. Thinking that the issue might have to do with the kernel packet-send buffers filling up, I tried increasing the buffer size to 12 MB (up from the 128 KB default) with

        # echo 12582911 > /proc/sys/net/core/wmem_max

    but this also had no effect. I'm guessing at this point that it might be a problem with the Microsoft synthetic network driver, but I don't really know. Does anyone have any suggestions? Thank you very much in advance!
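
    One suggestion I've seen for Hyper-V synthetic NICs of this era, which I haven't verified applies here, is to turn off the segmentation offloads before retrying the copy (interface name assumed):

        # Disable TCP segmentation and generic offloads on the synthetic adapter
        sudo ethtool -K eth0 tso off gso off gro off
        # Confirm the new offload settings
        sudo ethtool -k eth0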

  • .htaccess ignored, SPECIFIC to EC2 - not the usual suspects

    - by tedneigerux
    I run 8-10 EC2-based web servers, so my experience is many hours, but it is limited to CentOS, specifically Amazon's distribution. I'm installing Apache using yum, therefore getting Amazon's default compilation of Apache. I want to implement canonical redirects from the non-www (bare/root) domain to www.domain.com for SEO using mod_rewrite, BUT MY .htaccess FILE IS CONSISTENTLY IGNORED. My troubleshooting steps (outlined below) lead me to believe it's something specific to Amazon's build of Apache.

    TEST CASE

    Launch an EC2 instance, e.g. Amazon Linux AMI 2013.03.1, SSH to the server, and run the commands:

        $ sudo yum install httpd
        $ sudo apachectl start
        $ sudo vi /etc/httpd/conf/httpd.conf
        $ sudo apachectl restart
        $ sudo vi /var/www/html/.htaccess

    In httpd.conf I changed the following, in the DOCROOT section / scope:

        AllowOverride All

    In .htaccess, I added (EDIT: I added RewriteEngine On later):

        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L]

    Permissions on .htaccess are correct, as far as I can tell:

        $ ls -al /var/www/html/.htaccess
        -rwxrwxr-x 1 git apache 142 Jun 18 22:58 /var/www/html/.htaccess

    Other info:

        $ httpd -v
        Server version: Apache/2.2.24 (Unix)
        Server built: May 20 2013 21:12:45

        $ httpd -M
        Loaded Modules:
         core_module (static)
         ...
         rewrite_module (shared)
         ...
         version_module (shared)
        Syntax OK

    EXPECTED BEHAVIOR

        $ curl -I domain.com
        HTTP/1.1 301 Moved Permanently
        Date: Wed, 19 Jun 2013 12:36:22 GMT
        Server: Apache/2.2.24 (Amazon)
        Location: http://www.domain.com/
        Connection: close
        Content-Type: text/html; charset=UTF-8

    ACTUAL BEHAVIOR

        $ curl -I domain.com
        HTTP/1.1 200 OK
        Date: Wed, 19 Jun 2013 12:34:10 GMT
        Server: Apache/2.2.24 (Amazon)
        Connection: close
        Content-Type: text/html; charset=UTF-8

    TROUBLESHOOTING STEPS

    In .htaccess, I added:

        BLAH BLAH BLAH ERROR
        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L]

    My server threw an error 500, so I knew the .htaccess file was being processed. As expected, it created an error log entry:

        [Wed Jun 19 02:24:19 2013] [alert] [client XXX.XXX.XXX.XXX] /var/www/html/.htaccess: Invalid command 'BLAH BLAH BLAH ERROR', perhaps misspelled or defined by a module not included in the server configuration

    Since I have root access on the server, I then tried moving my rewrite rule directly into the httpd.conf file. THIS WORKED. This tells us several important things are working.

    HOWEVER, it is bothering me that it didn't work in the .htaccess file, and I have other use cases where I need it to work in .htaccess (e.g. an EC2 instance with named virtual hosts). Thank you in advance for your help.
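
    A hunch worth testing: in per-directory (.htaccess) context, mod_rewrite strips the directory prefix before matching, so the URI no longer begins with a slash and the pattern ^/(.*) can never match; the identical rule does see the leading slash in server (httpd.conf) context, which would explain exactly the behavior above. A corrected .htaccess to try (domain is a placeholder):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]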

  • Windows 8 disk errors

    - by wrongusername
    So yesterday, I forcibly restarted my Windows 8 PC. VMware Workstation was having some trouble with the guest Linux Mint OS. It hadn't been responding for some time, so I tried suspending it, on September 28th or perhaps even before. It wouldn't suspend; I forget exactly what the window looked like, but all options in the power menu were disabled (i.e. "Shutdown", "Power Off", and options like that). I eventually killed the VMware application through Task Manager, though I was too lazy to hunt down the running virtual machine process itself, and decided to kill it by shutting down my PC entirely. The PC wouldn't shut down for quite some time after the monitor went blank, so I did a cold reset by holding the power button.

    I then powered it on again, and Windows briefly gave me a message like "Search for KERNEL_STACK_INPAGE_ERROR." Windows then started diagnosing some problems and gave me the message "Repairing disk errors. This might take over an hour to complete." That was yesterday night, and I went to sleep without waiting for it to finish.

    This morning, it said that the repair failed, and that the log was at C:\windows\system32\LogFiles\srt\srtTrail.txt (as I remember it; I don't have the exact path I wrote down right now). It gave me some other options to troubleshoot, such as resetting Windows (files and settings still intact, but programs not installed through the app store erased). That didn't work (no error message given, I was just told it didn't work). I tried rebooting in safe mode; the same diagnosis process began, except that this time it didn't bother with the automatic repairs.

    So I tried using the command prompt to see if my files are at least still there. I was on the X: drive, and I couldn't cd to the C: drive. I couldn't find my folder under Users (of course?), and couldn't find the srt folder under LogFiles either. I am not sure what to try next. I have backed up everything, but to the cloud, so if absolutely necessary I can start off with a fresh copy of Windows and restore all my data, though it would be a hassle. Any thoughts on what might be wrong or what I can try? My computer was purchased just this June, so the hard drive should still be pretty new.
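
    For anyone suggesting next steps: this is roughly what I plan to run from the recovery command prompt, with the caveat that X: is the recovery environment's own RAM disk, so the real Windows volume probably has a different letter that I need to find first (letters below are assumptions):

        rem Find the real Windows volume (try dir C:\ , dir D:\ ... until \Windows shows up)
        rem then run a full check-and-repair pass on it:
        chkdsk D: /f /r
        rem and look for the repair log the installer mentioned:
        type D:\Windows\System32\LogFiles\Srt\SrtTrail.txt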

  • Why am I unable to send an attachment with Outlook via SMTP that I am able to send via Gmail / Google Apps?

    - by cwd
    I have Google Apps installed, and I have tried to set up Outlook 2007 to send messages via SMTP. I followed the guide, selecting what I believe are all the correct settings. Yes, I am using POP for incoming; that is intentional, but I don't believe it should affect outgoing messages.

    When I log into gmail (Google Apps) for my company, I can send a message that has an 8 MB attachment (a PDF file, not zipped or anything) and it sends fine. However, when I send the same message in Outlook with that same 8 MB attachment, it fails. Why am I unable to send an attachment with Outlook via SMTP that I am able to send via Gmail / Google Apps?

    The message headers are (some info omitted for privacy):

        Technical details of permanent failure:
        Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 552 552 #5.3.4 message size exceeds limit (state 17).

        ----- Original message -----
        DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=company.com; s=google;
          h=from:to:cc:references:in-reply-to:subject:date:message-id
          :mime-version:content-type:x-mailer:thread-index:content-language;
          bh=7d4i/Cbt0v0sY3zt5lN6y5CdvxjbRmTBG4AuBuMxtF4=;
          b=IJwwxuIEdg1E4zXuGjeDod+1w3RYBBCNzSsqpuX77ih36HSiq++s3ZCQXPeU9CIZVg
           K8JPJQu9xjivYYjrRaYwyeowLIu0GIdR2h4kKEkFM/GNC2RFF3VwVgj+gvi5eqVZIuWn
           osT5/VEm10IED6B54NPOtGMgFTci6a57zzVKE=
        X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113;
          h=from:to:cc:references:in-reply-to:subject:date:message-id
          :mime-version:content-type:x-mailer:thread-index:content-language
          :x-gm-message-state;
          bh=7d4i/Cbt0v0sY3zt5lN6y5CdvxjbRmTBG4AuBuMxtF4=;
          b=LjTecjok5K71Bymp6tZqAL2XCz03hWROV1mTK8Vf2AeEJwtel9ACu9kE5jW5iJqckb
           upYKPzoqYLBwAPOzMb9asWoTAZPzC7LMG65eDUc2/ZEvGqXrZs3ziUxwhF4t169yRVuy
           /6nm/aAt5uPMLPdobxGTJ8ahOIku1Z3gW+OcvZ6ERk1Av/bvuln09vcnyJIrHGh7eK8n
           cbGVxmK0aecgSPgIj2NALbHkyuxwj+LEBRV6uiz3THDjxAiNfsO5UFjV59sD+lVSBT3z
           ThOGE8WEXRnKHuP3FuKXyeUxKBZ2CxpWJpvDuS9EsFkln7zkISYEsRA0nUA6GSGi2Z/n
           8YUg==
        Received: by 10.60.169.197 with SMTP id ag5mr12254920oec.137.1351036287413; Tue, 23 Oct 2012 16:51:27 -0700 (PDT)
        References:
        Date: Tue, 23 Oct 2012 19:51:16 -0400
        Message-ID: <003a01cdb179$4bb2ca60$e3185f20$@com>
        MIME-Version: 1.0
        Content-Type: multipart/related; boundary="----=_NextPart_000_003B_01CDB157.C4A12A60"
        X-Mailer: Microsoft Office Outlook 12.0
        Thread-Index: Ac2xVCHGxoC7DDOkQBK3JSXowHb0EQAEB7agAAA/YKAAAIGcQAAAngfQAABAAPAAAFe7gAAAadvw AALgvLA=
        Content-Language: en-us
        X-Gm-Message-State: ALoCoQniMq7Fnh+NlfoWjTJPvKWbkhEaftSaFo9ZVvtRpWufTmhlRDx1a9Jf+wmYcbRh896gygNr

    The company I am sending email to uses Google Apps for Teams. This is their apps admin login. Should I be worried about that message?

    My settings: on the Google Apps side I have set my SPF record and set/verified my DKIM key. Here are my Outlook settings (screenshot not included in this archive). Why am I unable to send an attachment with Outlook via SMTP that I am able to send via Gmail / Google Apps?
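
    One sanity check on sizes, as rough, unverified arithmetic: SMTP attachments travel base64-encoded, which inflates them by about 4/3 plus MIME overhead, so an "8 MB" PDF is nearer 11 MB on the wire. If the recipient domain enforces a limit around 10 MB, a small difference in how Outlook composes the message versus the web interface could be enough to tip it over:

        # base64 inflates payloads by ~4/3 (integer sketch, before MIME overhead)
        echo $(( 8 * 4 / 3 ))   # => 10 (MB and climbing once headers/boundaries are added)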

  • Issue using a "used" SSD as a Windows 8.1 Boot Drive

    - by EpiGrad
    So, I'm something of a Mac person, but I decided to take a stab at this whole "build yourself a PC" thing. Right now, the machine is assembled, POSTs just fine, and can get to the BIOS. The problem is the drive I want to use: I intended to use an 80 GB Corsair SSD I've had sitting around as the boot drive, and a new Samsung SSD for games and the like. So I boot using a Windows 8.1 install USB stick, and if the Samsung drive is plugged in, it happily offers to install Windows on it. The Corsair drive, though, has flipped out: I reformatted it as a blank NTFS drive (it was HFS for Mac purposes), and the BIOS can't see it, nor can the Windows installer.

    What's wrong, and how do I fix it? The tools at my disposal are: the current ASUS BIOS that came with my motherboard (a Z87I-Deluxe), and a Mac running the latest OS X which can also boot into Windows 7 if needed, via either Parallels or Boot Camp.

    Update 1: Based on a friend's suggestion to switch SATA ports, Windows 8.1's installer can now see the drive as Drive 0, Partition 1, an 83.8 GB "Primary" partition. But when I click it and hit "Next", I get the following error: "We couldn't create a new partition or locate an existing one. For more information, see the Setup log files" (not that it gives any clue how to access those).

    Update 2: Following a trail of Google suggestions, I ended up going into the advanced tools and just reformatting the drive as follows (per these instructions):

        1. Start DISKPART.
        2. Type LIST DISK and identify your SSD disk number (from 0 to n disks).
        3. Type SELECT DISK <n> where <n> is your SSD disk number.
        4. Type CLEAN
        5. Type CREATE PARTITION PRIMARY
        6. Type ACTIVE
        7. Type FORMAT FS=NTFS QUICK
        8. Type ASSIGN
        9. Type EXIT twice (once to get out of DiskPart, again to exit the command-line tool)

    This went well enough, but now when I select the disk for installation, I get a new error: "Windows 8 cannot be installed to this disk. The selected disk has an MBR partition table. On EFI systems, Windows can only be installed to GPT disks." So, Googling that, I did the following in DISKPART:

        select disk 0
        clean
        convert gpt
        exit

    ...and we might have fixed it. Windows is at least trying to install now.

  • VirtualBox Port Forward not working when Guest IP *IS* specified (while doc says opposite)

    - by Patrick
    I'm trying to port forward from the host (Mac OS X) 127.0.0.1:8282 to the guest (CentOS)'s 10.10.10.10:8080. Existing port forwards include 127.0.0.1:8181 and 9191 to the guest without any IP specified (so whatever it gets through DHCP, as explained in the documentation).

    Here is how the non-working binding was added:

        VBoxManage modifyvm "VM name" --natpf1 "rule3,tcp,127.0.0.1,8282,10.10.10.10,8080"

    Here is how the working ones were added:

        VBoxManage modifyvm "VM name" --natpf1 "rule1,tcp,127.0.0.1,8181,,80"
        VBoxManage modifyvm "VM name" --natpf1 "rule2,tcp,127.0.0.1,9191,,9090"

    And by "non-working", I of course mean not listening (as a prerequisite to forwarding):

        $ lsof -Pi -n | grep Virtual | grep LISTEN
        VirtualBo 27050 user 21u IPv4 0x2bbdc68fd363175d 0t0 TCP 127.0.0.1:9191 (LISTEN)
        VirtualBo 27050 user 22u IPv4 0x2bbdc68fd0e0af75 0t0 TCP 127.0.0.1:8181 (LISTEN)

    There should be a similar line above, but with 127.0.0.1:8282. Just to be clear, this port is listening perfectly fine on the guest itself. And when I remove the guest IP (i.e., clear the 10.10.10.10), the forward works fine, albeit to eth0 (not eth1 where I need it). I can tcpdump and watch the traffic flow back and forth. And yes, I've disabled iptables entirely while testing; it's not getting blocked anywhere on the guest.

    As VirtualBox writes in their documentation, you are required to specify the guest IP if it's static (makes sense, there's no DHCP record to consult): "If for some reason the guest uses a static assigned IP address not leased from the built-in DHCP server, it is required to specify the guest IP when registering the forwarding rule." However, doing so (as I need to) seems to break the port forward, with nary a report in any log file I can find. (I've reviewed everything in ~/Library/VirtualBox/.)

    Other notes: while I used the above command to add the third rule, I've also verified it showed up correctly in the GUI, and then removed/re-added it from there just to make sure. This forum link, while very dated, looks somewhat related in that a port forward to a static IP was not appearing (perhaps they think due to a lack of gratuitous ARP being sent for the host to know the IP is there/available?).

    Anyway, what gives? Is this still buggy? Any suggestions? If not, easy enough workarounds? What's interesting is that this works perfectly fine on another user's Mac, although he's running a slightly older version (4.3.6 vs. 4.3.12).
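
    One thing I keep coming back to: VirtualBox's built-in NAT network is 10.0.2.0/24 by default, so a rule naming 10.10.10.10 may simply point outside the subnet the NAT engine can reach; if 10.10.10.10 lives on eth1 (a different attachment type), a natpf1 rule on adapter 1 can't forward to it at all. If that's right, the rule would need the guest's NAT-side address instead, set statically inside the guest. A sketch (addresses assumed):

        # Remove the broken rule and re-add it against an address inside the NAT subnet
        VBoxManage modifyvm "VM name" --natpf1 delete rule3
        VBoxManage modifyvm "VM name" --natpf1 "rule3,tcp,127.0.0.1,8282,10.0.2.15,8080"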

  • Laptops on Windows Domain sometimes have problems accessing internet when off-site

    - by FSUScoot
    Hi all. We've had this problem for a long time. When users travel, sometimes they can't get internet access from a wired or wireless connection. Here are a couple of examples:

    1. A user goes to a hotel and tries to access the wireless in their room. They can connect to the access point. They open a web browser, but they can't get redirected to the hotel's login page. Because they can't log in, there's no internet access.

    2. A user goes to another laboratory/university and tries to access the wired network. They connect, the link is fine, and the PC gets an IP from DHCP, but there's no internet access. There's no login page to be redirected to; it should just "work".

    What I've found is that it's a DNS issue. Because the computer is on a Windows domain, it seems it MUST use our DNS servers. Even if you connect to an outside network and do an ipconfig /all, it looks like everything is OK; it will even show that network's DNS servers listed in the config. The computer just won't use the other network's DNS server. I found a reg key that keeps our DNS servers listed, and it seems that they take priority every time:

        HKLM\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient

    All the values under that key are for our AD domain; NameServer and Searchlist never change. What I've found is that if the user edits the NameServer string and puts in the DNS server of the network they're on, everything works just fine: they get redirected to the hotel's correct login page, or their internet access starts working. It's only a problem if the network they're on blocks outside DNS, or if a hotel uses an internal name in their front-page redirection that only their own DNS server knows about, i.e., one that's not public. If the redirect page starts with an IP, like 10.10.10.10, it works just fine.

    Obviously this isn't a fix for everyone. Most of my users are pretty knowledgeable, so it's easy for me to walk them through it or send them a .reg file that they can edit and run. This problem isn't limited to Windows 7; it was like this with XP as well. It's not hardware related: the problem exists on both wired and wireless, Intel or Broadcom, laptops or desktops. Anyone else have this problem? Is there a GPO I can change that I missed? Got a good workaround for this? Thanks for any help!
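
    For fellow admins hitting this, our manual workaround as commands (run elevated; the IP is a placeholder for the local network's DNS server, and Group Policy will put the original value back on the next refresh, so this is per-trip):

        rem Inspect the policy value that pins our DNS servers
        reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient" /v NameServer
        rem Point it at the visited network's DNS server for the duration of the trip
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient" /v NameServer /t REG_SZ /d "192.168.1.1" /f
        ipconfig /flushdns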

  • Undelivered Mail Returned to Sender

    - by Alex
    When sending to [email protected] via the PHP mail() function, I receive mails. When sending emails from external machines, however, I receive the following (e.g., sending from [email protected]; mail.ru is the Russian equivalent of Gmail):

        This is the mail system at host fallback2.mail.ru.

        I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below.

        For further assistance, please send mail to <postmaster>

        If you do so, please include this problem report. You can delete your own text from the attached returned message.

        The mail system

        <[email protected]>: lost connection with mail.mydomain.com[xxx.xxx.xxx.xxx] while receiving the initial server greeting

        Reporting-MTA: dns; fallback2.mail.ru
        X-mPOP-Fallback_MX-Queue-ID: D8C19F2411F1
        X-mPOP-Fallback_MX-Sender: rfc822; [email protected]
        Arrival-Date: Tue, 29 Oct 2013 10:09:21 +0400 (MSK)

        Final-Recipient: rfc822; [email protected]
        Original-Recipient: rfc822;[email protected]
        Action: failed
        Status: 4.4.2
        Diagnostic-Code: X-mPOP-Fallback_MX; lost connection with mail.tld.com[xxx.xxx.xxx.xxx] while receiving the initial server greeting

    Here is my postfix main.cf:

        command_directory = /usr/sbin
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        myhostname = mail.mydomain.com
        mydomain = mydomain.com
        myorigin = mydomain.com
        inet_interfaces = all
        inet_protocols = all
        unknown_local_recipient_reject_code = 550
        in_flow_delay = 1s
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        mail_name = mydomain.com daemon
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
        sendmail_path = /usr/sbin/sendmail.postfix
        newaliases_path = /usr/bin/newaliases.postfix
        mailq_path = /usr/bin/mailq.postfix
        setgid_group = postdrop
        html_directory = no
        manpage_directory = /usr/share/man
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        bounce_queue_lifetime = 4h
        maximal_queue_lifetime = 4h
        delay_warning_time = 1h
        strict_rfc821_envelopes = yes
        show_user_unknown_table_name = no
        allow_percent_hack = no
        swap_bangpath = no
        smtpd_delay_reject = yes
        smtpd_error_sleep_time = 20
        smtpd_soft_error_limit = 1
        smtpd_hard_error_limit = 3
        smtpd_junk_command_limit = 2
        mydestination = mydomain.com, localhost.localdomain, localhost
        smtpd_client_restrictions = permit_inet_interfaces
        smtpd_recipient_limit = 100
        virtual_alias_domains = mydomain.com
        virtual_alias_maps = hash:/etc/postfix/virtual
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = private/auth
        smtpd_sasl_auth_enable = yes
        smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination

    Why are emails from external servers not being delivered? Thank you!

    Update: In the log, the following lines appear many times:

        Oct 30 10:48:29 mydomain postfix/smtpd[16216]: connect from fallback5.mail.ru[94.100.176.59]
        Oct 30 10:48:29 mydomain postfix/smtpd[16216]: warning: SASL: Connect to private/auth failed: Connection refused
        Oct 30 10:48:29 mydomain postfix/smtpd[16216]: fatal: no SASL authentication mechanisms

    It appears I have to configure SASL? I would understand if I wanted to send emails from postfix, but why do I need it to receive emails?
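
    The two log excerpts seem to line up: smtpd dies with "fatal: no SASL authentication mechanisms" before it can greet the client, which is exactly what the bounce ("lost connection ... while receiving the initial server greeting") describes. If the missing piece is the Dovecot-provided auth socket that smtpd_sasl_path = private/auth points at, I gather a stanza along these lines in /etc/dovecot/conf.d/10-master.conf would provide it (a sketch, untested; restart dovecot and postfix afterwards):

        service auth {
          # auth socket for Postfix, inside Postfix's queue directory
          unix_listener /var/spool/postfix/private/auth {
            mode = 0660
            user = postfix
            group = postfix
          }
        }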

  • What else can I do to secure my Linux server?

    - by eric01
    I want to put a web application on my Linux server. I will first explain what the web app will do, and then what I have done so far to secure my brand-new Linux system.

    The app will be a classified-ads website (like gumtree.co.uk) where users can sell their items, upload images, and send to and receive emails from the admin. It will use SSL for some pages. I will need SSH. So far, what I did to secure my stock Ubuntu (latest version) is the following. NOTE: I probably did some things that will prevent the application from doing all its tasks, so please let me know about those. My machine's sole purpose will be hosting the website. (I numbered the points so you can refer to them more easily.)

    1) Firewall. I installed Uncomplicated Firewall: deny IN and OUT by default, then allow IN and OUT for HTTP, IMAP, POP3, SMTP, SSH, UDP port 53 (DNS), UDP port 123 (SNTP), and SSL/port 443. (The ones I didn't allow were FTP, NFS, Samba, VNC, and CUPS.) When I install MySQL and Apache, I will open up port 3306 IN and OUT. (These rules are sketched as ufw commands after this list.)

    2) Secure the partition. In /etc/fstab, I added the following line at the end:

        tmpfs /dev/shm tmpfs defaults,rw 0 0

    Then in the console:

        mount -o remount /dev/shm

    3) Secure the kernel. In the file /etc/sysctl.conf, there are a few different filters to uncomment. I didn't know which ones are relevant to web-app hosting. Which ones should I activate? They are the following:

        A) Turn on Source Address Verification in all interfaces to prevent spoofing attacks
        B) Uncomment the next line to enable packet forwarding for IPv4
        C) Uncomment the next line to enable packet forwarding for IPv6
        D) Do not accept ICMP redirects (we are not a router)
        E) Accept ICMP redirects only for gateways listed in our default gateway list
        F) Do not send ICMP redirects
        G) Do not accept IP source route packets (we are not a router)
        H) Log Martian packets

    4) Configure the passwd file. Replace "sh" with "false" for all accounts except the user account and root. I also did it for the account called sshd. I am not sure whether that will prevent SSH connections (which I want to use) or whether it affects something else.

    5) Configure the shadow file. In the console, run

        passwd -l <account>

    to lock all accounts except the user account.

    6) Install rkhunter and chkrootkit.

    7) Install Bum. I disabled these services: "High performance mail server", "unreadable (kerneloops)", "unreadable (speech-dispatcher)", and "Restores DNS" (should this one stay on?).

    8) Install apparmor-profiles.

    9) Install clamav and freshclam (antivirus and updates).

    What did I do wrong, and what more should I do to secure this Linux machine? Thanks a lot in advance.
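
    The firewall rules from point 1, expressed as ufw commands (a sketch mirroring the description above; adjust the port list to taste):

        sudo ufw default deny incoming
        sudo ufw default deny outgoing
        sudo ufw allow 22/tcp        # SSH
        sudo ufw allow 80/tcp        # HTTP
        sudo ufw allow 443/tcp       # HTTPS/SSL
        sudo ufw allow out 53/udp    # DNS
        sudo ufw allow out 123/udp   # SNTP
        sudo ufw allow out 25/tcp    # SMTP
        sudo ufw allow out 110/tcp   # POP3
        sudo ufw allow out 143/tcp   # IMAP
        sudo ufw enable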

  • What is the difference between running a Windows service vs. running through shell?

    - by Zack
    I am trying to troubleshoot an issue on a Windows 2008 server where attempting to connect to a "Timberline Data Source" ODBC driver crashes if the call is made in a "service" context, but succeeds if the call is initiated manually in a Remote Desktop session. I have set the service to run as my user. I'm wondering: all else being equal (user, machine, etc.), are there any fundamental security/environment differences between running a process as a service vs. manually?

    --- Implementation Details ---

    In case it is helpful for anyone: I had a system that started as an attempt to connect to a Timberline database using ODBC and a Python CGI script called via IIS 7. The script itself works fine; however, as soon as I attempt to perform the ODBC connect, the script crashes without throwing an exception. The script was able to connect fine when executed via the command line. The same thing happened when using a C#/.NET service, or when attempting to run via Apache, Windows Scheduler, or even a third-party scheduling tool. With the last option (the third-party scheduling tool, pycron) I set the service up to log in as my user and had the same issue (I confirmed via Task Manager that the process's running user was, in fact, me). It just doesn't make sense to me why a service, which should be running as my user, appears to still be operating in a different security context or environment.

    Also, if it's important: the Timberline database is referenced by computer name on the network ("\\timberline-server\Timberline Office\Accounts\AT" or something to that effect). I also realized that, as Joel pointed out, the server DOES have a mapped drive ("Y:", which is mapped to "\\timberline-server\Timberline Office"). The DSN is set up at the "System DSN" level which, according to the ODBC Administration Tool, means the DSN is available to both users and services.

    Since I'm not allowed to answer this question yet, I'll post the solution that I arrived at: as Joel Coel mentioned, there actually was a mapped-drive scenario. I didn't realize this because the DSN specified a path using UNC. However, it seems the actual Timberline driver referred to a mapped drive. Since services don't start with the mapped drive, I was forced to add drive-mapping code to my service. Since it was written in Python, I used code from a Stack Overflow answer that was able to map the drive on the fly.
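
    The essence of that fix, for anyone else who lands here: have the service (re)create the mapping itself before touching the DSN. The command my service effectively runs at startup boils down to this (share path taken from above; the /persistent flag is an assumption, since the service redoes the mapping on every start):

        net use Y: "\\timberline-server\Timberline Office" /persistent:no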

  • Server-to-server replication, CPU, and 32K/corrupt docs

    - by nick wall
    Summary: if a database contains a doc with the 32K issue, or a corrupt doc, then server-to-server replication causes a marked increase in CPU in the nserver.exe task, which effectively causes our server(s) to slow right down.

    We have a 5-server cluster (1 "hub" and 4 HTTP servers accessed via a reverse proxy and SSO, for load balancing and redundancy). All are physically located next to each other on the network; they don't have dedicated network ports for cluster or replication traffic. I realise the IBM recommendation is a dedicated port for the cluster. Cluster queues are in tolerance, and under heavy application user load (i.e. the maximum number of documents being created, edited, and deleted), the replication times between servers are negligible. Normally, all is well.

    Of the servers in the cluster, one is considered the "hub", and it initiates a PUSH-PULL replication with its cluster mates every 60 minutes, so that the replication load is taken by the hub and not the cluster mates.

    The problem we have: every now and then we get a slow replication from the hub to a cluster mate, sometimes taking up to 30 minutes. This maxes out the nserver.exe task on the cluster mate, which causes it to respond to HTTP requests very slowly. In the past, we have found that a corrupt document in the DB can have this effect, but on those occasions the server log will show the corrupt doc's noteID; we run fixup, and all is well. But we are not now seeing any record of corrupt docs. What we have noticed is that if a doc with the 32K issue is present, the same thing can happen. Our only solution in that case is to run

        fixup mydb.nsf -V

    which shows it purging a 32K doc. Luckily we run a reverse proxy, so we can shut HTTP servers down without users noticing, but users do notice when a server has the problem!

    Has anyone else seen this occur? I have set up DDM event handlers for many of the replication events. I have set the replication timeout limit to 5 minutes (the max we usually see under full user load is 0.1 min) to prevent it replicating for 30 minutes as before; this is a temporary workaround. Does anyone know of a DDM event to trap the 32K issue? We could at least then send an alert.

    Regarding the 32K issue: this probably needs another thread, but we are finding it relatively hard to locate the source of the issue, as the 32K event is fairly rare. Our app is fairly complex, interacting with various external web services, with two-way data transfer. But if we do encounter a 32K doc, we can't look at its field properties, so we can't work out which field has the issue, which would give us a clue as to which process is the culprit. As above, we run a fixup -V. Any help or comments on this would be gratefully received.

    Read the article

  • Need help recovering a corrupt SQL database

    - by user570079
    I have a very special case that I have been working on for several days. I have a very large SQL Server 2008 database (about 2 TB) that contains 500 filegroups to support very large partitioned tables. Recently we had a catastrophic failure on one of the drives and lost several filegroups, and the database became inaccessible. We have been doing filegroup backups on a daily basis, but due to other issues, we lost our most recent backup of the log and the primary filegroup. We have all the data backed up, but the primary filegroup backup is old. There have been no schema changes since the primary filegroup backup, but the LSNs are now all out of sync and we cannot recover the data.

    I have tried everything I could think of (and have tried just about every trick and hack I could google), but I still end up at the same point, where I get messages saying that the files for filegroup x do not match the primary filegroup.

    I am now at the point of trying to edit the system tables (we have a separate temporary environment to do this, so we are not worried about corrupting any production databases). I have tried updating sys.sysdbreg, sys.sysbrickfiles, and sys.sysprufiles to try to trick SQL into thinking all the files are online, but a "SELECT * FROM OPENROWSET(TABLE DBPROP, 5)" shows a different database state from what I see in sys.sysdbreg. I am now thinking I need to somehow edit the headers of the actual data files to try to line up the LSNs with the primary.

    I appreciate any help anyone can give me here, but please do not respond with things like "you are not supposed to edit mdf, ndf files...." or "see msdn article....", etc. This is an advanced emergency case and I need a real hack so we can just get to the data in this corrupt database and export it to a fresh new database. I know there is a way to do this, but not knowing what the DBPROP system function does (i.e. does it look at the system tables, or does it actually open the file?) is keeping me from figuring out how to fool SQL into allowing me to read these files. Thanks for any help.
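
    If the temporary environment ever lets the database attach (for example in EMERGENCY mode), one way to "just get to the data" is a table-by-table copy into the fresh database, skipping tables whose filegroups are gone. A rough sketch assuming pyodbc and that the target schema has already been scripted into the fresh database; server and database names are placeholders:

        import pyodbc

        SRC = "DRIVER={SQL Server};SERVER=tempenv;DATABASE=CorruptDb;Trusted_Connection=yes"
        DST = "DRIVER={SQL Server};SERVER=tempenv;DATABASE=FreshDb;Trusted_Connection=yes"

        src, dst = pyodbc.connect(SRC), pyodbc.connect(DST)

        for row in src.cursor().tables(tableType="TABLE"):
            name = row.table_name
            try:
                # fetchall() is fine for a salvage pass; stream in batches
                # if individual partitions are too large to hold in memory.
                rows = src.cursor().execute("SELECT * FROM [%s]" % name).fetchall()
            except pyodbc.Error as exc:
                print("skipping %s (offline filegroup?): %s" % (name, exc))
                continue
            if rows:
                placeholders = ",".join("?" * len(rows[0]))
                cur = dst.cursor()
                cur.fast_executemany = True
                cur.executemany("INSERT INTO [%s] VALUES (%s)" % (name, placeholders),
                                rows)
                dst.commit()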

    Read the article

  • My USB bootable thumb drive no longer boots on a particular computer on which it previously worked

    - by LiamMeron
    I created a bootable drive, booting CrunchBang, about 2 months ago. About a month ago, I booted into it on another laptop. After shutting down, I have been unable to get it to boot on my own laptop, despite it having worked there previously. I can still boot into it on my desktop, and can also boot into it on all other computers that I have tried.

    If I plug it in when I am running Ubuntu, the home and / partitions mount, the only error being that for some reason my PC likes to try and mount the swap partition too, which naturally gives an error. The BIOS settings are all still set up to boot from USB. When I boot, all I get is a black screen with a white cursor, and it will stay there for as long as I leave it. If it is worth anything, I have the GRUB loader installed on the drive.

    The partitions look a little odd, but I am rather unfamiliar with how they are dealt with. The first partition, /, is sdb1 and has the bootable flag. The second partition is an extended partition, and is sdb2. The third partition, according to GParted, seems to be nested under the second; this is the swap partition, and it is sdb5. The fourth partition is my home partition, is sdb6, and is also nested under the extended partition. The first and fourth partitions are ext4. I don't know if that helps, but the more info, the better accuracy, generally. Thanks.

    EDIT: I tried reinstalling GRUB on the drive, but that didn't work. However, when I reinstalled GRUB on my laptop, I did it with my USB thumb drive plugged in. This caused the GRUB updater to find the /boot folder and add the proper details into my laptop's GRUB loader. I could log into CrunchBang from my laptop's GRUB, but I was still unable to boot directly from the drive; it looked like my BIOS was unable to find the bootloader. I was unable to install GRUB to a partition I had just created (a /boot partition at the start of the drive); GRUB just doesn't allow it. I think I'm going to have to reinstall #! on my drive, which won't be a great loss as I haven't put much time into it.
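
    For anyone else stuck here, the reinstall I attempted boils down to pointing grub-install at the drive's own /boot rather than the laptop's disk. A sketch of the equivalent steps, assuming the thumb drive shows up as /dev/sdb (verify with lsblk first — getting this wrong targets the wrong disk):

        import subprocess

        USB = "/dev/sdb"          # assumed device node for the thumb drive
        MOUNT = "/mnt/usbroot"    # temporary mount point

        for cmd in (
            ["sudo", "mkdir", "-p", MOUNT],
            ["sudo", "mount", USB + "1", MOUNT],            # sdb1 holds /boot
            ["sudo", "grub-install",
             "--boot-directory=" + MOUNT + "/boot", USB],   # MBR of the drive itself
            ["sudo", "umount", MOUNT],
        ):
            subprocess.run(cmd, check=True)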

    Read the article

  • Galaxy Tab 3 producing continuous LightSensor error in LogCat

    - by Richard Tingle
    I am using a Galaxy Tab 3 as a test device for writing an Android app. As such, I'm interested in the output of LogCat, which is being filled with these error-level messages. The device itself appears to work correctly: apps that rely on the light sensor respond to it correctly, and the number in the error itself goes down if the light sensor is obscured. If I wasn't using the device to develop apps, I wouldn't even be aware of the issue, but I believe it is an issue with the device itself, not my app: simply plugging the Tab 3 into the computer and using Eclipse ADT to look at the LogCat, without any app running, leads to these errors being shown. I know I could filter the LogCat to ignore these errors, but inconvenience aside, they concern me. A sample of the LogCat output is below (it generates errors continuously). This is on verbose, so it includes some debug-level (D/) messages as well as the error-level (E/) messages. How can I correct the device so it no longer generates these errors?

        06-11 10:08:45.789: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:45.992: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:46.195: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:46.398: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:46.601: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:46.804: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:47.007: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:47.210: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:47.414: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:47.617: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:47.820: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:48.023: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:48.039: D/dalvikvm(15201): GC_CONCURRENT freed 1947K, 17% free 16973K/20359K, paused 13ms+13ms, total 50ms
        06-11 10:08:48.226: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:48.429: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 13
        06-11 10:08:48.632: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 13
        06-11 10:08:48.632: D/STATUSBAR-NetworkController(472): refreshSignalCluster: data=0 bt=false
        06-11 10:08:48.835: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:49.039: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:49.242: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:49.445: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 13
        06-11 10:08:49.632: D/STATUSBAR-NetworkController(472): refreshSignalCluster: data=0 bt=false
        06-11 10:08:49.648: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
        06-11 10:08:49.851: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 13
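
    For what it's worth, while waiting on a firmware fix (the spam comes from the sensor HAL, not the app), the tag can at least be silenced at the LogCat level with a filterspec rather than an Eclipse filter; a small sketch (adb must be on the PATH):

        import subprocess

        # Tag:priority filterspec; S = silent. Everything else stays verbose.
        subprocess.run(["adb", "logcat", "LightSensor:S", "*:V"])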

    Read the article

< Previous Page | 709 710 711 712 713 714 715 716 717 718 719 720  | Next Page >