Search Results

Search found 20359 results on 815 pages for 'fixed length record'.

  • Postfix message ID originating process?

    - by Anders Braüner Nielsen
    Last night my postfix mail server (Debian Squeeze with dovecot, roundcube, opendkim and spamassassin enabled) started sending out spam from a single domain of mine, like these:

        $ cat mail.log | grep D6930B76EA9
        Jul 31 23:50:09 myserver postfix/pickup[28675]: D6930B76EA9: uid=65534 from=<[email protected]>
        Jul 31 23:50:09 myserver postfix/cleanup[27889]: D6930B76EA9: message-id=<[email protected]>
        Jul 31 23:50:09 myserver postfix/qmgr[7018]: D6930B76EA9: from=<[email protected]>, size=957, nrcpt=1 (queue active)
        Jul 31 23:50:09 myserver postfix/error[7819]: D6930B76EA9: to=<[email protected]>, relay=none, delay=0.03, delays=0.02/0/0/0, dsn=4.4.2, status=deferred (delivery temporarily suspended: lost connection with mta5.am0.yahoodns.net[66.196.118.33] while sending RCPT TO)

    The domain in question did not have any accounts enabled, only a catchall alias set through postfixadmin. Most of the emails were sent from a specific address I use frequently, but some were also sent from bogus addresses. None of the other virtual domains handled by postfix were affected. How can I find out what process was feeding postfix/sendmail, or get more information on where the messages originated? As far as I can tell php mail() wasn't used, and I've run several open relay tests. I did a little tinkering after the attack (removed winbind from the server and removed the IPv6 addresses from main.cf) and it seems to have subsided, but I still have no idea how my server was suddenly sending out spam. Maybe I fixed it - maybe I didn't. Can anyone help figure out how I was compromised? Anywhere else I should look? I've run Linux Malware Detect on recently changed files, but it found nothing.
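
    Since the log shows the message entering through postfix/pickup with uid=65534 (Debian's nobody user), it was almost certainly injected locally through the sendmail binary rather than over SMTP. A minimal sketch for catching the submitting process with auditd, assuming auditd is installed and the stock Debian paths:

        # Watch executions of the local submission binaries:
        auditctl -w /usr/sbin/sendmail -p x -k spamhunt
        auditctl -w /usr/sbin/postdrop -p x -k spamhunt

        # After the next burst, list who executed them
        # (interpreted output shows pid, ppid, uid, cwd and exe):
        ausearch -k spamhunt -i | less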

  • kill a hung mount process

    - by John P
    I have a virtual machine drive that ran out of space, so I shut down the VM and extended the volume using lvextend. After resizing the partition (ext3), I ran e2fsck on it, and it found and corrected errors. Unfortunately, when I ran e2fsck one more time, there were more errors that had to be fixed. I went through 3 rounds of e2fsck before I decided to try mounting it to clean up some space manually. I tried mounting it, but the mount process hung. I tried to "kill -9" the mount process, but that did not kill it. I killed the parent process, but that did not kill it either. Any ideas on how to kill a rogue mount process? Some evidence:

        $ ps -l 13292
        F S UID   PID  PPID  C PRI NI ADDR    SZ WCHAN TTY    TIME CMD
        4 R   0 13292     1 99  85  0    - 17964     - ?     11:27 mount /dev/mapper/xen7-123p3 /tmp/p3/

        $ lsof -p 13292
        COMMAND   PID USER  FD  TYPE DEVICE SIZE/OFF     NODE NAME
        mount   13292 root  cwd  DIR    9,2     4096 25264129 /root
        mount   13292 root  rtd  DIR    9,2     4096        2 /
        mount   13292 root  txt  REG    9,2    61656  2916434 /bin/mount
        mount   13292 root  mem  REG    9,2   144776 31457282 /lib64/ld-2.5.so
        mount   13292 root  mem  REG    9,2  1718232 31457284 /lib64/libc-2.5.so
        mount   13292 root  mem  REG    9,2    23360 31457291 /lib64/libdl-2.5.so
        mount   13292 root  mem  REG    9,2    43808 31457783 /lib64/libblkid.so.1.0
        mount   13292 root  mem  REG    9,2   247496 31457331 /lib64/libsepol.so.1
        mount   13292 root  mem  REG    9,2    95464 31457337 /lib64/libselinux.so.1
        mount   13292 root  mem  REG    9,2   154640 31457491 /lib64/libdevmapper.so.1.02
        mount   13292 root  mem  REG    9,2    17936 31457472 /lib64/libuuid.so.1.2
        mount   13292 root  mem  REG    9,2 56438208 12684878 /usr/lib/locale/locale-archive
        mount   13292 root   0u  CHR 136,11      0t0       13 /dev/pts/11 (deleted)
        mount   13292 root   1u  CHR 136,11      0t0       13 /dev/pts/11 (deleted)
        mount   13292 root   2u  CHR 136,11      0t0       13 /dev/pts/11 (deleted)

        $ umount -f /tmp/p3/
        umount2: Invalid argument
        umount: /tmp/p3/: not mounted
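
    For what it's worth, SIGKILL is only delivered when a process returns to user space, so a mount spinning or blocked inside the kernel will ignore it until the kernel call finishes or the machine reboots. A couple of hedged diagnostics to see where it is stuck, assuming /proc stack export and magic sysrq are available on this kernel:

        # Kernel-side stack of the stuck mount:
        cat /proc/13292/stack
        grep State /proc/13292/status

        # Dump all blocked tasks to the kernel log via magic sysrq:
        echo w > /proc/sysrq-trigger
        dmesg | tail -n 50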

  • "Enter/Return" Key with Word Mobile on Windows Mobile

    - by Maarx
    Some of our employees are using PDAs running Windows Mobile. I wish I could provide more data regarding versions, but frankly these things aren't my jurisdiction. Someone's simply come to me looking for what they thought would be a quick fix. They're using Word Mobile and the barcode scanner to record large volumes of data. The scanner's default action is to insert the scanned text exactly as if it had been input with the keyboard, and it puts a newline at the end. That's great, because it's exactly what we need it to do: separate data with newlines. The issue comes when the system can't read the barcode and the employee has to type in the data by hand. They've discovered a very peculiar quirk of Mobile applications: pressing the hardware Enter/Return key on the keyboard appears to save and exit the application. How do we change this behavior? They've realized that using the stylus to "click" the virtual on-screen keyboard's Enter/Return key will add the necessary newline, but it's a huge inconvenience for them. How do I fix the default behavior of the Enter/Return key for Word Mobile so that it inserts a newline instead?

  • Why is the wrong name server information at crsnic.net & gtld-servers.net?

    - by danorton
    Did I screw this up? I don't even know how this might have happened, so I'd like to learn. I'm trying out HostGator's reseller service and I bought a domain name through it, but I didn't want the default name servers, so I changed them during registration. After registration, the domain name record is correct everywhere except at whois-servers.net and whois.crsnic.net, and it looks like the DNS network is using that same information.

        $ whois -h whois.enom.com. example.com
        ...
        Name Servers:
        dns1.name-services.com
        dns2.name-services.com
        dns3.name-services.com
        dns4.name-services.com
        dns5.name-services.com
        ...

        $ whois -h whois.crsnic.net. example.com
        Domain Name: EXAMPLE.COM
        Registrar: ENOM, INC.
        Whois Server: whois.enom.com
        Referral URL: http://www.enom.com
        Name Server: NS1.HOSTGATOR.COM
        Name Server: NS2.HOSTGATOR.COM
        Status: clientTransferProhibited
        Updated Date: 01-jun-2010
        Creation Date: 31-may-2010
        Expiration Date: 31-may-2011
        >>> Last update of whois database: Tue, 01 Jun 2010 19:20:47 UTC <<<
        ...

        $ dig +norecurse @b.gtld-servers.net. example.com. NS
        ...
        ;; AUTHORITY SECTION:
        example.com.        172763  IN  NS  ns2.hostgator.com.
        example.com.        172763  IN  NS  ns1.hostgator.com.
        ...

    My next step is to let HostGator have a look, but first I want to better understand how this happened. Thanks.
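
    For background: crsnic.net and the gtld-servers.net hosts reflect the .com registry's view, which holds whatever delegation the registrar last pushed, independently of the registrar's own whois database. A hedged way to see the delegation every registry server is currently handing out (the loop just enumerates the standard a-m gtld servers; the NS records come back in the authority section, since the registry only delegates):

        for ns in a b c d e f g h i j k l m; do
            echo "== ${ns}.gtld-servers.net"
            dig +norecurse +noall +authority @${ns}.gtld-servers.net example.com NS
        done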

  • How do I enable additional debugging output from Ansible and Vagrant?

    - by Brian Lyttle
    I'm investigating Ansible for server and application provisioning. My application is currently provisioned with shell scripts in Vagrant. Rather than rewrite my scripts, I've taken a sample and attempted to deploy it. It appears to deploy fine, but I'm seeing a failure message after what looks like a series of successful steps:

        » vagrant provision ~/vm/blvagrant 1 ?
        [default] Running provisioner: ansible...

        PLAY [web-servers] ************************************************************

        GATHERING FACTS ***************************************************************
        ok: [192.168.9.149]

        TASK: [install python-software-properties] ************************************
        ok: [192.168.9.149] => {"changed": false, "item": ""}

        TASK: [add nginx ppa if it ubuntu 10.04 and up] *******************************
        ok: [192.168.9.149] => {"changed": false, "item": "", "repo": "ppa:nginx/stable", "state": "present"}

        TASK: [update apt repo] *******************************************************
        ok: [192.168.9.149] => {"changed": false, "item": ""}

        TASK: [install nginx] *********************************************************
        ok: [192.168.9.149] => {"changed": false, "item": ""}

        TASK: [copy fixed init for nginx] *********************************************
        ok: [192.168.9.149] => {"changed": false, "gid": 0, "group": "root", "item": "", "mode": "0755", "owner": "root", "path": "/etc/init.d/nginx", "size": 2321, "state": "file", "uid": 0}

        TASK: [service nginx] *********************************************************
        ok: [192.168.9.149] => {"changed": false, "item": "", "name": "nginx", "state": "started"}

        TASK: [write nginx.conf] ******************************************************
        ok: [192.168.9.149] => {"changed": false, "gid": 0, "group": "root", "item": "", "mode": "0644", "owner": "root", "path": "/etc/nginx/nginx.conf", "size": 1067, "state": "file", "uid": 0}

        PLAY RECAP ********************************************************************
        192.168.9.149 : ok=8 changed=0 unreachable=0 failed=0

        Ansible failed to complete successfully. Any error output should be
        visible above. Please fix these errors and try again.

    How do I go about getting additional debug information? I've already added ansible.verbose = true to my Vagrant config, which results in the dictionaries being displayed within the output above.
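
    A few hedged knobs for more detail (names as of the Ansible/Vagrant versions this era of output suggests):

        # In the Vagrantfile, raise Ansible's verbosity from true to full debug:
        #   ansible.verbose = "vvvv"

        # Debug Vagrant itself:
        VAGRANT_LOG=debug vagrant provision

        # Or bypass Vagrant and run the playbook directly against the box,
        # assuming a new enough Ansible for inline comma-separated inventory:
        ansible-playbook -i 192.168.9.149, -u vagrant -vvvv playbook.yml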

  • What's needed in a complete ASP.NET environment?

    - by Christian W
    We have an ASP 3.0 application with a few ASP.NET (2.0) ditties mixed in. (Our long-term goal is to migrate everything to ASP.NET, but that's not important for this issue.) Our current test/deploy workflow is like this:

    1. Use Notepad++ or VS2008 to fix a bug/feature (depending on what I have open)
    2. Open my virtual test-server
    3. Copy the fixed file over, either with Explorer or, if I can be bothered to open it, WinMerge
    4. Test that the fix works
    5. Close the virtual test-server
    6. Connect to our host with VPN
    7. Use WinMerge to update the files necessary
    8. Pray to higher powers that the production environment is not so different that something bombs

    To make things worse, only I have access to my "test-server", so I'm the only one testing it. I really want to make this a bit more robust; I even have a Subversion setup running. But I always forget to commit changes... and I don't even work in my checked-out folder, but in a copy of what is currently in production... Can someone recommend some good reading on deploying, testing, staging and stuff like that? I currently use VS2008 and want to use Subversion or Git (or any other free VCS). Since I'm the only developer, Team System is not really an option (cost-related). I have found myself developing an "improved" feature, only to find a bug in the same feature in the production system. And since my "improved" feature incorporated deleting some old functionality, I have to fix bugs directly in production... That's not a fun feeling... (I have inherited this system recently, so it's not directly my fault that it is like this ;) )
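
    One lightweight pattern that removes the manual copying, offered only as a sketch with illustrative paths (it assumes Git plus SSH access to the test server, which may not hold for a Windows box): push to a bare repository whose post-receive hook checks the tree out into the web root, so "deploy to staging" becomes a single push.

        # On the test server:
        git init --bare /srv/git/site.git
        cat > /srv/git/site.git/hooks/post-receive <<'EOF'
        #!/bin/sh
        GIT_WORK_TREE=/var/www/staging git checkout -f master
        EOF
        chmod +x /srv/git/site.git/hooks/post-receive

        # On the workstation:
        git remote add staging ssh://testserver/srv/git/site.git
        git push staging master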

  • Hardware choice: ASUS Eee Pad Slider or ASUS Eee Pad Transformer for web development?

    - by JamesM
    I was just wondering which of the following tablets seems better to get. I am a web developer, always using Unix/Linux/BSD, and I want a tablet that has a keyboard.

    http://gdgt.com/asus/eee/pad/slider/
    http://gdgt.com/asus/eee/pad/transformer/
    http://www.tweaktown.com/news/18311/asus_eee_pad_slider_transformer_tablets_with_physical_keyboard/index.html

    I know both are similar, but I'm not sure which one I should get. The Slider seems very nice, but its keyboard is fixed to the tablet, unlike the Transformer's. P.S.: I'm going to use one of the above to showcase my programming work at school, as well as just being a cheaper notebook than the $300 Windows 7 locked-down notebooks. By locked down, I mean we pay $300 for them and after 3 years we can do whatever we want to them; they are Lenovo ThinkPad Mini-10s, and what they have installed is all you get - they don't let us install whatever OS we like on them. And with the question on both of those links, I think that the Transformer would be better, but that is only taking into account it being both a tablet and a notebook. What I really care about is power; which one is more powerful? It will be running kFreeBSD-Debian-Squeeze with a Linux Mint theme and several other packages. Though I'm not going to run Windows (which I feel is bloated), I still want power. To help keep my computer from slowing down with cache, I will have a cron.d/hourly script cleaning out the cache memory.
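
    On the cache-cleaning plan: the kernel already reclaims page cache under memory pressure, so dropping it hourly usually costs performance rather than gaining it. If you still want it, a minimal sketch (illustrative path; needs root):

        #!/bin/sh
        # /etc/cron.hourly/drop-caches (illustrative)
        sync
        echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes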

  • Mouse clicks suddenly stopped working in Ubuntu

    - by DisgruntledGoat
    This is a weird one. For some reason, last night my mouse partially stopped working. Movement is fine, but the mouse buttons don't work. Mainly it's the left button, but occasionally the right click and scroll-wheel fail too. Initially I thought it could be the mouse itself (the left button seemed to get a bit "soft" recently), but I tried another mouse and had the same issue. Both are USB wireless optical mice. The keyboard is working okay 95% of the time; the only problem is that Alt+Tab doesn't seem to work, although both keys work fine independently. At the time it happened I was using Chrome: I dragged the scrollbar to scroll, and when I released the mouse it was still holding the scrollbar. I'm using Ubuntu 9.10; I upgraded weeks ago and everything was working fine, so I don't think it's related to that. I also hadn't run any updates (I have now, just in case something fixed it). But no luck. Any ideas?
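
    The stuck-scrollbar plus dead Alt+Tab pattern can mean X still believes a button or modifier is held down (an implicit grab), rather than a hardware fault. A hedged way to check from a terminal, assuming an X11 session:

        # List pointer devices, then watch raw button events for one of them:
        xinput list
        xinput test "<pointer device name or id>"   # substitute from the list

        # Or watch button events delivered to a test window:
        xev | grep -i button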

  • Sudden and frequent hangs on desktop computer: mobo or CPU fault?

    - by djechelon
    I have a desktop computer equipped with an ASUS Crosshair 2 Formula and a Phenom X6 3.2GHz CPU. My problem is that the computer will often hang all of a sudden and stop responding completely. When that occurs, the reset key is inoperative, and the power button turns the computer off but is unable to turn it back on; I have to physically disconnect the power cable. The problem can occur at any time: when I'm booting Windows, when I'm logging in, when I'm listening to a song, when I'm browsing the Internet, etc. It always occurs after a very few minutes of 3D gameplay. I thought it was a video card fault; I had 3 8800GTX cards, so I could try all combinations of them, but that didn't fix it. I thought it was a RAM problem, so I tried running with only a subset of my DDR2 banks, but that didn't fix it either. Almost every time I have to reset and reconfigure the BIOS (without AHCI, Win7 won't boot, so I need to restore a few things). If I enable AMD Live, Cool'n'Quiet or other things from the CPU configuration menu, I can be sure that the computer won't reach the Windows desktop in 99% of cases (it randomly hangs somewhere in the boot process or even in the BIOS POST). Another interesting thing is that during the POST process the computer always takes an unusually long time detecting USB devices (the LCD POSTer shows USB INIT); I've tried disconnecting all USB devices, but it didn't take less time to POST. The BIOS revision is 2702, the latest. Today I found a different behaviour once: during the boot screen I got a BSOD with error Stop 0x00000101, "A clock interrupt was not received on a secondary processor within the allocated time interval", which is usually related to overclocking - but I never overclocked my CPU. Judging from the description of my problem, hoping someone has had the same one and fixed it, and since I don't have a spare CPU or motherboard for replacement, I'd like to ask whether you think this is a problem with a faulty CPU or a faulty motherboard, and whether I can perform additional tests (I mean software tests, given my lack of spare components) to identify the component to replace.

  • Windows7 corrupted profile - prevention exists?

    - by Radek
    I have a dedicated Windows 7 (not on a domain) virtual machine for overnight automation testing. Some commands (mysqldump, tscon.exe) must be run under the administrator account. Last week the administrator account's profile was corrupted. I fixed it by renaming it in the registry and logging in as administrator. And today it is corrupted again. I use the administrator account only to run the above commands via runas. The computer is also restarted via cmd - the shutdown command - quite often, especially every night before automation testing starts. I checked the computer for viruses: I did a full scan using avast, although I believed the computer was clean. Any idea how to prevent the profile from getting corrupted again?

    Update: The first entry in the event log today is from 1.15am, and one of my scripts ran a runas command as administrator at exactly 1.15am. It was the second time runas was executed after the testing started, though. The same thing happened a second day in a row. Before the testing starts I need to copy one file that is locked, so I run handle.exe via runas to unlock it. That is what I think is causing the profile to get corrupted. I am not able to reproduce it by myself. The message from the event viewer is:

        Windows cannot load the locally stored profile. Possible causes of this error
        include insufficient security rights or a corrupt local profile.
        DETAIL - The process cannot access the file because it is being used by another process.

  • webmin's bind & nameserver - NS doesn't resolve

    - by user127518
    I've got a problem setting up a nameserver. Here are the details: the domain is http://stagingtestserver.com.au/ (I did some updates and now www.stagingtestserver.com.au won't resolve). I got some errors at www.intodns.com/stagingtestserver.com.au as well, and I could not ping ns1. and ns2. either. This is the record file under /var/named/stagingtestserver.com.au.hosts:

        $ttl 38400
        stagingtestserver.com.au.       IN SOA  ns1.stagingtestserver.com.au myemail\.here.gmail.com. (
                                                1341370630
                                                10800
                                                3600
                                                604800
                                                38400 )
        stagingtestserver.com.au.       IN A    202.4.229.161
        www.stagingtestserver.com.au.   IN A    202.4.229.161
        ftp.stagingtestserver.com.au.   IN A    202.4.229.161
        m.stagingtestserver.com.au.     IN A    202.4.229.161
        localhost.stagingtestserver.com.au. IN A 127.0.0.1
        webmail.stagingtestserver.com.au.   IN A 202.4.229.161
        admin.stagingtestserver.com.au. IN A    202.4.229.161
        mail.stagingtestserver.com.au.  IN A    202.4.229.161
        stagingtestserver.com.au.       IN MX 5 mail.stagingtestserver.com.au.
        ns1.stagingtestserver.com.au.   IN A    202.4.229.161
        ns2.stagingtestserver.com.au.   IN A    202.4.229.172
        stagingtestserver.com.au.       IN NS   ns1.stagingtestserver.com.au.
        stagingtestserver.com.au.       IN NS   ns2.stagingtestserver.com.au.

    Any thoughts, guys? Thanks, and I appreciate all your thoughts/help/(ahem, violent) reactions :)
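
    A hedged first step is to let BIND parse the zone itself, which flags record-level slips such as missing trailing dots (named-checkzone ships with BIND), and then to query each listed nameserver directly, bypassing any caches:

        named-checkzone stagingtestserver.com.au \
            /var/named/stagingtestserver.com.au.hosts

        dig @202.4.229.161 www.stagingtestserver.com.au A +norecurse
        dig @202.4.229.172 stagingtestserver.com.au NS +norecurse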

  • squid bypass for a domain

    - by krisdigitx
    I am using squid with adzap. Is it possible to have squid/adzap not cache a particular domain, e.g. cnn.com? This is my squid.conf file:

        #
        # Recommended minimum configuration:
        #
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32
        #acl localhost src ::1/128
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
        #acl to_localhost dst ::1/128

        # Example rule allowing access from your local networks.
        # Adapt to list your (internal) IP networks from where browsing
        # should be allowed
        acl localnet src 192.168.1.0/24
        acl localnet src 192.168.2.0/24

        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT

        #
        # Recommended minimum Access Permission configuration:
        #
        # Only allow cachemgr access from localhost
        http_access allow manager localhost
        http_access deny manager

        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports

        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports

        # We strongly recommend the following be uncommented to protect innocent
        # web applications running on the proxy server who think the only
        # one who can access services on "localhost" is a local user
        #http_access deny to_localhost

        #
        # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
        #
        # Example rule allowing access from your local networks.
        # Adapt localnet in the ACL section to list your (internal) IP networks
        # from where browsing should be allowed
        http_access allow localnet
        http_access allow localhost

        # And finally deny all other access to this proxy
        http_access deny all

        # Squid normally listens to port 3128
        http_port xxx.xxx.xxx.yyy:3128 transparent
        visible_hostname proxyserver.local

        # We recommend you to use at least the following line.
        hierarchy_stoplist cgi-bin ?

        # Uncomment and adjust the following to add a disk cache directory.
        cache_dir ufs /var/spool/squid 1024 16 256

        # Leave coredumps in the first cache dir
        coredump_dir /var/spool/squid

        # Add any of your own refresh_pattern entries above these.
        refresh_pattern ^ftp:           1440    20%     10080
        refresh_pattern ^gopher:        1440    0%      1440
        refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
        refresh_pattern .               0       20%     4320

        access_log /var/log/squid/squid.log squid
        access_log syslog squid
        redirect_program /usr/local/adzap/scripts/wrapzap

    Fixed using:

        acl allow_domains dstdomain www.cnn.com
        always_direct allow allow_domains
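
    A note on that fix, hedged on squid 2.6+ directive names (where the cache access list replaced the older no_cache): always_direct only forces the request to skip any cache_peer and go straight to the origin; it does not stop squid from storing the response. If the goal is to never cache the domain at all, the cache directive is the one that controls storage:

        # Serve cnn.com normally but never store its responses:
        acl nocache dstdomain .cnn.com
        cache deny nocache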

  • Exchange 2010 CAS Removal == Broken???

    - by Doug
    Hi there, I recently upgraded to Exchange 2010 and have a setup with 2 of my servers running CAS roles: EXCH01 and EXCH02. EXCH02 also happens to have a mailbox role where a lot of the users sit. EXCH01 is my front-facing CAS server, facing the net with SSL etc., with incoming mail moving through it as a hub transport server as well. As I was trying to lean things out in my VM environment, I removed the CAS role from EXCH02 and all hell broke loose. All the mail users that have a mailbox on EXCH02 had their homeMTA set to a deleted items folder in AD, and so did their msExchHomeServer properties. After a complete battle I manually fixed these back to the old values, and in the meantime reinstalled CAS on EXCH02 (management was going nuts with Outlook not working, so I just put things back the way they were in a hurry). As a strange aside, before I reset these to point at EXCH02 I tried EXCH01, and that failed. I still want to remove the CAS role from EXCH02, as it should really not have it (an error on my part during install/planning), and I would have thought that removing it would not cause the issues it did; I assumed that since there was another CAS server in the admin group, all would be good. Was I wrong in my assumption? And what can I do to complete this successfully the second time around? Do I need to rehome all the mailboxes to the CAS server? Is this a bug in the role uninstall?

  • Subdomain not hitting new site?

    - by Abe Miessler
    I have an existing site at my.site.com and I would like to set up a staging subdomain (staging.my.site.com). I have my DNS set up and directing staging.my.site.com to my server, but for some reason the Web Site I created for it in IIS is not being hit. Instead, when I go to staging.my.site.com it takes me to the original site. The site I created in IIS has a home directory that is totally different from the regular site's home directory. I have added one host header with the following information:

        IP Address: (All Unassigned)
        TCP port: 80
        Host Header Value: staging.my.site.com

    I was under the impression that with the setup described above, hitting staging.my.site.com in a web browser would bring up the staging site, but it does not. Can anyone see what I am doing wrong?

    UPDATE: One thing I noticed is that my A record maps to an IP address (let's say 1.1.1.1 for this example), and the host headers for the main site (my.site.com) include a 1.1.1.1 entry. Is this normal? Could it cause the problem I am talking about?

  • What's the piece of hardware listening on Facebook's or Wikipedia's IP address?

    - by Igor Ostrovsky
    I am trying to understand how massive sites like Facebook or Wikipedia work, out of intellectual curiosity. I have read about various techniques for building scalable sites, but I am still puzzled about one particular detail. The part that confuses me is that ultimately, DNS will map the entire domain to a single IP address, or a handful of IP addresses in the case of round-robin DNS. For example, wikipedia.org has only one type-A DNS record. So, people from all over the world visiting Wikipedia have to send a request to the one IP address specified in DNS. What is the piece of hardware that listens on the IP address for a massive site, and how can it possibly handle all the load coming from requests by users all over the world?

    Edit 1: Thanks for all the responses! Anycast seems like a feasible answer. Does anyone know of a way to check whether a particular IP address is anycast-routed, so that I could verify that this really is the trick used in practice by large sites?

    Edit 2: After more reading on the topic, it appears that anycast is not typically used for dynamic web content. Anycast is usually used for UDP (e.g., DNS lookups), or sometimes for static content. One interesting thing to note is that Facebook uses profile.ak.fbcdn.net to host static content like style sheets and JavaScript libraries. Each time I ping this name, I get a response from a different IP address. However, I can't tell whether this is anycast in action or a completely different technique. Back to my original question: as far as I can tell, even a large site will have a single expensive piece of load-balancing hardware listening on its handful of public IP addresses.
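
    A hedged way to gather evidence for anycast from your own machines (it can only suggest, not prove): trace to the same literal IP from hosts in different regions. Wildly different final hops and RTTs for one IP hint at anycast, while different IPs returned per resolver point to DNS-based distribution, the usual CDN technique, which fits the changing fbcdn.net addresses:

        HOST=profile.ak.fbcdn.net
        IP=$(dig +short $HOST | tail -n1)   # last line is the A record after any CNAMEs
        echo "resolved to $IP"
        mtr --report --report-cycles 10 $IP # repeat from other vantage points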

  • Splunk is fantastically expensive: What are the alternatives?

    - by samsmith
    This has been discussed before, but it has been several months, so it may be time to revisit it (there is an earlier discussion on Splunk alternatives). For the record, Splunk rocks, but the pricing is simply beyond what we can consider: when I spoke with Splunk today, the cost for a system to index 5 GB/day of data is over $30,000. That is more than we spend on SQL Server (by a large multiple), more than we spend on a rack of servers (by a multiple), etc. The Splunk sales team is correct that for $30K we get more value and functionality than if we spent the same building our own system, but it doesn't matter: the Splunk cost is simply too high (by a multiple). Soooooo, we are looking around! Is anyone out there building a Splunk-like system? Our basic needs:

    - Able to listen for syslog messages on multiple UDP ports
    - Able to index the incoming data in an async way
    - Some kind of search engine
    - Some kind of UI
    - An API to the search engine (to embed in our console)

    We currently need to index 3-5 GB/day, but need to be able to scale to 10 GB/day or more. We do not need a lot of history (30 days is fine). We use Windows 2008 and 2003 servers. Thanks for your thoughts!
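
    For the ingestion piece on its own, a hedged sketch of a collector (rsyslog legacy syntax, which would need a Linux box in front of the Windows servers): listen on several UDP ports and spool to per-host files that an indexer can pick up asynchronously:

        # /etc/rsyslog.conf excerpt
        $ModLoad imudp
        $UDPServerRun 514
        $UDPServerRun 1514
        $template PerHost,"/var/spool/syslog/%HOSTNAME%.log"
        *.* ?PerHost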

  • Oracle: Getting ORA-01195 and ORA-01110 when attempting resetlogs

    - by MacAnthony
    I am trying to get our database to start up. When I log in to sqlplus and do a startup, I get the message:

        Total System Global Area  534462464 bytes
        Fixed Size                  2215064 bytes
        Variable Size             331350888 bytes
        Database Buffers          192937984 bytes
        Redo Buffers                7958528 bytes
        Database mounted.
        ORA-01589: must use RESETLOGS or NORESETLOGS option for database open

    So I do a shutdown and a startup mount (which works fine) and then run:

        SQL> alter database recover using backup controlfile until cancel;
        alter database recover using backup controlfile until cancel
        *
        ERROR at line 1:
        ORA-00283: recovery session canceled due to errors
        ORA-19909: datafile 1 belongs to an orphan incarnation
        ORA-01110: data file 1: '/<path>/system01.dbf'

        SQL> alter database open resetlogs;
        alter database open resetlogs
        *
        ERROR at line 1:
        ORA-01195: online backup of file 1 needs more recovery to be consistent
        ORA-01110: data file 1: '/<path>/system01.dbf'

    I know I've used instructions to get past this error before, but I seem to be having trouble tracking them down. A bit of history: we wanted to refresh the data in this instance from another DB, so we attempted an expdp/impdp into it. The impdp did not complete correctly, got an end-of-file error message and hung (I still have the message in a log if it's important). Since the instance would still start at that point, we decided to use the hotbackup process we have to restore the DB. The hotbackups are from another server/instance. We went through the same process 2 weeks ago. We hit the issue above at the point of recreating the control file.
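
    For reference, the usual shape of an incomplete recovery from a hot backup, as a hedged sketch only: it assumes the restored datafiles all come from the same backup and that every archived redo log from that backup onward is available to feed in when prompted. ORA-19909 specifically suggests the controlfile and datafile disagree about the database incarnation, which this recovery path cannot bridge on its own.

        -- inside SQL*Plus, connected / as sysdba:
        STARTUP MOUNT
        RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL
        -- supply archived log paths at each prompt, then type CANCEL
        ALTER DATABASE OPEN RESETLOGS;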

  • Synchronize two directories on linux pc

    - by Gab
    I need a distributed filesystem (or a synchronization tool) that is capable of keeping a directory synchronized across 4 PCs. My requirements are:

    - offline access (data must be available offline on each PC)
    - preserve execution rights: some files are marked executable on a linux partition, and this flag should be replicated
    - efficient sync strategy: some of my files are 20GB and are changed quite often, but in very small parts (VirtualBox images); delta transmissions are welcome
    - efficient handling of space: no history for files, and files shouldn't be copied to temp directories "just in case you break it"
    - it must propagate deletions of files
    - modifications can happen on any of the 4 PCs and should be propagated when the other PCs are connected

    Other specs of my solution are:

    - Sync is over a LAN; the total amount of data to be synced is around 180GB, in some ten thousand files.
    - Changes are small, but can happen in big files.
    - At the moment I'm interested in a Linux-only solution.
    - Conflicts either don't happen or are solved with "last one wins".

    I haven't found any good solution. I've been trying:

    - unison: it is the only one working at the moment, but during the hashing phase it hangs my PC for some minutes, disk light steady on
    - SparkleShare: doesn't handle large files nicely, and it keeps a history of all your changes that grows indefinitely; they promise this will be fixed in future releases, but at the moment it still doesn't fit my needs
    - ownCloud: keeps a history of each file I change
    - Coda? (help! I couldn't set it up correctly!)
    - git-annex assistant: transforms all your files into symlinks and marks the original file as read-only ("just in case you make a mistake while you modify it"!); before you edit a file you have to issue a special command, "git-annex unlock", which creates a local copy of the file, and you have to remember to lock it again if you want it synchronized

    What should I try next?
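
    On the unison hang specifically, a hedged profile sketch (option names from unison 2.x, paths illustrative; it addresses the exec-bit, last-one-wins and re-hashing points, though 20GB images will still be slow on the first scan):

        # ~/.unison/sync.prf (illustrative)
        root = /home/gab/data
        root = ssh://pc2//home/gab/data
        perms = -1          # replicate all permission bits, incl. the exec flag
        prefer = newer      # "last one wins" conflict resolution
        fastcheck = true    # trust mtime+size instead of re-hashing big files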

  • KVM Hosting: How to efficiently replicate guests

    - by javano
    I have three KVM servers, each with 1 guest VM running directly on its local storage (so they are essentially each getting a dedicated box worth of computing power). In the event of a host failure I would like the guests replicated to at least one of the other hosts, so I can spin a guest up there until the failing host is fixed. I am curious about KVM cloning. I can clone a VM live or when it's suspended/shut down. Obviously suspended VMs will naturally be quicker to clone, but these three VMs comprise three parts of a single solution, so I don't want any one of them shut down. How can I efficiently clone these VMs between servers? I have had a couple of ideas, but are these insane, or is there a better method I have missed for my scenario?

    1. Set up a DRBD partition between boxes 1 and 2 where VM 1 runs from, so it is replicated between box 1 and box 2; repeat between boxes 2 & 3, and boxes 3 & 1. (This could be insane; I have never used DRBD, only read about it.)
    2. Just use standard KVM CLI clone options to perform live clones. (I'm dubious about this because I don't know how long it will take and what the performance impact will be.)
    3. Run a copy of each VM on at least one other host, and have the guest on one host export its data to the matching guest on another host, where it can import that data (scripting this on the guest).
    4. Some other way? Ideas welcome!

    Side note: These servers have 4x 15k SAS drives in a RAID 10, so they aren't rocketing fast, and, as I mentioned, each VM runs from the host's local storage - no NAS or SAN etc. That is why I am asking this question about guest replication. Also, this isn't about disaster recovery: guests will be exporting their data to a NAS over a VPN, so I am looking at how I can have them quickly spun up in a host-failure situation.
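
    On option 2, a hedged sketch of what the plain-clone route looks like (virt-clone copies the disk image and rewrites the name, UUID and MAC; it refuses to clone a running domain, so the guest has to stay paused for the duration of the disk copy, which is the real cost here; names and paths are illustrative):

        # Pause briefly, clone, resume:
        virsh suspend vm1
        virt-clone --original vm1 --name vm1-replica \
            --file /var/lib/libvirt/images/vm1-replica.img
        virsh resume vm1

        # Ship the clone to a standby host:
        rsync -av --inplace /var/lib/libvirt/images/vm1-replica.img \
            host2:/var/lib/libvirt/images/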

  • ATI firepro will not detect a second DVI-D monitor

    - by John
    OK, so, weird issue here. I have previously been running 6 screens off of 3 of the older ATI FirePro graphics cards, but they had a problem with the heat sink getting too hot and warping the PCB, resulting in total failure of the card. To replace my three dead cards I purchased a new-type ATI FirePro with the newer heat-sink design. I'm only using one at the moment, to make sure they've fixed the problem before I waste more money on 2 more cards, but this is where things start to get weird. The FirePros only have one port on them; they connect to two monitors via a splitter cable going from the one port to two DVI connectors for the screens. When I plug two identical monitors in via their DVI inputs, no matter what I do, Windows and Catalyst will only detect one screen. However, if I plug one of the screens into the card through its VGA input, with a VGA-to-DVI adaptor, it works fine. This confuses me greatly. I'm currently using the ATI FirePro 2270 graphics card with identical Dell U2311H screens. I can post the rest of the system spec as well if needed, but I wouldn't have thought it would make much difference, as it had no problem handling 6 screens before the graphics cards failed. Naturally, both Catalyst and the ATI drivers are the most current version. ATI tech support has been absolutely zero help; they seemed to get stumped as soon as I verified that both screens were plugged in and connected properly. Anyone have any ideas?

  • Internet explorer rejects cookies in kerberos protected intranet sites

    - by remix_tj
    I'm trying to build an intranet site using Joomla. The webserver uses HTTP Kerberos authentication with mod_auth_kerb. Everything works fine; the users get authenticated and so on. But if I try to log in to the administrator panel, I can't, because IE does not accept the needed cookies. There is no such problem with Firefox. The intranet site is called "intranet_new" and is hosted by webintranet04, under the directory /var/www/vhosts/joomla/intranet_new/. My virtualhost for intranet_new contains this:

        <Location />
            AuthType Kerberos
            AuthName "Kerberos Login"
            KrbMethodNegotiate On
            KrbMethodK5Passwd On
            KrbAuthRealms PROV.TV.LOCAL
            Krb5KeyTab /etc/apache2/HTTP.keytab
            require valid-user
        </Location>

    The same goes for the webintranet04 virtualhost, which is the default, points to /var/www and contains:

        <Location /vhosts/joomla/>
            AuthType Kerberos
            AuthName "Kerberos Login"
            KrbMethodNegotiate On
            KrbMethodK5Passwd On
            KrbAuthRealms PROV.TV.LOCAL
            Krb5KeyTab /etc/apache2/HTTP.keytab
            require valid-user
        </Location>

    The very strange problem I have is that if I open http://webintranet04/vhosts/joomla/intranet_new/administrator, IE allows me to log in, accepting the cookie. If I open http://intranet_new/administrator instead, I loop on the login page. Lastly, intranet_new is a CNAME record for webintranet04. This is only an IE problem. I need:

    - the admin interface to work with IE
    - the "kerberized" zone to accept cookies, because I am deploying other programs requiring cookies.

  • Is this way of using Excel 2007 Pivot tables for BI scalable?

    - by Sim
    Hi all,

    Background:

    - We need to consolidate sales data across the country to do analysis.
    - Our Internet connection/IT expertise/IT investment is not quite strong, therefore a full BI solution is out of the question.
    - I tried several SaaS BI solutions (GoodData, Zoho Reports) and while they're good, they don't seem to fully support what we need.
    - We're looking at about 2 million records for every 2 months.

    My current approach:

    - Our 10 sites currently gather data from all their branches and consolidate it into 1 Excel file with a pivot table and embedded source data.
    - At HQ, I will ask the 10 sites to send back those Excel files periodically.
    - We will import those Excel files into our MSSQL server.
    - There will be a master Excel file that has the same pivot table (as the ones from the site Excel files), whose datasource is the MSSQL server.

    More details:

    - For testing, I currently use MSSQL 2008 Express on my laptop.
    - So far, I have imported our transactions for the past 2 months and there are 2 million+ rows in 1 table in MSSQL (we just use 1 table, corresponding to our common pivot table structure). The DB size is ~600 MB.
    - The master Excel file, not including the source data, is just under 10 MB. Including the source data increases the size to 60 MB (so I suppose Office 2007 automatically compresses the data?).
    - I have tried using the pivot table (drag-and-drop fields) and the performance so far is OK (my laptop specs: C2D T7200, 3GB RAM, Windows XP).

    So my questions are:

    - If we're looking at a full year of transactions (roughly 15 million rows in MSSQL 2008 Express, 3.6 GB in size), is there any issue with that many rows in 1 table in SQL Express?
    - Is there any performance issue with the pivot table at that point? Can it still embed the source data? (I googled but didn't find the maximum size of source data Excel 2007 can embed.)
    - Any other suggestions on how we can do this better? Given that we can't afford a full BI solution, is there any lightweight/budget/SaaS BI that you can recommend?

    Thanks
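
    One number worth checking before committing to the full-year plan (offered as a caution, not a recommendation): SQL Server 2008 Express caps each database at 4 GB of data (only the later 2008 R2 Express raised this to 10 GB), so a projected 3.6 GB table leaves very little headroom for indexes and growth. A quick way to watch the size from a command prompt:

        sqlcmd -S .\SQLEXPRESS -Q "EXEC sp_spaceused"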

  • IPv6 seems to be enabled - How do I configure it without interfering with IPv4?

    - by Mister IT Guru
    I noticed that some of my CentOS boxes have IPv6 enabled and seem to have addresses. I have no problem with this, but I would like to get a handle on it, and even connect to them using IPv6. This would really help if, for any reason, DHCP has a hiccup. But I'm a bit lost as to where the configuration on my CentOS box is. (I am also on Google researching this, but I like Server Fault! :) ) I am hoping that I would be able to log in to this via the VPN, because every now and then that DHCP device has a bad morning and needs to be restarted. (I'm also looking into this issue, but someone else handles that - management separation gone mad!) It's a remote client, so it would be a lot easier for me to connect to these systems, which seem to self-configure, and use that as a pivot via ssh tunnels to get to the other remote devices and continue to manage them while our main route is fixed. I guess my questions are:

    1. How can I configure IPv6 without interfering with IPv4?
    2. On CentOS, can I influence this auto-configuration I seem to be seeing?
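
    The self-assigned addresses are most likely SLAAC from router advertisements (plus the link-local fe80:: address every IPv6 interface gets). A minimal static setup runs alongside IPv4 untouched; a sketch with illustrative addresses, using the stock CentOS initscripts options:

        # /etc/sysconfig/network
        NETWORKING_IPV6=yes

        # /etc/sysconfig/network-scripts/ifcfg-eth0 (additions)
        IPV6INIT=yes
        IPV6ADDR=2001:db8:0:1::10/64
        IPV6_DEFAULTGW=2001:db8:0:1::1
        IPV6_AUTOCONF=no    # set to yes to keep accepting router advertisements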

  • Outbound mail issue during Exchange 2003 migration

    - by user27574
    Dear all, I am having an outbound email issue during an Exchange 2003 migration. Basically, we are migrating Exchange 2003 to new hardware; both servers are Server 2003 based. Everything ran smoothly while setting up and installing Exchange 2003 on the new box, and the public folders are all replicated. My issues are shown below.

    1. After starting to move users' mailboxes to the new Exchange 2003 server, they receive some undeliverable and bounced-back mail from some vendors. If I move a few users back to test, they have no problem at all after moving back to the old Exchange 2003 server.
    2. Another issue is that our company has BlackBerry users, and we don't have BES. Under each user's mailbox we have a forward rule set up, so that both the user's inbox and the BlackBerry receive email. For users who are moved to the new Exchange 2003 server, email only reaches the BlackBerry user's inbox; mail cannot be forwarded to the BlackBerry at all, and the SMTP queue stacks up and keeps retrying until the time expires.

    Since not all emails that the users send out from the new Exchange server have problems, I am not able to narrow down the issue. Can anyone give me some ideas? Could this be MX-record / reverse-DNS related? Or a firewall NAT rule setting? Thanks.
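
    For the reverse-DNS theory, a hedged quick check runnable from any Linux/Unix box (nothing Exchange-specific): verify that the new server's public IP has a PTR record and that the PTR's name resolves back to the same IP, since many receiving systems defer or reject on a mismatch. The IP below is illustrative.

        IP=203.0.113.25
        dig +short -x $IP                   # PTR for the sending IP
        dig +short $(dig +short -x $IP)     # does that name point back to $IP?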

  • Unable to ssh to a Linux VM after a day

    - by jogabonito
    I have a machine running 4 VMs on it. There is one Fedora VM which is causing me some trouble. The IPs of the VMs are something like 10.100.100.*. I have a Windows PC which is on the same network; it has the IP 10.100.25.77. When I reboot the Fedora VM, I am able to ping it from my Windows PC as well as ssh to it with putty. The next day, I can't ping it or ssh to it from my Windows PC. However, I can ping and ssh to the other VMs on the machine, and if I ssh to one of the other VMs, I can ping and ssh to the Fedora VM from there. Next, if I restart it, things go back to normal and I can access it without any issues. The IP of the VM doesn't change after rebooting, and it is statically assigned. I would like to know what is causing this and how to get it fixed. As a last resort, I am thinking of running a cron job to restart the VM every night; it is not a critical server, but it will generally be used occasionally in the daytime.
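
    Since hosts on the VM's own segment can still reach it while the Windows PC (on a different subnet) cannot, one hedged thing to rule out is a stale or conflicting ARP entry for the Fedora VM at the gateway, e.g. another machine answering for the same statically assigned address. From one of the working VMs on the same segment (the IP and interface below are illustrative):

        # Does more than one MAC reply for the Fedora VM's address?
        arping -I eth0 -c 4 10.100.100.10

        # On the Fedora VM itself, check whether its default route still works:
        ip route show
        ping -c 3 10.100.25.77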
