Search Results

Search found 49789 results on 1992 pages for 'mysql insert id'.


  • Sharing disk volumes across OpenVZ guests to reduce Package Management Overhead

    - by andyortlieb
    Is it feasible to create a single "master" OpenVZ guest that would only be used for package management, and use something like mount --bind on several other OpenVZ guests to trick them into using the environment installed by the master guest? The point of this would be for users to maintain their own containers and yet stay in sync with the master development environment, so they'll always have the latest & greatest requirements without worrying too much about system administration. If they need to install their own packages, they could put them in /opt or /usr/local (or set a path to their home directory). To rephrase, I would like several (developers', for example) OpenVZ guests whose /bin, /usr (and so on) actually refer to the same disk location as that of a master OpenVZ guest, which can be started up to install and update common packages for the environment shared by all of this group of OpenVZ guests. For what it's worth, we're running Debian 6.

    Edit: I have tried mounting (bind, and read-only) /bin, /lib, /sbin and /usr in this fashion, and it refuses to start the containers, stating that files are already mounted or otherwise in use:

        Starting container ...
        vzquota : (error) Quota on syscall for id 1102: Device or resource busy
        vzquota : (error) Possible reasons:
        vzquota : (error)  - Container's root is already mounted
        vzquota : (error)  - there are opened files inside Container's private area
        vzquota : (error)  - your current working directory is inside Container's
        vzquota : (error)    private area
        vzquota : (error) Currently used file(s):
        /var/lib/vz/private/1102/sbin
        /var/lib/vz/private/1102/usr
        /var/lib/vz/private/1102/lib
        /var/lib/vz/private/1102/bin
        vzquota on failed [3]

    If I unmount these four volumes and start the guest, and then mount them after the guest has started, the guest never sees them mounted.
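
    One avenue worth testing (a sketch, not a confirmed fix): OpenVZ runs a per-container action script, /etc/vz/conf/<CTID>.mount, after the container's root is mounted, which is the usual place to set up bind mounts without touching the private area that vzquota scans. The container ID 1102 comes from the error output above; the master path is an assumption about your layout.

        #!/bin/bash
        # /etc/vz/conf/1102.mount : hypothetical bind-mount hook for guest 1102.
        # $VE_ROOT and $VE_CONFFILE are provided by vzctl when it runs this script;
        # /vz/master is an assumed path to the master guest's tree.
        source /etc/vz/vz.conf
        source "$VE_CONFFILE"
        for d in bin sbin lib usr; do
            mount -n --bind -o ro "/vz/master/$d" "$VE_ROOT/$d"
        done

    Mounting into $VE_ROOT from the hook, rather than into the private area before start, may avoid the "Currently used file(s)" complaint, since vzquota only inspects the private area.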

    Read the article

  • How to set up ProxMox 1.9 on VPN?

    - by Gnudiff
    Disclaimer: I have only rudimentary knowledge of VPNs. I would love to learn about them properly, but at the moment I really need to make this work on short notice. I am trying to set up a ProxMox virtualization platform in an existing network. The network currently consists of several servers running VMWare free edition. There is some sort of VPN defined in the switch. For the VMWare management interface to be accessible, a checkbox has to be ticked in the network settings for VPN and the VPN id entered. I didn't notice any such configuration option during the ProxMox installation, so my ProxMox VE on the same physical server, using the same manual IP settings (ip/nm/gw), is not accessible. As I understand it, I should touch ProxMox's underlying Debian config in /etc/network/interfaces, but I have no idea what to aim for: do I specify the settings for eth0, or do I make a virtual interface? How do I make it accessible for both the ProxMox VE and the future VMs underneath it? I read the ProxMox installation guide, but unfortunately it presumes a better understanding of VPNs than I have. A config template or similar would be appreciated. Thanks in advance.
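
    A guess at a starting template, assuming the "VPN id" the switch wants is actually an 802.1Q VLAN tag (which is what VMWare's network setting usually maps to); replace 42 and the addresses with your values:

        # /etc/network/interfaces : hypothetical Proxmox VE 1.9 config with a
        # VLAN-tagged bridge; the "vlan" package is needed for eth0.42 to come up
        auto lo
        iface lo inet loopback

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.1.10
            netmask 255.255.255.0
            gateway 192.168.1.1
            bridge_ports eth0.42      # tag traffic with VLAN id 42
            bridge_stp off
            bridge_fd 0

    Guests attached to vmbr0 would then share the same tagged segment as the management interface, which covers both halves of the question.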

    Read the article

  • "cannot receive new filesystem stream: invalid backup stream" error when unpacking flash archive on solaris 10

    - by Bovril
    I've searched around but I'm having no luck with some peculiar behavior of a flash archive. I'm using HP Server Automation 9.14 to deploy the OS. I'm creating a Solaris 10 flash archive to create a snapshot default build in our environment. I create the flash archive with:

        # flar create -c -S -n g8-solaris10-u10 g8-solaris10-u10.flar

    It seems to create the file without any problems (exit status 0). When deploying to a new system (same hardware), it extracts to a point and then bails. The last error in the log I can see is:

        Extracted 2047.00 MB ( 82% of 2488.98 MB archive)
        ERROR: Could not read file (172.27.118.100:/media/opsware/sunos/flar/g8-solaris10-u10.flar
        ERROR: Errors occurred during the extraction of flash archive.
               The file /tmp/flash_errors contains the list of errors encountered
        ERROR: Could not extract Flash archive
        ERROR: Flash installation failed

    The error log contained the following message:

        cannot receive new filesystem stream: invalid backup stream

    A previous version of this flash archive (1.8 GB) worked OK, so I suspect size may be a factor. The source system (the one the flash archive is an image of) is an HP BL460C Gen8; some more info below.

    OS version info:

        # uname -a
        SunOS testhostname 5.10 Generic_147441-01 i86pc i386 i86pc
        # who -r
        .        run-level 3  Oct 15 08:15     3      0  S

    Disks:

        # echo | format
        Searching for disks...done
        AVAILABLE DISK SELECTIONS:
        0. c0t0d0 <DEFAULT cyl 17841 alt 2 hd 255 sec 63>
           /pci@0,0/pci8086,3c06@2,2/pci103c,3355@0/sd@0,0
        Specify disk (enter its number): Specify disk (enter its number):

    Zpools:

        # zpool list
        NAME   SIZE  ALLOC  FREE  CAP  HEALTH  ALTROOT
        rpool  136G  24.6G  111G  18%  ONLINE  -

    Zones:

        # zoneadm list -cv
        ID NAME    STATUS   PATH  BRAND   IP
         0 global  running  /     native  shared

    The extraction stopping at 2047 MB seems suspiciously close to a 2048 MB (2 GiB) boundary, which is concerning. Any help would be greatly appreciated. Thanks
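
    One quick diagnostic sketch, using the path from the error output above: check whether the archive can be read past the 2 GiB mark over the same NFS path the installer uses, to separate a transport or large-file limit from a corrupt archive stream.

        # skip the first 2048 x 1 MiB blocks, then try to read the remainder
        dd if=/media/opsware/sunos/flar/g8-solaris10-u10.flar of=/dev/null bs=1024k skip=2048

    If dd errors out at the same spot, the problem is the transport or the file itself rather than the flar stream format.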

    Read the article

  • Member Status: Inquorate in RHEL 5.6

    - by Eugene S
    I've encountered a strange issue. I had to change the time on my Linux RHEL cluster system. I did it with the following command as the root user:

        date +%T -s "10:13:13"

    After doing this, a message appeared relating to <emerg> #1: Quorum Dissolved (however, I didn't capture the message completely). In order to investigate the issue I looked at /var/log/messages and discovered these errors. Below is the output of a few commands I ran while trying to investigate; however, I don't have enough knowledge to make use of this information.

        [root@system1a ~]# clustat
        Cluster Status for system4081 @ Sun Mar 25 11:45:48 2012
        Member Status: Inquorate

         Member Name       ID   Status
         ------ ----       ---- ------
         chb_sys1a            1 Online, Local
         chb_sys2a            2 Offline

        [root@system1a ~]# cman_tool nodes
        Node  Sts   Inc   Joined               Name
           1   M    872   2012-03-25 08:43:07  chb_sys1a
           2   X      0                        chb_sys2a

        [root@system1a ~]# qdiskd -f -d
        [17654] debug: Loading configuration information
        [17654] debug: 0 heuristics loaded
        [17654] debug: Quorum Daemon: 0 heuristics, 1 interval, 10 tko, 0 votes
        [17654] debug: Run Flags: 00000035
        [17654] info: Quorum Daemon Initializing
        stat: Bad address
        [17654] crit: Initialization failed

    I tried to search through the internet and found a quite similar issue here. However, for some reason I am not able to access the bug on bugzilla. The link to the bug is here
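
    A possible way back to quorum while node 2 is down, sketched from the standard cman toolbox rather than anything specific to this cluster: tell cman to expect only the votes it can currently see, and check whether the cluster config declares the two-node special case.

        # reduce expected votes so the surviving node regains quorum
        cman_tool expected -e 1

        # verify the two-node exception is enabled in the cluster config;
        # expected: <cman two_node="1" expected_votes="1"/>
        grep two_node /etc/cluster/cluster.conf

    The qdiskd "stat: Bad address" failure also suggests the quorum disk device may be missing or misdeclared, which is worth checking before relying on qdiskd votes.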

    Read the article

  • nameserver spoiling avahi multicast name resolution of .local domain

    - by Doug Coburn
    After trying to ping a machine on my local network, I noticed that I was trying to hit address 66.152.109.24. This is an external public address. Resolution should have occurred via avahi mDNS. I ran dig to see how the name resolution worked, and my Qwest/CenturyLink name server was returning results for my .local domain queries! I tried a random name and got the same IP address result.

        $ dig jakdafj.local

        ; <<>> DiG 9.8.1-P1-RedHat-9.8.1-3.P1.fc15 <<>> jakdafj.local
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58410
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;jakdafj.local.                 IN      A

        ;; ANSWER SECTION:
        jakdafj.local.          10      IN      A       66.152.109.24
        jakdafj.local.          10      IN      A       204.232.231.46

        ;; Query time: 104 msec
        ;; SERVER: 205.171.3.25#53(205.171.3.25)
        ;; WHEN: Sat Mar 24 20:40:17 2012
        ;; MSG SIZE  rcvd: 63

    Am I missing something, or is my DNS name server at 205.171.3.25 corrupted?
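
    For what it's worth, the usual guard against this (a sketch of the stock Fedora/avahi setup, not specific to this box) is to make the resolver answer .local queries via mDNS and stop before ever asking unicast DNS:

        # /etc/nsswitch.conf : hosts line with the mdns4_minimal guard
        hosts: files mdns4_minimal [NOTFOUND=return] dns

    With [NOTFOUND=return] in place, a .local name that mDNS cannot resolve fails immediately instead of falling through to the ISP resolver and its wildcard answers. Note that plain dig bypasses nsswitch entirely, so dig will still show the ISP's bogus records even when ping and friends behave.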

    Read the article

  • how to uninstall ubuntu 8 from ubuntu 10 dual boot

    - by umar
    I have Ubuntu 8.04 and Ubuntu 10.04 on my laptop, and I want to reclaim all the Ubuntu 8 space so that I have just one operating system on my laptop. How can I do it? The output of sudo fdisk -l is as follows:

        $ sudo fdisk -l

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x31a431a3

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1        4959    39833136   83  Linux
        /dev/sda2            4960        5233     2200905   82  Linux swap / Solaris
        /dev/sda3            5234       12852    61192552   83  Linux
        /dev/sda4           12852       19458    53062657    5  Extended
        /dev/sda5           12852       19182    50847744   83  Linux
        /dev/sda6           19182       19458     2213888   82  Linux swap / Solaris

    I don't know which of sda1, ..., sda6 Ubuntu 8 is on. How can I find that out? The actual task is that I think a lot of space is devoted to Ubuntu 8. If there is no easy way to get rid of it, then I want to repartition the disk so that about 50 GB of hard disk space is given to Ubuntu 10's home folder from Ubuntu 8's home folder. But I hope there is an easy way to get rid of Ubuntu 8 altogether and just have Ubuntu 10 on my system.
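
    A small sketch for the "which partition is which" part (read-only mounts, so it is safe to run; the partition list comes from the fdisk output above):

        # mount each Linux partition read-only and print its release info
        sudo mkdir -p /mnt/probe
        for p in sda1 sda3 sda5; do
            echo "== /dev/$p =="
            sudo mount -o ro /dev/$p /mnt/probe && cat /mnt/probe/etc/lsb-release
            sudo umount /mnt/probe
        done

    DISTRIB_RELEASE=8.04 in the output marks the install to delete (a partition with no /etc at all is likely a separate /home). Once identified, its partitions can be removed and the space handed to Ubuntu 10 with GParted from a live CD.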

    Read the article

  • Dual booting Linux/Win7, Grub refuses to load Win7

    - by JohnB
    Decided to give Linux Mint a try (Ubuntu's interface annoys me), so I installed it with the intention of dual booting with Windows 7. Installation went fine, but now I can only boot into Linux Mint. Grub lists two Windows 7 menu options, but selecting either of them causes an "unknown file system" error and dumps me into a Grub recovery prompt. There, I have to manually reset the root and prefix options, as they reset to hd0,msdos6 when they should be hd0,msdos5. I ran Boot Repair twice, once to fix grub errors, once to rebuild the MBR, but it didn't fix anything. Here is the log: http://paste.ubuntu.com/1029675/

    fdisk output:

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      206847      102400    7  HPFS/NTFS/exFAT
        /dev/sda2          206848  1486249145   743021149    7  HPFS/NTFS/exFAT
        /dev/sda3      1486249982  1953523711   233636865    5  Extended
        /dev/sda5      1486249984  1945141247   229445632   83  Linux
        /dev/sda6      1945143296  1953523711     4190208   82  Linux swap / Solaris

    grub.cfg:

        ### BEGIN /etc/grub.d/30_os-prober ###
        menuentry "Windows 7 (loader) (on /dev/sda1)" --class windows --class os {
            insmod part_msdos
            insmod ntfs
            set root='(hd0,msdos1)'
            search --no-floppy --fs-uuid --set=root 86184D18184D091F
            chainloader +1
        }
        menuentry "Windows 7 (loader) (on /dev/sda2)" --class windows --class os {
            insmod part_msdos
            insmod ntfs
            set root='(hd0,msdos2)'
            search --no-floppy --fs-uuid --set=root 56D84F84D84F60FB
            chainloader +1
        }
        ### END /etc/grub.d/30_os-prober ###

    I have found a few similar troubleshooting guides, but so far no amount of updating/configuring Grub has been successful. Last resort is, I suppose, to use the W7 recovery disc and start over. Thanks in advance!

    Linux Mint 13 Maya, 64-bit
    Windows 7 Home Edition, 64-bit
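
    One sanity check worth running before reaching for the recovery disc (a sketch; the UUIDs below are the ones os-prober wrote into grub.cfg above): confirm that the UUIDs Grub searches for still match what the NTFS partitions actually carry.

        sudo blkid /dev/sda1 /dev/sda2   # compare against 86184D18184D091F / 56D84F84D84F60FB
        sudo update-grub                 # regenerate the menu if they differ

    The "unknown file system" error is consistent with the search line landing Grub on the wrong partition when the stored UUID no longer exists on disk.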

    Read the article

  • How do I install the main repositories for RHEL6

    - by eisaacson
    We've set up RHEL6 on a new server. As far as we can tell, our subscription is all set up properly. However, when I run yum repolist, it doesn't show any repositories. /etc/yum.repos.d/redhat.repo is empty. I tried pasting in the content from another RHEL6 server's redhat.repo, but as soon as I run yum, it wipes it out again. I just need to get the basic Red Hat repositories set up so I can install packages.

    EDIT: Using the GUI, I went to System > Administration > Red Hat Subscription Manager. Under the 'Products' tab, it did not show any products.

    EDIT: When I run yum update, here's what I get:

        # yum update
        Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
        This system is receiving updates from Red Hat Subscription Management.
        Setting up Update Process
        No Packages marked for Update

    When I log in to the Red Hat customer portal, it shows the subscription as active.

    EDIT: To make sure I wasn't having a subscription issue, I re-registered and re-subscribed. I get all the same results.

        # subscription-manager register --force
        # subscription-manager subscribe --pool=*redacted*

    EDIT: Contents of /etc/yum.conf:

        [main]
        cachedir=/var/cache/yum/$basearch/$releasever
        keepcache=0
        debuglevel=2
        logfile=/var/log/yum.log
        exactarch=1
        obsoletes=1
        gpgcheck=1
        plugins=1
        installonly_limit=3

    Contents of /etc/yum/pluginconf.d/rhnplugin.conf:

        [main]
        enabled = 0
        gpgcheck = 1
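
    A sketch of the usual refresh sequence when redhat.repo stays empty despite an active subscription (subscription-manager regenerates that file from the attached entitlement certificates, which is why hand-edits get wiped):

        # clear stale metadata and ask the entitlement server to regenerate certs
        yum clean all
        subscription-manager refresh
        subscription-manager repos --list   # should now enumerate the RHEL 6 repos
        yum repolist

    If repos --list is still empty, the attached pool may not carry content entitlements for this product, which would point back at the subscription itself rather than the client.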

    Read the article

  • cisco 2900xl - SNMP - Get mac address of device connected to an interface

    - by ankit
    Hello all, basically what I want to do is find out the MAC address of a device plugged into an interface on the switch (FastEthernet0/1, for example). Reading through the switch documentation, I found that I can configure an SNMP trap to make the switch notify me of any new MAC address it detects, using the command:

        snmp-server enable traps mac-notification

    But for some reason my switch does not support this feature. The only options I see are:

        CORE_SWITCH(config)#snmp-server enable traps ?
          c2900            Enable SNMP c2900 traps
          cluster          Enable Cluster traps
          config           Enable SNMP config traps
          entity           Enable SNMP entity traps
          hsrp             Enable SNMP HSRP traps
          snmp             Enable SNMP traps
          vlan-membership  Enable VLAN Membership traps
          vtp              Enable SNMP VTP traps
          <cr>

    So the other way would be for me to run a cron job on my gateway to poll the switch periodically over SNMP for new MAC addresses. I have looked everywhere but can't seem to find the OID that would provide this information. Any help would be very much appreciated! Here's the output from "show version" on my switch:

        Cisco Internetwork Operating System Software
        IOS (tm) C2900XL Software (C2900XL-C3H2S-M), Version 12.0(5.4)WC(1), MAINTENANCE INTERIM SOFTWARE
        Copyright (c) 1986-2001 by cisco Systems, Inc.
        Compiled Tue 10-Jul-01 11:52 by devgoyal
        Image text-base: 0x00003000, data-base: 0x00333CD8

        ROM: Bootstrap program is C2900XL boot loader

        CORE_SWITCH uptime is 1 hour, 24 minutes
        System returned to ROM by power-on
        System image file is "flash:c2900XL-c3h2s-mz.120-5.4.WC.1.bin"

        cisco WS-C2912-XL (PowerPC403GA) processor (revision 0x11) with 8192K/1024K bytes of memory.
        Processor board ID FAB0409X1WS, with hardware revision 0x01
        Last reset from power-on

        Processor is running Enterprise Edition Software
        Cluster command switch capable
        Cluster member switch capable
        12 FastEthernet/IEEE 802.3 interface(s)

        32K bytes of flash-simulated non-volatile configuration memory.
        Base ethernet MAC Address: 00:01:42:D0:67:00
        Motherboard assembly number: 73-3397-08
        Power supply part number: 34-0834-01
        Motherboard serial number: FAB040843G4
        Power supply serial number: DAB05030HR8
        Model revision number: A0
        Motherboard revision number: C0
        Model number: WS-C2912-XL-EN
        System serial number: FAB0409X1WS
        Configuration register is 0xF

    Thanks,
    -ankit
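
    For the polling route, the MAC table lives in the standard BRIDGE-MIB, which Cisco exposes per VLAN via "community string indexing" (community@vlan). A sketch, assuming SNMP v2c, a read community of "public" and VLAN 1; adjust to taste:

        # dot1dTpFdbAddress: learned MAC addresses, per VLAN
        snmpwalk -v2c -c public@1 <switch-ip> 1.3.6.1.2.1.17.4.3.1.1
        # dot1dTpFdbPort: bridge port each MAC was learned on
        snmpwalk -v2c -c public@1 <switch-ip> 1.3.6.1.2.1.17.4.3.1.2

    Mapping the bridge port back to an ifIndex (and thus to FastEthernet0/1) goes through dot1dBasePortIfIndex, 1.3.6.1.2.1.17.1.4.1.2. Diffing successive walks from cron would give the "new MAC appeared" signal the unsupported trap would have provided.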

    Read the article

  • Redis 2.0.3 would not let go of deleted appendonly.aof file after BGREWRITEAOF

    - by Alexander Gladysh
    Ubuntu 10.04.2, Redis 2.0.3 (more details at the end of the question). My AOF file for Redis is getting too large, to the point where it soon threatens to take up all the free disk space on my small-HDD VPS box:

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvda              32G   24G  6.7G  78% /

        $ ls -la
        total 3866688
        drwxr-xr-x  2 redis redis       4096 2011-03-02 00:11 .
        drwxr-xr-x 29 root  root        4096 2011-01-24 15:58 ..
        -rw-r-----  1 redis redis 3923246988 2011-03-02 00:14 appendonly.aof
        -rw-rw----  1 redis redis   32356467 2011-03-02 00:11 dump.rdb

    When I run BGREWRITEAOF, the AOF file shrinks, but disk space is not freed:

        $ ls -la
        total 95440
        drwxr-xr-x  2 redis redis     4096 2011-03-02 00:17 .
        drwxr-xr-x 29 root  root      4096 2011-01-24 15:58 ..
        -rw-rw----  1 redis redis 65137639 2011-03-02 00:17 appendonly.aof
        -rw-rw----  1 redis redis 32476167 2011-03-02 00:17 dump.rdb

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvda              32G   24G  6.7G  78% /

    Sure enough, Redis is still holding the deleted file:

        $ sudo lsof -p6916
        COMMAND     PID  USER  FD  TYPE DEVICE   SIZE/OFF   NODE NAME
        ...
        redis-ser  6916 redis   7r  REG  202,0 3923957317 918129 /var/lib/redis/appendonly.aof (deleted)
        ...
        redis-ser  6916 redis  10w  REG  202,0   66952615 917507 /var/lib/redis/appendonly.aof
        ...

    How can I work around this issue? I can restart Redis this time, but I would really like to avoid doing that on a regular basis. Note that I cannot upgrade to 2.2 (an upgrade to 2.0.4 is feasible, though). More information on my system:

        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 10.04.2 LTS
        Release:        10.04
        Codename:       lucid

        $ uname -a
        Linux my.box 2.6.32.16-linode28 #1 SMP Sun Jul 25 21:32:42 UTC 2010 i686 GNU/Linux

        $ redis-cli info
        redis_version:2.0.3
        redis_git_sha1:00000000
        redis_git_dirty:0
        arch_bits:32
        multiplexing_api:epoll
        process_id:6916
        uptime_in_seconds:632728
        uptime_in_days:7
        connected_clients:2
        connected_slaves:0
        blocked_clients:0
        used_memory:65714632
        used_memory_human:62.67M
        changes_since_last_save:8398
        bgsave_in_progress:0
        last_save_time:1299014574
        bgrewriteaof_in_progress:0
        total_connections_received:17
        total_commands_processed:55748609
        expired_keys:0
        hash_max_zipmap_entries:64
        hash_max_zipmap_value:512
        pubsub_channels:0
        pubsub_patterns:0
        vm_enabled:0
        role:master
        db0:keys=1,expires=0
        db1:keys=18,expires=0
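
    One possible stopgap between restarts (a sketch using the PID and descriptor numbers from the lsof output above; verify FD 7 really is the "(deleted)" entry before running it): the space of a deleted-but-open file can be reclaimed by truncating it through the process's /proc fd entry.

        # truncate the deleted AOF that redis-server still holds open read-only
        sudo sh -c ': > /proc/6916/fd/7'

    Redis keeps a valid handle (now to an empty file) and df should show the space returned immediately. Since FD 7 is the stale read-side descriptor left over from the rewrite, Redis should not read from it again, but this is a workaround, not a fix for the descriptor leak itself.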

    Read the article

  • SharePoint Records Center Submitted E-mail Records not picked up

    - by Kenneth Verburg
    We have set up a new SharePoint 2007 site with a Records Repository. We're using Exchange 2007 Managed Folders to route e-mails to this repository based on the 'label' attached to the e-mail, as set in the Exchange 2007 journaling options. E-mails added to a Managed Folder get sent to SharePoint, where they end up in the "Submitted E-mail Records" list of the Records Repository. That's according to plan, but the e-mails are not routed to the respective document library as defined by the label. Instead, an error appears in the event viewer for every e-mail listed in the Submitted E-mail Records list, on every interval of the records repository schedule (set to every two minutes for testing purposes): Value cannot be null, parameter name: g. Sending a document from the SharePoint site itself to the Records Repository via the Send To... link works fine, but e-mails get stuck in the list... We have set up Document Libraries in the Repository both with and without content types (with names matching the Label and the Record Routing rule). Any ideas what could be wrong? This is what appears in the Application log every two minutes:

        Source: Office SharePoint Server
        Category: Records Center
        Type: Error
        Event ID: 4975
        User: N/A
        Computer: SPS2007
        Description: Value cannot be null. Parameter name: g
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Read the article

  • ISA 2006 SP1 - SSL Client Certificate Authentication in Workgroup Environment

    - by JoshODBrown
    We have an IIS6 website that was previously published using an ISA 2006 SP1 standard server publishing rule. In IIS we had required that a client certificate be provided before the website could be accessed... this all worked fine and dandy. Now we wish to use a web publishing rule on ISA 2006 SP1 for this same website. However, it seems the client certificate doesn't get processed now, so of course the user can't access the website. I've read a few articles stating the CA for the certificate needs to be installed in the trusted root certificate authorities store on the ISA Server (I have done this), as well as installing the client certificate on the ISA Server (done as well). I have also verified that the ISA Server is able to access the CRL for our CA, no problem... In the listener properties for the web publishing rule, under Authentication and Client Authentication Method, there is an option for SSL Client Certificate Authentication... I select this, but it appears the only selectable Authentication Validation Method is Windows (Active Directory), and there is no Active Directory in this environment. When I configure the rule with the defaults and then try to hit my website, it prompts for my certificate; I choose it and hit OK... then I'm given the following error:

        Error Code: 500 Internal Server Error. The server denied the specified
        Uniform Resource Locator (URL). Contact the server administrator. (12202)

    I check the event logs on the ISA Server, and in the Security log I see Event ID 536, Failure Audit. The reason: The NetLogon component is not active. I think this is pretty obvious, since there is no Active Directory available. Is there a way to make this web publishing rule work using client certificates in this workgroup environment? Any suggestions or links to helpful documents would be greatly appreciated!

    Read the article

  • Nice level not working on linux

    - by xioxox
    I have some highly floating-point-intensive processes doing very little I/O. One is called "xspec"; it calculates a numerical model and returns a floating point result to a master process every second (via stdout). It is niced at level 19. I have another simple process, "cpufloattest", which just does numerical computations in a tight loop. It is not niced. I have a 4-core i7 system with hyperthreading disabled. I have started 4 of each type of process. Why is the Linux scheduler (Linux 3.4.2) not properly limiting the CPU time taken up by the niced processes?

        Cpu(s): 56.2%us,  1.0%sy, 41.8%ni,  0.0%id,  0.0%wa,  0.9%hi,  0.1%si,  0.0%st
        Mem:  12297620k total, 12147472k used,   150148k free,   831564k buffers
        Swap:  2104508k total,    71172k used,  2033336k free,  4753956k cached

          PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        32399 jss   20   0 44728  32m  772 R 62.7  0.3   4:17.93 cpufloattest
        32400 jss   20   0 44728  32m  744 R 53.1  0.3   4:14.17 cpufloattest
        32402 jss   20   0 44728  32m  744 R 51.1  0.3   4:14.09 cpufloattest
        32398 jss   20   0 44728  32m  744 R 48.8  0.3   4:15.44 cpufloattest
         3989 jss   39  19 1725m 690m 7744 R 44.1  5.8 1459:59 xspec
         3981 jss   39  19 1725m 689m 7744 R 42.1  5.7 1459:34 xspec
         3985 jss   39  19 1725m 689m 7744 R 42.1  5.7 1460:51 xspec
         3993 jss   39  19 1725m 691m 7744 R 38.8  5.8 1458:24 xspec

    The scheduler does what I expect if I start 8 of the cpufloattest processes, with 4 of them niced (i.e. 4 get most of the CPU, and 4 get very little).
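
    One plausible culprit worth ruling out (an assumption, since nothing in the output confirms the kernel config): CFS autogrouping, enabled by CONFIG_SCHED_AUTOGROUP on many 3.x kernels, schedules each session as a group, and nice values then only apply relative to processes in the same autogroup. That matches the symptom of niced processes started from one terminal competing evenly with un-niced ones from another.

        # check whether autogrouping is active, and try disabling it
        cat /proc/sys/kernel/sched_autogroup_enabled
        echo 0 | sudo tee /proc/sys/kernel/sched_autogroup_enabled

    If the niced xspec processes drop to a near-zero CPU share after this, autogrouping was masking the nice levels.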

    Read the article

  • How to transfer files via infrared on Linux?

    - by arielnmz
    I know this is a way-too-old technology, but I've got some files inside a very old cellphone that I need to transfer to a very old computer. So far my infrared USB device works well; it's detected by the machine (lsusb output):

        Bus 002 Device 002: ID 0df7:0620 Mobile Action Technology, Inc. MA-620 Infrared Adapter

    I've tried to send the file over MMS and even email (the phone lacks Bluetooth, not to mention USB), but this cellphone's firmware doesn't let me attach the files. The file was originally transferred via IrDA, and the phone only has internal memory (a whole 2 million bytes! whoa!). I found a package called irda-utils, but it seems there are only two executables: irdaping and irdadump. I think the dump utility might do the job (which, as far as I can see, is something like a tcpdump for IrDA), but I don't even know how to process the received frames. Could this question be what I'm looking for?

    EDIT: While reading through the Linux Infrared HOWTO I found the OpenObex project, which may be what I'm looking for...
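
    Building on the OpenObex lead, a sketch of what the transfer usually looks like once the kernel driver brings the dongle up (package names and the filename are assumptions; phones generally expose files over OBEX rather than raw IrDA frames, so irdadump is the wrong layer):

        cat /proc/net/irda/discovery   # the phone should appear here when in range
        obexftp -i -l                  # list the phone's folders over IrDA OBEX (obexftp package)
        obexftp -i -g image001.jpg     # hypothetical filename: fetch one file

    If discovery stays empty, the dongle may need irattach from irda-utils pointed at its device node before the IrDA stack sees it.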

    Read the article

  • 1GB cached memory - Do I need more RAM?

    - by Martin
    The server runs well, but I wonder if I should get more RAM. I only have a few MB of "free" memory and 1.2 GB of "cached" memory:

        free:
                     total       used       free     shared    buffers     cached
        Mem:          3945       3893         51          0         28       1216
        -/+ buffers/cache:       2648       1296
        Swap:         3895        857       3038

    I learned that cached memory counts as used even though it is effectively available. Is the cached value an indicator of the need for more RAM?

    cat /proc/meminfo, 1 day after flushing the cache:

        MemTotal:      4040048 kB
        MemFree:         32844 kB
        Buffers:         18956 kB
        Cached:        1249092 kB
        SwapCached:     161576 kB
        Active:        3611328 kB
        Inactive:       189104 kB
        SwapTotal:     3989496 kB
        SwapFree:      2894200 kB
        Dirty:           20520 kB
        Writeback:           0 kB
        AnonPages:     2523496 kB
        Mapped:         217744 kB
        Slab:            70940 kB
        SReclaimable:    36756 kB
        SUnreclaim:      34184 kB
        PageTables:      99648 kB
        NFS_Unstable:        0 kB
        Bounce:              0 kB
        CommitLimit:   6009520 kB
        Committed_AS:  6401716 kB
        VmallocTotal: 34359738367 kB
        VmallocUsed:     18852 kB
        VmallocChunk: 34359719439 kB
        HugePages_Total:     0
        HugePages_Free:      0
        HugePages_Rsvd:      0
        HugePages_Surp:      0
        Hugepagesize:     2048 kB

    top:

        top - 17:20:10 up 112 days,  3:06,  1 user,  load average: 1.01, 1.62, 1.48
        Tasks: 208 total,   1 running, 207 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.6%us,  0.6%sy,  0.0%ni, 97.5%id,  1.3%wa,  0.0%hi,  0.1%si,  0.0%st
        Mem:   4040048k total,  3953108k used,    86940k free,    16348k buffers
        Swap:  3989496k total,  1095712k used,  2893784k free,  1235436k cached
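
    A quick worked sum from the free output above: the kernel treats buffers and cache as reclaimable, so the memory effectively available to new processes is roughly free + buffers + cached.

        echo $((51 + 28 + 1216))   # = 1295 MB, matching the 1296 in the -/+ buffers/cache row

    So the low "free" number alone is not a reason to add RAM. The more worrying signals here are the 857 MB of used swap and SwapCached at roughly 160 MB, which suggest the working set has at some point exceeded physical memory.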

    Read the article

  • amavisd + postfix + dovecot blocks gif images

    - by David W
    I occasionally have a client who tries to email me and says his email gets blocked by my server. When I check the logs, I see this:

        Sep 6 18:12:52 myers amavis[15197]: (15197-08) p.path BANNED:1 [email protected]: "P=p003,L=1,M=multipart/mixed | P=p002,L=1/2,M=application/ms-tnef,T=tnef,N=winmail.dat | P=p004,L=1/2/1,T=image,T=gif,N=image001.gif,N=image001.gif", matching_key="(?-xism:^\\.(exe|lha|tnef|cab|dll)$)"

    And then a little later...

        Sep 6 18:12:58 loki amavis[15197]: (15197-08) Blocked BANNED (.image,.gif,image001.gif,image001.gif), [213.199.154.205] [157.56.236.229] <[email protected]> -> <[email protected]>, quarantine: banned-g4QhZGvwJvDF, Message-ID <6A9596BE385EC1499F83E464FA9ECCA20C668320@BY2PRD0611MB417.namprd06.prod.outlook.com>, mail_id: g4QhZGvwJvDF, Hits: -, size: 20916, 8439 ms

    From this, and from the bounce he forwards me (to a different address I give him), I determine that it's bouncing because of the file in his signature (image001.gif). However, that does NOT match the "key" in this part of the log:

        matching_key="(?-xism:^\\.(exe|lha|tnef|cab|dll)$)"

    Furthermore, the .gif extension is nowhere to be found in the /etc/amavisd.conf file (i.e. I'm not blocking emails because they contain .gif images). Am I missing something here? This is strange... and annoying.
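
    A hedged reading of the log rather than a confirmed diagnosis: the banned part is p002, the application/ms-tnef winmail.dat wrapper that Outlook generates when sending RTF mail, and "tnef" is what the matching key (exe|lha|tnef|cab|dll) hits; the gif is only listed because it travels inside that TNEF container. If that reading is right, the fix is to stop banning TNEF rather than anything gif-related, e.g. in /etc/amavisd.conf:

        # sketch: drop 'tnef' from the banned-extension class (the original line is
        # assumed to carry the set exe|lha|tnef|cab|dll seen in the matching_key)
        $banned_filename_re = new_RE(
          qr'^\.(exe|lha|cab|dll)$'i,
        );

    Alternatively, persuading the sender's Outlook to send plain-text or HTML mail instead of RTF avoids winmail.dat entirely.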

    Read the article

  • How can I fix Office Sharepoint Search service?

    - by unknown (yahoo)
    For some reason, under Operations > Services on Server, the Office SharePoint Server Search service is MISSING! This means I can't get Shared Services working, which in turn means I cannot get Usage and Reporting working to see who is visiting my website, with counters, OS/browser breakdowns, etc. I don't think I have ever seen this work in my two years of trying with SharePoint. So in essence I have two problems: most important, SEARCH; second, usage reports. When trying to search, all I get is an unknown error, and nothing appears in my event viewer at all. I have tried http://www.cjvandyk.com/blog/Lists/Posts/Post.aspx?ID=96 ; in the comments on that site, another person reports that search is missing entirely and is going to uninstall. I'm tired of uninstalling SharePoint and reinstalling to fix the odd issue. I have things set up with Team Foundation Server that took forever to get working, so reinstallation is not my solution. As for usage reporting, this is the error Microsoft documents: "Both Windows SharePoint Services Usage logging and Office SharePoint Usage Processing must be enabled to view usage reports. Please contact your administrator to ensure that these services are enabled." I cannot complete all of those steps, since the SSP needs to be set up first, which I can't do (see above).

    Read the article

  • GlusterFS is failing to mount on boot

    - by J. Pablo Fernández
    I'm running the official GlusterFS 3.5 packages on Ubuntu 12.04 and everything seems to be working fine, except mounting the GlusterFS volumes at boot time. This is what I see in the log files:

        [2014-06-13 08:52:28.139382] I [glusterfsd.c:1959:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.0 (/usr/sbin/glusterfs --volfile-server=koraga --volfile-id=/private_uploads /var/www/shared/private/uploads)
        [2014-06-13 08:52:28.147186] I [socket.c:3561:socket_init] 0-glusterfs: SSL support is NOT enabled
        [2014-06-13 08:52:28.147237] I [socket.c:3576:socket_init] 0-glusterfs: using system polling thread
        [2014-06-13 08:52:28.148183] E [socket.c:2161:socket_connect_finish] 0-glusterfs: connection to 176.58.113.205:24007 failed (Connection refused)
        [2014-06-13 08:52:28.148236] E [glusterfsd-mgmt.c:1601:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: koraga (No data available)
        [2014-06-13 08:52:28.148251] I [glusterfsd-mgmt.c:1607:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
        [2014-06-13 08:52:28.148477] W [glusterfsd.c:1095:cleanup_and_exit] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_transport_notify+0x27) [0x7fe077f8e0f7] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_notify+0x1a4) [0x7fe077f91cc4] (-->/usr/sbin/glusterfs(+0xcada) [0x7fe078655ada]))) 0-: received signum (1), shutting down
        [2014-06-13 08:52:28.148513] I [fuse-bridge.c:5444:fini] 0-fuse: Unmounting '/var/www/shared/private/uploads'.

    My fstab contains:

        proc                     /proc                            proc       defaults                   0 0
        /dev/xvda                /                                ext4       noatime,errors=remount-ro  0 1
        /dev/xvdb                none                             swap       sw                         0 0
        /dev/xvdc                /var/lib/glusterfs/brick01       ext4       defaults                   1 2
        koraga:/private_uploads  /var/www/shared/private/uploads  glusterfs  defaults,_netdev           0 0

    Any ideas what's going on and/or how to fix it?
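
    The "Connection refused" to port 24007 at boot usually means the mount is attempted before glusterd on koraga is reachable (or, if koraga is this same host, before the local glusterd has started); on Ubuntu 12.04's upstart, _netdev does not actually delay the mount until GlusterFS is up. A common workaround sketch is a late retry:

        # /etc/rc.local (before "exit 0"): retry the fstab glusterfs mounts once
        # networking and glusterd are up; harmless if the mount already succeeded
        mount -a -t glusterfs || true

    An alternative is an upstart job that waits on the glusterfs-server job before running the same mount command, but the rc.local line is the minimal version.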

    Read the article

  • Email test deferred (mail transport unavailable) with ClamAV

    - by dirt
    I'm trying to set up a simple new mail server; when I send a test email to the server, the email gets hung up during delivery (the user mapping is found) and is never found in /home/user/Maildir/new. Here is my maillog after a fresh reboot and test email; there are a few warnings I am unfamiliar with. Can you please point me in the right direction?

        Oct 25 14:54:57 loki dovecot: master: Dovecot v2.0.9 starting up (core dumps disabled)
        Oct 25 14:54:58 loki postfix/postfix-script[1369]: starting the Postfix mail system
        Oct 25 14:54:58 loki postfix/master[1370]: daemon started -- version 2.6.6, configuration /etc/postfix
        Oct 25 14:56:00 loki postfix/tlsmgr[1457]: warning: request to update table btree:/etc/postfix/smtpd_scache in non-postfix directory /etc/postfix
        Oct 25 14:56:00 loki postfix/tlsmgr[1457]: warning: redirecting the request to postfix-owned data_directory /var/lib/postfix
        Oct 25 14:56:00 loki postfix/smtpd[1455]: connect from mail-ob0-f180.google.com[209.85.214.180]
        Oct 25 14:56:01 loki postfix/smtpd[1455]: 1CF5E20A8B: client=mail-ob0-f180.google.com[209.85.214.180]
        Oct 25 14:56:01 loki postfix/cleanup[1461]: 1CF5E20A8B: message-id=
        Oct 25 14:56:01 loki postfix/qmgr[1379]: 1CF5E20A8B: from=, size=1788, nrcpt=1 (queue active)
        Oct 25 14:56:01 loki postfix/qmgr[1379]: warning: connect to transport private/scan: No such file or directory
        Oct 25 14:56:01 loki postfix/error[1462]: 1CF5E20A8B: to=, orig_to=, relay=none, delay=0.18, delays=0.15/0.02/0/0.01, dsn=4.3.0, status=deferred (mail transport unavailable)
        Oct 25 14:56:01 loki postfix/smtpd[1455]: disconnect from mail-ob0-f180.google.com[209.85.214.180]

    master.cf snippets:

        # ==========================================================================
        # service type  private unpriv  chroot  wakeup  maxproc command + args
        #               (yes)   (yes)   (yes)   (never) (100)
        # ==========================================================================
        smtp       inet  n       -       n       -       -       smtpd
        submission inet  n       -       n       -       -       smtpd
          -o smtpd_tls_security_level=encrypt
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        smtps      inet  n       -       n       -       -       smtpd
          -o smtpd_tls_wrappermode=yes
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        scan       unix  -       -       n       -       16      smtp
          -o smtp_data_done_timeout=1200
          -o smtp_send_xforward_command=yes
          -o disable_dns_lookups=yes
        127.0.0.1:10026 inet n    -       n       -       16      smtpd
          -o content_filter=
          -o local_recipient_maps=
          -o relay_recipient_maps=
          -o smtpd_restriction_classes=
          -o smtpd_client_restrictions=
          -o smtpd_helo_restrictions=
          -o smtpd_sender_restrictions=
          -o smtpd_recipient_restrictions=permit_mynetworks,reject
          -o mynetworks_style=host
          -o smtpd_authorized_xforward_hosts=127.0.0.0/8
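
    A sketch of the first things to check, given that qmgr complains about transport private/scan (the transport the content_filter hop points at) while master.cf does define a scan service: a master process that has not re-read master.cf since the edit is the most common cause of "No such file or directory" on a transport that looks correctly declared.

        sudo postfix reload                      # make the running master re-read master.cf
        ls -l /var/spool/postfix/private/scan    # the socket should exist once master spawns the service
        postconf content_filter                  # confirm what main.cf actually points at

    If the socket appears after the reload, the service had simply never been picked up since master.cf was edited.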

    Read the article

  • Emails going to Junk for Hotmail recipients

    - by David George
    We send daily mass emails to our customers (~30,000+ emails per day). We have problems with Hotmail users receiving our emails. Sometimes the email goes to the Junk folder, but often it will go to their inbox with the content blocked, so the user sees a message saying "This email was blocked and may be dangerous". If an email is sent to Gmail it is usually not blocked, but it does show up as from "Unknown" instead of the company. Please be advised I've done the following:

    1. No RBLs (checked on http://multirbl.valli.org/)
    2. We do have SPF records published
    3. We do have reverse DNS set up
    4. Our company even signed up for the Junk Mail Reports Program at Hotmail

    Here is a sample header. I've noticed the X-SID-Result and the X-AUTH-Result both FAIL every time at Hotmail:

        X-Message-Delivery: Vj0xLjE7dXM9MDtsPTA7YT0wO0Q9MTtTQ0w9MQ==
        X-Message-Status: n:0
        X-SID-Result: Fail
        X-AUTH-Result: FAIL
        X-Message-Info: JGTYoYF78jFqAaC29fBlDlD/ZI36+S6WoFmkQN10UxWFe1xLHhP+rDthGRZM87uHYM926hUBS+s0q46Yx9y6jdurhN6fx0bK
        Received: from privatecompany.com ([WanIPAddress]) by col0-mc3-f30.Col0.hotmail.com with Microsoft SMTPSVC(6.0.3790.4675); Wed, 5 May 2010 08:41:27 -0700
        X-AuditID: ac10fe93-000013bc00000534-46-4be191a1618e
        Received: from INTERNAL-Email-SERVER([InternalIPAddress]) by privatecompany.com with Microsoft SMTPSVC(6.0.3790.4675); Wed, 5 May 2010 11:41:21 -0400
        From: Private Company, Inc.<[email protected]>
        To: [email protected]
        Message-Id: <[email protected]>
        Subject:
        Date: Wed, 5 May 2010 11:42:46 -0400
        MIME-Version: 1.0
        Reply-To: [email protected]
        Content-Type: text/plain; charset="ISO-8859-1"
        Content-Transfer-Encoding: 8bit
        X-Brightmail-Tracker: AAAAAA==
        Return-Path: [email protected]
        X-OriginalArrivalTime: 05 May 2010 15:41:27.0837 (UTC) FILETIME=[6D06E4D0:01CAEC69]
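
    X-SID-Result is Hotmail's Sender ID (SPF PRA) verdict, so a persistent Fail suggests the published record does not cover the sending IP, or the PRA identity (the From/Sender header domain) differs from the domain the record covers. A couple of hedged checks from any workstation, with placeholders for the redacted values:

        dig +short txt privatecompany.com    # does the SPF record include the WanIPAddress?
        dig +short -x <WanIPAddress>         # reverse DNS should point back at the sending domain

    If the record is plain v=spf1 and Hotmail is evaluating Sender ID, publishing an spf2.0/pra record (or ensuring the From-header domain's SPF covers the sending IP) is the usual remedy; a mismatch here would also explain Gmail's "Unknown" sender display.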

    Read the article

  • Find out which task is generating a lot of context switches on linux

    - by Gaks
    According to vmstat, my Linux server (2x Core2 Duo 2.5 GHz) is constantly doing around 20k context switches per second.

        # vmstat 3
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
         2  0   7292 249472  82340 2291972   0    0     0     0    0     0  7 13 79  0
         0  0   7292 251808  82344 2291968   0    0     0   184   24 20090  1  1 99  0
         0  0   7292 251876  82344 2291968   0    0     0    83   17 20157  1  0 99  0
         0  0   7292 251876  82344 2291968   0    0     0    73   12 20116  1  0 99  0

    but uptime shows a small load:

        load average: 0.01, 0.02, 0.01

    and top doesn't show any process with high %CPU usage. How do I find out what exactly is generating those context switches? Which process/thread? I tried to analyze pidstat output:

        # pidstat -w 10 1
        12:39:13          PID   cswch/s nvcswch/s  Command
        12:39:23            1      0.20      0.00  init
        12:39:23            4      0.20      0.00  ksoftirqd/0
        12:39:23            7      1.60      0.00  events/0
        12:39:23            8      1.50      0.00  events/1
        12:39:23           89      0.50      0.00  kblockd/0
        12:39:23           90      0.30      0.00  kblockd/1
        12:39:23          995      0.40      0.00  kirqd
        12:39:23          997      0.60      0.00  kjournald
        12:39:23         1146      0.20      0.00  svscan
        12:39:23         2162      5.00      0.00  kjournald
        12:39:23         2526      0.20      2.00  postgres
        12:39:23         2530      1.00      0.30  postgres
        12:39:23         2534      5.00      3.20  postgres
        12:39:23         2536      1.40      1.70  postgres
        12:39:23        12061     10.59      0.90  postgres
        12:39:23        14442      1.50      2.20  postgres
        12:39:23        15416      0.20      0.00  monitor
        12:39:23        17289      0.10      0.00  syslogd
        12:39:23        21776      0.40      0.30  postgres
        12:39:23        23638      0.10      0.00  screen
        12:39:23        25153      1.00      0.00  sshd
        12:39:23        25185     86.61      0.00  daemon1
        12:39:23        25190     12.19     35.86  postgres
        12:39:23        25295      2.00      0.00  screen
        12:39:23        25743      9.99      0.00  daemon2
        12:39:23        25747      1.10      3.00  postgres
        12:39:23        26968      5.09      0.80  postgres
        12:39:23        26969      5.00      0.00  postgres
        12:39:23        26970      1.10      0.20  postgres
        12:39:23        26971     17.98      1.80  postgres
        12:39:23        27607      0.90      0.40  postgres
        12:39:23        29338      4.30      0.00  screen
        12:39:23        31247      4.10     23.58  postgres
        12:39:23        31249     82.92     34.77  postgres
        12:39:23        31484      0.20      0.00  pdflush
        12:39:23        32097      0.10      0.00  pidstat

    It looks like some PostgreSQL tasks are doing around 10 context switches per second, but it doesn't sum up to 20k anyway. Any idea how to dig a little deeper for an answer?
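
    Two hedged ways to close the gap between pidstat's per-process totals and vmstat's 20k/s: count per thread rather than per process, and watch the system-wide counters directly (interrupt-driven switching shows up in vmstat but is easy to miss per task).

        pidstat -wt 10 1                 # -t adds per-thread rows; a multi-threaded
                                         # daemon can hide thousands of cswch/s
        grep ctxt /proc/25185/status     # voluntary/nonvoluntary totals for one suspect
                                         # (25185 = daemon1 from the output above)
        watch -d 'grep -e ^intr -e ^ctxt /proc/stat'   # system-wide interrupt and switch counters

    Note also that pidstat prints rates, so a few dozen processes at tens of cswch/s each can only account for hundreds per second; the missing ~19k/s is very likely concentrated in the threads of a few busy processes.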

    Read the article

  • WS 2008 R2 giving "Internal Server Error"

    - by dragon112
    I have had this problem for a while now and can't find the cause at all. When I open a page, it will sometimes give a 500 Internal Server Error message. This happens on a website that otherwise works perfectly, but when I try to upload anything it will give this message (all PHP settings have been set to either 1 GB or 3000 seconds, as have the IIS limits). The error also occurs when I open a simple page that does nothing more than include another PHP page and a couple of classes. I have no idea what causes this error and would love to hear from any of you on what it could be. I checked the server logs, and for the upload issue I found this error:

        The description for Event ID 1 from source named cannot be found. Either the component that
        raises this event is not installed on your local computer or the installation is corrupted.
        You can install or repair the component on the local computer. If the event originated on
        another computer, the display information had to be saved with the event. The following
        information was included with the event:

        managed-keys-zone ./IN: loading from master file managed-keys.bind failed: file not found

        the message resource is present but the message is not found in the string/message table

    Regards,
    Dragon
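
    Incidentally, the "managed-keys-zone" text belongs to BIND, not IIS, so that event is probably unrelated noise from a DNS service on the same box. For the 500 itself, a sketch of how to surface the real error instead of the generic page (the site name is an assumption; run in an elevated prompt):

        %windir%\system32\inetsrv\appcmd set config "Default Web Site" /section:httpErrors /errorMode:Detailed

    With detailed errors on, and PHP's display_errors or the FastCGI error log checked, the intermittent 500 should identify itself; a FastCGI timeout or request-limit kill during long uploads would fit the symptoms.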

    Read the article

  • Drive Mapped On Amazon EC2 Instance Startup Disconnected After Logon

    - by jsn
    I am launching an Amazon EC2 instance (Windows Server 2012), and on startup (through the User Data field) it runs this PowerShell script:

        <powershell>
        NET CONFIG SERVER /AUTODISCONNECT:-1

        # clear all prior connections
        net use * /delete /y > C:\delLog.txt 2>&1

        # mount new drive
        net use R: \\dbHost\share /user:username pass /persistent:yes > C:\useLog.txt 2>&1

        ipconfig /all > C:\ipLog.txt
        </powershell>

    When it launches and I connect to it through RDP, Explorer shows "Disconnected Network Drive (R:)". If I double-click it, an error message displays "R:\ is not accessible. Access is denied". Normally, it would ask me for credentials to reconnect. I need this drive to stay connected for the duration of the instance.

    delLog.txt contents:

        You have these remote connections:

                  T:        \\dbHost\share
        Continuing will cancel the connections.

        The command completed successfully.

    useLog.txt contents:

        The command completed successfully.

    ipLog.txt contents are as expected. The net use command works fine by itself; it connects. Anyone have any idea what could be wrong? There is only one account on these machines - Administrator. It is a Samba share to a Linux server on a private network.
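
    A likely explanation (an inference from how Windows scopes drive letters, not something the logs prove): user-data scripts run in the EC2Config service session, and mapped drives are per logon session, so a drive mapped there is never usable from a later interactive RDP logon; Explorer just shows the stale "disconnected" letter. A sketch of mapping it at each logon instead:

        rem run once, elevated: re-map R: at every interactive logon of Administrator
        schtasks /create /tn RemapR /sc onlogon /ru Administrator /tr "cmd /c net use R: \\dbHost\share pass /user:username"

    A logon script or a line in the Startup folder achieves the same thing; the key point is that the mapping has to happen inside the session that will use it.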

    Read the article

  • Connect over WiFi to SQL Server from another computer

    - by Bronzato
    I tried to connect over WiFi to SQL Server with SQL Server Management Studio from another computer, but it failed. I have a computer with Windows 7 & SQL Server 2008 (let's say the server computer). Next to it I have a freshly installed computer with Windows 7 & SQL Server Management Studio (let's say the client computer). What I did on the server computer:

    1. Configured the firewall by enabling port 1433
    2. Enabled network protocols (TCP/IP) inside SQL Server Configuration Manager
    3. Checked "Allow remote connections to this server" in the server properties in SQL Server Management Studio
    4. Started SQL Server Browser
    5. Restarted the services (SQL Server Browser is stopped at this point, but I don't think it is necessary. Is it?)

    Next, I successfully tested a ping on port 1433 from my client computer with a tool named tcping (e.g. tcping 192.168.1.4 1433). But I still cannot connect from my client computer to SQL Server on my server computer.

    OK, something new with this problem: I have now successfully connected to my "server computer" with Management Studio. What I did was type the computer name in the server name field of Management Studio's connection window. My previous (failed) attempt was to type the computer name followed by the SQL Server instance (e.g. COMPUTER_NAME\SQL2008). I don't know why I only have to type the computer name.

    Now my new challenge is to successfully connect my VB6 application to this remote database located on my "server computer". I have a connection string for this, but it fails to connect. Here is my connection string:

        "Provider=SQLOLEDB.1;Password=mypassword;User ID=sa;Initial Catalog=TPB;Data Source=THIERRY-HP\SQL2008"

    Any idea what's going wrong?
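
    A hedged reading of the symptom: connecting by COMPUTER_NAME alone hits the default TCP 1433 listener, which the firewall rule allows, while COMPUTER_NAME\SQL2008 makes the client ask SQL Server Browser on UDP 1434 for the named instance's port, and that UDP traffic is likely what the firewall drops (it would also explain why the stopped Browser service did not matter for the working case). A sketch of the extra rules on the server computer:

        netsh advfirewall firewall add rule name="SQL Browser (UDP 1434)" dir=in action=allow protocol=UDP localport=1434
        netsh advfirewall firewall add rule name="SQL Server (TCP 1433)" dir=in action=allow protocol=TCP localport=1433

    For the VB6 connection string, reusing the form that already works may be the quickest fix, assuming the instance listens on 1433: Data Source=THIERRY-HP, or THIERRY-HP,1433 to pin the port explicitly.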

    Read the article

  • How to prevent getting infected by rogue security applications

    - by Ieyasu Sawada
    My computer never got infected with a virus before, because I'm using the Web of Trust browser plugin, Sandboxie and Avast Free antivirus. But today it got infected with a rogue security application called antivirus.net. I have already removed it using MBAM, SAS, and the Kaspersky Virus Removal Tool. And by the way, I was using MSE when my laptop got infected. It seems the rogue application just killed off the MSE process, and I never even got a warning. I was using the WiFi at our school, which I think is the cause, since most of the computers in our laboratory have rogue applications on them. My question is: how do I prevent this from happening again? It took me about 6 hours to disinfect my computer and I don't want it to happen again. Please enlighten me if these rogue applications really do just pop out of nowhere.

    Note: I'm not dumb enough to agree to installing rogue security applications; it just came out of nowhere. I was happy with MSE, well, not after it let antivirus.net penetrate my computer. I've done a little research, and it says such software needs the permission of the user to actually install itself on the computer:

    http://www.net-security.org/malware_news.php?id=1245
    http://en.wikipedia.org/wiki/Rogue_security_software

    Is it possible that other computers in our school network have agreed to install those? Or maybe the network admin?

    Read the article
