Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.


  • NFS on top of GFS2 - does it work?

    - by Matthew
    We're currently using Splunk, a NoSQL derivative, to receive our data. The software supports something called "search head pooling", in which the job-dispatching engine is housed on several servers that share a common storage point. Originally our intention was to use a clustered filesystem like GFS2 because of its low latency, stability, and ease of setup. We set up GFS2, and it works with no issues. However, when we run the software, it tries to create lock files and does a bunch of other things that their support team can't quite explain; their ultimate feedback was that they only support NFS. Our network administration team frowns heavily on NFS (stability concerns, file-locking issues, etc.). So I was thinking about setting up NFS on each server in the cluster to act as a wedge layer between the GFS2 filesystem and the software: basically, configure each server to export the GFS2 filesystem's mountpoint via NFS, and then have each server mount its own NFS share. That way we aren't introducing a single point of failure should a dedicated NFS server go down, but the vendor gets their "required" NFS share. I'm just brainstorming ways around this, so please tear it apart :)
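    To make the brainstorm concrete, here is a minimal sketch of the wedge layer on each cluster node: export the GFS2 mountpoint over NFS and loop it back via localhost, so no node depends on another node's NFS service. Paths and mount options are assumptions, and whether lock requests passed through each node's local NFS server stay cluster-coherent is exactly the kind of thing to test:

        # /etc/exports on every node -- export the shared GFS2 mountpoint locally
        /mnt/gfs2  localhost(rw,no_root_squash,sync)

        # /etc/fstab on the same node -- loop the export back where Splunk expects it
        localhost:/mnt/gfs2  /opt/splunk/pool  nfs  vers=3,hard,intr  0  0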

    Read the article

  • USB Device Not Recognized (Mac)

    - by Nargis
    Unfortunately, my Mac Pro has made one of my USB storage devices inoperable. I lost the data on that USB device, but other devices, such as a second USB drive and a USB keyboard, are unaffected. A friend of mine usually triggers this problem by having at least two devices plugged in - typically thumb drives/USB flash drives; once the second flash drive is plugged in, it becomes unrecognized. I have only two USB ports, and at first I thought a port was loose when I connected two USB devices. But later I found that these hidden files (".Spotlight-V100", ".TemporaryItems", ".Trashes", and "._.Trashes") are created by Mac OS, and before the USB device became unrecognized I had deleted them; my friend had done the same. Now I don't want to risk another USB device becoming unrecognized, so I won't delete any hidden system files inside my flash drives again. But I really want to know why these problems happen. Can I delete these hidden files when the drive is connected only to a virtual machine (Vista)? I used to delete all the useless hidden files from my USB flash drives. Any suggestions on preventing this, or alternative ways to fix the problem without data loss, would be much appreciated.

    Read the article

  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove entries when they get too old, but that is done at most once a day. Stored entries can be retrieved by users. Insertion of new entries happens fast and regularly, so the database will commonly hold several million elements. The tables are pretty simple: one table for ids, raw content, and insertion date; and one table storing tags and their values associated with an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids across the two tables. To sum it up: 2 tables; lots of INSERTs; no UPDATEs; some DELETEs, once a day at most; some user-generated SELECTs with JOINs; a huge data set. What would an optimal server configuration (software and hardware - I assume, for example, that RAID 10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that lets SELECT queries complete in reasonably little time. I can provide more information about the current setup (tables, indexes, ...) if needed.
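    As a starting point, here is a hedged sketch of the postgresql.conf knobs usually tuned for an insert-heavy, read-mostly workload; the values are assumptions for a machine with around 16 GB of RAM, not a definitive recommendation:

        # postgresql.conf -- hypothetical starting values, adjust to the real hardware
        shared_buffers = 4GB                # ~25% of RAM is a common rule of thumb
        effective_cache_size = 12GB         # tells the planner how much the OS will cache
        work_mem = 32MB                     # per-sort/hash memory; helps the JOIN-heavy SELECTs
        maintenance_work_mem = 512MB        # speeds up index builds and the daily DELETE cleanup
        wal_buffers = 16MB                  # smooths sustained INSERT throughput
        checkpoint_completion_target = 0.9  # spreads checkpoint I/O to avoid write bursts

    Indexing the tag-value column(s) used in the JOINed searches matters at least as much as any of these settings.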

    Read the article

  • Software for RAID Failure Alerts?

    - by QF_Developer
    I have two 256 GB Samsung 840 Pro SSDs in a RAID 1 array, and I would like to receive a notification if one of the disks in the array fails. Can anybody recommend an application I can install on the server to fire off an email if such an event occurs? Here are some additional specs: Supermicro X9SCM-IIF motherboard using the onboard hardware RAID controller; OS = Windows 2012 Standard. Also, is it possible to simulate a disk failure by pulling a drive out of the bay? SSDs seem to fail close together when in a mirrored config, so I'd like to know ASAP if one goes down so I can swap it out with minimum delay. UPDATE (26 June 2013): None of the software that ships with the Supermicro X9SCM-* motherboards offers RAID monitoring. As has been pointed out here, these boards use an Intel chipset for RAID, so I installed Intel Rapid Storage Technology, which supports automated email notifications on RAID failure: http://www.intel.com/support/chipsets/imsm/sb/cs-020784.htm One small issue: the software can only send email notifications without SMTP authentication. There is a bunch of workarounds here: http://communities.intel.com/thread/30771

    Read the article

  • Public Folders - Delete Public Folders from 2003 after migrating to 2010 (via Adsiedit) - safe?

    - by HeavenCore
    Similar question: How do I delete a public store in Exchange 2003? We are ready to remove our Exchange 2003 server after migrating all public folders and mailboxes to 2010. We ran for a week with the Exchange 2003 server shut down and everything seemed to work. When I try to delete the PF database from 2003 it says it contains replicas. While migrating I only had one-way sync working (from 2003 to 2010), so I believe 2003 never received the responses from 2010 confirming the replicas were removed. When I look in Public Folders on the 2003 box none are listed; when I look in PF Instances they are all listed. I know everything has moved to the 2010 server, and I know 2010 is not showing the 2003 server as a replica for any folder. I am looking to use ADSI Edit to remove the public folder database from the 2003 server, but I want to make sure I delete the right thing so that the folders do not get deleted from the 2010 database. Should I delete Configuration, Services, Microsoft Exchange, <Company Name>, Administrative Groups, First Administrative Group, Servers, <Server Name>, Information Store, First Storage Group, Public Folder Store (<Server Name>)? Or something else? I have checked, and the only public folder with the old Exchange server listed as a replica is SYSTEM CONFIGURATION. Thanks in advance.

    Read the article

  • Something like Dropbox for local use

    - by Casper
    I am looking for a solution to sync folder pairs between a NAS and multiple local Macs. Any of the Macs could edit files, and the other Macs should then be synced automatically - basically my own local version of Dropbox, without using cloud storage. I have looked into solutions using rsync, but as I understand it rsync is not really capable of doing a bi-directional sync. I also do not want to have to invoke the sync process manually; I would prefer a daemon running in the background, waiting and checking for changes and then syncing them "live". The program should also be flexible enough to recognize that it sometimes (in the case of laptops) cannot reach the NAS; it should then just wait for the connection to come back, without bugging me every few minutes. I have looked into Synk, folderwatch, rsync, and a few others, but I haven't really found a solution. Isn't there something like Microsoft's "offline folders" for the Mac? Thanks. PS: just for clarification - I don't want to sync for backup purposes; I want to sync so that all Macs have a local copy of the most recent changes to files.
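    For the "daemon waiting for changes" half, here is a minimal sketch of the usual building blocks - a filesystem watcher driving rsync - assuming the third-party fswatch utility is installed on the Mac. Note this only pushes one way; true bi-directional sync would still need something like Unison on top. Paths and hostname are hypothetical:

        #!/bin/sh
        # Watch a local folder and push edits to the NAS as they happen.
        LOCAL="$HOME/Shared"
        REMOTE="nas:/volume1/shared"    # assumed NAS rsync/ssh target

        fswatch -o "$LOCAL" | while read _count; do
            # Skip quietly if the NAS is unreachable (laptop away from home)
            ping -c 1 nas >/dev/null 2>&1 || continue
            rsync -az --delete "$LOCAL/" "$REMOTE/"
        done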

    Read the article

  • php-cgi.exe Taking out server, multiple running

    - by Alex
    I have been using Zend Server CE for over a year and have never had a problem. About a week or two ago my server started acting up, even making RDP connections impossible. After some looking around, I found 20, 25, 30+ php-cgi.exe processes running. With my IIS7 service starting with Windows, all these php-cgi.exe processes would launch as soon as the server started (even though the limit is 10), and I could not even connect to it. After disabling the web server at startup, which stops php-cgi.exe from running, the server runs flawlessly, like it always has. As soon as I start the web server, all these odd issues return. I have a post over at Zend (http://forums.zend.com/viewtopic.php?f=44&t=41043&p=95133) where I was told to update my Zend install; after doing so, the issue has not gone away. Even running 1 php-cgi.exe (somehow 2 start anyway), the server begins to misbehave. The first thing I notice when php-cgi.exe is running is that Windows services, whether stock or managed with FireDaemon, begin to lag, start slowly, crash, etc. If anyone can help me with this I would GREATLY appreciate it. At this point I am forced to look for an alternative to running PHP over CGI, as it simply takes out the whole box. On another note, I run this same version of Zend on a similar server with no issues, so I'm starting to think it's an IIS issue. (UPDATE) Installed the newest version of PHP, separate from Zend - same issue. Server specs: Intel Xeon quad-core w/ HT (Nehalem-based); 24GB DDR3 1333; 2x1TB RAID mirror (OS); 2x1TB RAID mirror (other); 4x2TB RAID 5 (storage); Server 2008 R2.

    Read the article

  • PXELinux and compressed kernels/images

    - by Yvan JANSSENS
    Is it possible to boot a compressed kernel with a compressed initrd using PXELinux? First, a little background: we created a custom Linux distro for diskless OpenCL computing nodes, and we want those nodes to fetch their OS from the network. Our distro is composed of a kernel (duh) and a large initrd, which is loaded into RAM and everything is executed from there. We chose to run everything off the initrd for two reasons: NFS was not an option for serving the filesystem's extra contents, and file access from RAM is fast. No persistent storage is needed; data and config are pulled dynamically through a SOAP service. Now, our initrd is about 450M in size. At our network speeds, it takes about two to three minutes to load a single client. Will compression speed up the download, and if yes, which one should be used? Is LZMA supported by PXELinux, or do we need to stick to bzip2 or gzip? Because of the 2-3 minute loading time, booting 15 nodes over the same network link takes quite a lot of time. We decided not to use hard drives or CD/DVD drives for financial reasons (cheapest HDD @ €30, times 15, is a lot of money saved ;-) ). So, our question is: what compression options are available for this setup, and how do we do it? Thank you for your time! Yvan Janssens
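    For what it's worth, kernels are normally shipped compressed already (bzImage), and it is the kernel, not PXELINUX, that unpacks the initramfs - PXELINUX itself just transfers whatever file it is given over TFTP. gzip is supported by every kernel; bzip2 and LZMA depend on the kernel's compiled-in CONFIG_RD_BZIP2/CONFIG_RD_LZMA decompressors. A minimal sketch, with file names as assumptions:

        # On the TFTP server: compress the initrd (gzip is the safe choice)
        gzip -9 initrd.img              # produces initrd.img.gz

        # pxelinux.cfg/default -- hypothetical boot entry
        DEFAULT opencl-node
        LABEL opencl-node
            KERNEL vmlinuz
            APPEND initrd=initrd.img.gz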

    Read the article

  • Nexenta, NFS and LOCK_EX

    - by Givre
    I'm currently running a LAMP architecture and I'm facing a big problem :( I have several HTTP web servers using PHP5, all of which mount the directory holding all the hosted websites via NFS (v3). The file server is a Nexenta Storage Appliance using ZFS. The problem is that every NFS client trying to write to a file over NFS gets stuck. This is inside the apache2 process:

        open("/nfs/website1/file.txt", O_RDWR|O_CREAT, 0600) = 11647
        fstat(11647, {st_mode=S_IFREG|0600, st_size=23754, ...}) = 0
        flock(11647, LOCK_EX

    The process never gets the lock and keeps waiting forever. The effect? All the apache2 processes end up occupied and waiting, and my servers can't handle any other requests because there are no free processes left. I don't know where to find a solution; to me it looks like it's on the NFS server side, but which configuration is wrong or missing? How can I find out what is wrong? If you need more information about the configuration, just ask me for whatever would help you most :)
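    One client-side knob worth knowing about while debugging: on NFSv3, flock() is emulated through the NLM sideband protocol (lockd/statd), and if that traffic is blocked or misconfigured, the flock() call blocks forever, exactly as in the trace above. As a diagnostic - not a fix for multi-server setups, since the other web servers would stop seeing the locks - a reasonably recent Linux client can be told to satisfy flock() locally. A hypothetical /etc/fstab line:

        # local_lock=flock makes flock() a client-local operation (Linux 2.6.37+)
        nexenta:/volumes/websites  /nfs  nfs  vers=3,hard,intr,local_lock=flock  0  0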

    Read the article

  • Why are all of my ZFS snapshot directories empty?

    - by growse
    I'm running an Oracle 11 box as a ZFS storage appliance, and I'm taking regular snapshots of the ZFS filesystems via cron. In the past, I knew that if I wanted to grab a particular file from a snapshot, a read-only copy was kept in .zfs/snapshot/{name}/ and I could just navigate there and pull the file out. This is documented on Oracle's website. However, I went to do this the other day and noticed that the ZFS directories within the snapshot directories are all empty. zfs list -t snapshot correctly shows the list of snapshots that should be present, .zfs/snapshot correctly contains a directory for each snapshot, and in each snapshot there is a directory present for each ZFS filesystem. However, these directories appear to be empty. I just tested a restore by touching a file in a little-used share and rolling back to the latest hourly snapshot, and this appears to have worked fine, so the rollback functionality is there. Did Oracle change how snapshots are done? Or is something seriously wrong here?
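    Two hedged things to check: each ZFS filesystem only exposes its own snapshots under its own mountpoint's .zfs directory, so in a snapshot of a parent dataset the mountpoints of child filesystems show up as empty directories - which matches the symptom above - and the snapdir property controls whether .zfs is even listed. Dataset names here are hypothetical:

        # Confirm which dataset each snapshot actually belongs to
        zfs list -t snapshot

        # Check snapshot-directory visibility on the dataset in question
        zfs get snapdir tank/share           # 'hidden' (default) or 'visible'

        # Look in the child filesystem's own .zfs, not the parent's
        ls /tank/share/.zfs/snapshot/hourly-latest/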

    Read the article

  • Why am I unable to mount my USB drive (unknown partition table)?

    - by Pat
    I'm a real newbie to Linux. Anyway, the problem is that my USB drive isn't recognized anymore, which is really annoying because I need information from it. I've read like a zillion threads on how to mount it manually, but I really can't get it to work. I hope it's just some easy, stupid problem where any of you could help me out quickly. Here is the syslog:

        kernel: [ 6872.420125] usb 2-2: new high-speed USB device number 11 using ehci_hcd
        mtp-probe: checking bus 2, device 11: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-2"
        kernel: [ 6872.556295] scsi8 : usb-storage 2-2:1.0
        mtp-probe: bus: 2, device: 11 was not an MTP device
        kernel: [ 6873.558081] scsi 8:0:0:0: Direct-Access SanDisk Cruzer 8.01 PQ: 0 ANSI: 0 CCS
        kernel: [ 6873.559964] sd 8:0:0:0: Attached scsi generic sg3 type 0
        kernel: [ 6873.562833] sd 8:0:0:0: [sdc] 15682559 512-byte logical blocks: (8.02 GB/7.47 GiB)
        kernel: [ 6873.564867] sd 8:0:0:0: [sdc] Write Protect is off
        kernel: [ 6873.564878] sd 8:0:0:0: [sdc] Mode Sense: 45 00 00 08
        kernel: [ 6873.565485] sd 8:0:0:0: [sdc] No Caching mode page present
        kernel: [ 6873.565495] sd 8:0:0:0: [sdc] Assuming drive cache: write through
        kernel: [ 6873.568377] sd 8:0:0:0: [sdc] No Caching mode page present
        kernel: [ 6873.568387] sd 8:0:0:0: [sdc] Assuming drive cache: write through
        kernel: [ 6873.574330]  sdc: unknown partition table
        kernel: [ 6873.576853] sd 8:0:0:0: [sdc] No Caching mode page present
        kernel: [ 6873.576863] sd 8:0:0:0: [sdc] Assuming drive cache: write through
        kernel: [ 6873.576871] sd 8:0:0:0: [sdc] Attached SCSI removable disk

    Thanks in advance
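    "unknown partition table" often just means the filesystem was written to the whole device rather than to a partition, so there is no /dev/sdc1 to mount. A hedged first thing to try (the device name is taken from the log above; the mount point is arbitrary):

        sudo mkdir -p /mnt/usb
        # Try mounting the raw device itself; most flash drives are FAT-formatted
        sudo mount -t vfat /dev/sdc /mnt/usb
        # If that fails, ask what is actually on the device before going further
        sudo file -s /dev/sdc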

    Read the article

  • Apache Getting Bogged Down by a Certain Script (wp-cron.php) - How to Kill the Process Automatically

    - by user50037
    I have a server that is running a number of WordPress blogs, and several of them have hundreds or thousands of posts. Every couple of days, the server slows to a crawl due to a file being run on WordPress called wp-cron.php. My entire Apache process log turns into this: http://imgur.com/A7K9k.png Multiply that by quite a bit, and the server stops responding. Each process takes up about 1.1% of RAM, and when we have 50 of them going it gets insane. Not all of them are coming from the same blog; they are pretty widespread. On the Apache process page of WHM, they are usually ALL in status "C", which means closing, but they can sit there until they crash the server, and they still hold the memory. Just google "wp-cron.php load" and you will find plenty of people with similar issues. In any case, we think it is down to users adding a tonne of dead "pinglists" to their WordPress installations, which WordPress then loops through endlessly. Problem number 1: does anyone have any other suggestions as to what would cause wp-cron.php to loop endlessly? I still think it is down to pings, because all of the people we have contacted about their account load going sky-high have had massive ping lists. Problem number 2: even if it is down to excessive ping lists in WordPress, we cannot babysit every single account on the server waiting for it to start spawning wp-cron processes. It often happens overnight, and I start getting SMS alerts at 2am about the load. I have CSF installed, which apparently would have ended the processes if they ran over a set time, but I have been told it won't catch these because they end up in the "closing" state (they show up as "C" on the Apache page of WHM); apparently CSF will only kill processes that are "running", which "C" does not count as. I have seen various other scripts, such as http://dltj.org/article/die-apache-die/ . I took a look at the stat file in /proc, but I was boggled as to which delimited field was the running time, and whether there was any way to connect it back to an actual Apache process so I could see which file was running (so I only close connections tied to wp-cron.php with a state of "C"). Overall, I know problem 2 glosses over the real cause, and I do put the whole thing down to excessive ping lists in WordPress, but I cannot sit there and babysit every single installation 24/7, so I need a way to save the server when I am not available. Any help would be much appreciated.
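    Along the lines the question asks for, a minimal cron-able watchdog sketch: kill PHP processes whose command line mentions wp-cron.php once they have run longer than a threshold. It assumes ps supports the etimes field (elapsed seconds; older procps only has the formatted etime) and that the script path shows up on the command line, which is true for suPHP/CGI-style setups but not for mod_php. The threshold and log path are arbitrary:

        #!/bin/bash
        # kill-wp-cron.sh -- hypothetical watchdog; run from cron every few minutes
        MAX_SECONDS=300

        ps -eo pid=,etimes=,args= | while read -r pid secs args; do
            case "$args" in
                *wp-cron.php*)
                    if [ "$secs" -gt "$MAX_SECONDS" ]; then
                        echo "$(date): killing $pid after ${secs}s: $args" >> /var/log/wp-cron-kills.log
                        kill -9 "$pid"
                    fi
                    ;;
            esac
        done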

    Read the article

  • Disk operations in Windows 7 are slow

    - by Skadlig
    My computer started lagging last Sunday. I tried to reboot it, and the reboot failed; booting into safe mode takes around two hours, and it mainly freezes on two files: scsiport.sys and classpnp.sys. When it has finally started, all disk operations are really slow. After it has run for a while, things get faster, probably because data has been moved into RAM. Before this, it froze on another file associated with Avast, but uninstalling Avast didn't really help. A critical Windows update was installed on Sunday, but rolling back the update didn't help either. I had a guess about the sound card, but disabling the sound card drivers also didn't help. I have an inkling that Intel Rapid Storage Technology might be acting up, but it won't let me reinstall it from safe mode, and I haven't been able to log into normal mode for a while. I would appreciate suggestions on how to get into normal mode again and/or what the root cause might be.

    Read the article

  • Is my "Generic" USB Flash Drive broken?

    - by Jesse J.
    So here is the situation. I consider myself technologically knowledgeable about many things (I love to code, whether it's websites, C#, C++, and so on). However, my 2 toddlers (my wife, actually) bought me a "Generic" 128 GB USB storage device (USB flash drive) for Father's Day. I thought "awesome" at first..... WRONG! Nothing but problems with it: 3-4 MB/s max transfer speed. I could bear with that. BUT! When I went to reformat my computer, I transferred the save files from my games over to the stick, and then the stick managed to become corrupted. A simple format wouldn't fix it either; it's screwed. I ran "chkdsk X: /X /F /R" with administrator rights (after manually changing the USB drive letter to X while troubleshooting). After a long session I got it to finish with no errors (I had to delete the log), and I finally recovered the files. However, when I go to use it (transfer to or from), it copies a couple of KB and then freezes. Windows 7 says: Name: ..., From: Folder (X:\File\Location), To: Folder (C:\Users\Username\Desktop), Items remaining: 0 (0 bytes), Speed: 0 bytes/second. It does this forever... and ever... and ever... It transferred 3 files at least, and then stopped. This is a new USB stick bought from a "high"-reputation company on eBay. Is the USB stick screwed?

    Read the article

  • MySQL start fails with Operating System error 13

    - by curious
    I have XAMPP on my Ubuntu Lucid system and everything worked fine, but now there seems to be some problem and MySQL won't start. I had tried to recover a few Drupal databases, and to do so I copied the raw files into the /opt/lampp/var/mysql folder alongside all the other database folders; I guess that could have caused the problem. I am pasting the last few lines of the error log. Someone please help me out.

        100814 15:17:47 mysqld_safe Starting mysqld daemon with databases from /opt/lampp/var/mysql
        100814 15:17:47 [Note] Plugin 'FEDERATED' is disabled.
        100814 15:17:47 [ERROR] Can't open shared library 'libpbxt.so' (errno: 0 API version for STORAGE ENGINE plugin is too different)
        100814 15:17:47 [Warning] Couldn't load plugin named 'PBXT' with soname 'libpbxt.so'.
        100814 15:17:48 InnoDB: Operating system error number 13 in a file operation.
        InnoDB: The error means mysqld does not have the access rights to
        InnoDB: the directory.
        InnoDB: File name /opt/lampp/var/mysql/ibdata1
        InnoDB: File operation call: 'open'.
        InnoDB: Cannot continue operation.
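    Operating system error 13 is EACCES (permission denied), which fits files copied in as root: mysqld runs as an unprivileged user and can no longer open ibdata1. A hedged fix, assuming XAMPP's mysqld runs as the usual mysql user (check with ps aux | grep mysql):

        # Give the data directory back to the user mysqld runs as
        sudo chown -R mysql:mysql /opt/lampp/var/mysql
        # Then try starting it again
        sudo /opt/lampp/lampp startmysql

    Note that raw InnoDB files generally can't be dropped into a datadir the way MyISAM folders can, so the recovered Drupal databases may still need a proper dump/restore.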

    Read the article

  • Moving from 1 Linux Partition to Many over USB Mount

    - by Mistiry
    We have devices which use Compact Flash for storage. They work OK, but we recently got industrial-grade CF cards to start using. One of the major problems we get is corruption of the flash card. As it stands, these flash cards run Debian with everything in a single partition; we want multiple partitions on the new industrial CF cards to help avoid some of the corruption problems. I booted up the device, attached a USB CF reader, and used fdisk to partition the CF card in the reader. How can I move the data to these partitions so that it all works? I have a partition for each of these directories: /lib, /var, /root, /boot, /tmp, /home, /etc, /, and swap space. I imagine I can't just use rsync - do I need to attach a second CF reader with a copy of the CF card, so that the source is not active and in use, and then copy from the first reader to the second? How will the system know where to find its files? I know I'd have to change fstab, but that resides in /etc, which will be on a separate partition... how will it find the fstab file if it can't find /etc? And what about GRUB? I'm at a loss; perhaps it's just because I'm under the weather, or I'm just missing a piece of logic here... Any help is greatly appreciated. This is somewhat urgent, as our existing stock is nearing its end and we don't want to purchase anything but these industrial cards, but we need to get them working with partitions.
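    A hedged sketch of the usual procedure: build the target tree under a temporary mountpoint and rsync into it partition by partition. The fstab worry is well-founded - /etc (and usually /lib) has to stay on the root partition, because fstab is read after root is mounted and before anything else is. Device names and the partition layout below are assumptions:

        # Mount the new root, then each new partition at its place in the tree
        mkdir -p /mnt/newroot
        mount /dev/sdb1 /mnt/newroot                 # hypothetical root partition
        mkdir -p /mnt/newroot/{boot,home,tmp,var,root}
        mount /dev/sdb2 /mnt/newroot/boot            # ...and so on for each partition

        # Copy everything, preserving ownership/links, skipping pseudo-filesystems
        rsync -aAXH --exclude={'/proc/*','/sys/*','/dev/*','/mnt/*'} / /mnt/newroot/

        # Point the new fstab at the new partitions, then reinstall the boot loader
        vi /mnt/newroot/etc/fstab
        grub-install --root-directory=/mnt/newroot /dev/sdb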

    Read the article

  • How to back up virtual machines on a standalone ESXi host?

    - by Massimo
    Standalone ESXi (4.1) host without any vCenter Server: how do I back up virtual machines as quickly and storage-efficiently as possible? I know I can access the ESXi console and use the standard Unix cp command, but this has the downside of copying the entire VMDK files, not just the space actually used; for a 30 GB VMDK of which only 1 GB is used, the backup would take the full 30 GB of space, and time accordingly. And yes, I know about thin-provisioned virtual disks, but they tend to behave very badly when physically copied and/or to blow up to their full provisioned size; they also aren't recommended for VM performance. It is OK for me to shut down the VMs before backing them up (i.e. I don't need "live" backups), but I need a way to copy them around efficiently; a way to automate shutdown/startup when taking a backup would also help. I only have ESXi - no Service Console, no vCenter Server. What's the best way to handle this task? And what about restores?
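    From the ESXi shell, the storage-friendly tool is vmkfstools, which can clone a VMDK with a thin destination format so the copy only occupies the blocks actually in use; vim-cmd covers the shutdown/startup half. A hedged sketch - the VM id, datastore names, and paths are hypothetical:

        # Find the VM's id and shut it down cleanly (needs VMware Tools in the guest)
        vim-cmd vmsvc/getallvms
        vim-cmd vmsvc/power.shutdown 42

        # Clone the disk thin-provisioned onto the backup datastore
        vmkfstools -i /vmfs/volumes/datastore1/vm1/vm1.vmdk \
                   -d thin /vmfs/volumes/backup/vm1/vm1.vmdk

        # Copy the small .vmx config alongside it, then power the VM back on
        cp /vmfs/volumes/datastore1/vm1/vm1.vmx /vmfs/volumes/backup/vm1/
        vim-cmd vmsvc/power.on 42

    Restores work the same way in reverse: vmkfstools -i the backup copy onto the main datastore and re-register the .vmx with vim-cmd solo/registervm.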

    Read the article

  • Windows 7 Sharing issue on RAID 5 Array(s)

    - by K.A.I.N
    Greetings all, I'm having a very odd error with a Windows 7 Ultimate x64 system. The network setup is as follows: 2x XP Pro 32-bit machines, 1x Vista Ultimate x64 machine, and 2x Windows 7 Ultimate x64 machines, all chained into 1x 16-port Netgear ProSafe gigabit switch; the Windows 7 and Vista machines are duplexed. There is also a router (Netgear RangeMax) chained off the switch. I am basically using one of the Windows 7 machines to host storage and stream media to the other machines. To this end I have put 2x 3TB hardware RAID 5 arrays in it, plus assorted other spare disks, and I have shared the roots of all of them. The unusual problems start when I get "Access denied, please contact administrator for permission..." when trying to access both of the RAID 5 arrays, but not the standalone disks. I have checked the permission settings; I have added Everyone to the read permission for the root; I have tried moving things into subdirectories and then sharing those. I have tried various setting combinations in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa, always with the same result. I have tried flushing caches all round, disabling and re-enabling shares, and sharing after a restart, as well as several other things, and the result is always the same: no problem on the individual drives, but access denied on both RAID arrays from the XP, Vista, and Windows 7 machines. One interesting quirk that may lead to an answer: there is no "offline status" information for the folders when you select the RAID 5s from a Windows 7 machine, yet there is on the normal drives, which say they are online. It is as if the array is present but turned off or spun down, but as far as I was aware, Windows will spin an array back up on a network request, and on the machine itself the drives seem to be online and can be accessed. I have to admit this has me stumped. Any suggestions? Thanks in advance for any fellow geek assistance.

    Read the article

  • How to back up servers to an SSH host with low traffic, versioning, and encryption?

    - by leto
    Hello, I had not run backups of my personal stuff for more years than I can remember, until I woke up lately and realised, contrary to my prior belief: actually, I care! :) Now I have a central data server at home to which I want to attach external media, and onto which I want to save backups of my most important stuff: years of self-written scripts, database dumps, you name it. I've tinkered with rsync+ssh over the last two years and also tried tar over ssh, but I don't yet know the simplest, most maintainable way to do it. Here's my workload: a typical LAMP server (<5GB data) which I'd like to back up fully - so lots of small files - connected via 10Mbit; my personal stuff (<750GB data) from a Mac connected via GbE; my passwords in an encrypted container (100MB) from OpenBSD connected via serial PPP; my e-mail from the last ten years (<25GB) as a Maildir, which I need to keep in readable format; and some archives (tar.*) which I need to back up only once and keep in readable format. What I need: 1. an SSH tunnel for data transfer; 2. speed with lots of small files; 3. kept revisions; 4. certainty that the data I save is not corrupted; 5. intelligent resume functions and tolerance of network congestion :); 6. compressed and optionally encrypted storage; 7. easy extraction of data from the backup (filesystem-like usage would be nice). How, and with what software, would you back up this stuff? Hints to tools that solve only part of the problem (like encryption) are also greatly appreciated. Greets
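    As one data point, here is a minimal sketch of the classic rsync-over-ssh snapshot scheme, which covers several of the numbered points (SSH transport, fast small-file handling, revisions via hard links, resume via --partial); compressed/encrypted storage and corruption checks would still need something underneath, such as an encrypted filesystem on the external media. The host and paths are hypothetical:

        #!/bin/sh
        # Daily snapshot: unchanged files are hard-linked against the previous
        # run, so each kept revision costs only the changed bytes.
        SRC="root@lampserver:/"
        DEST="/backup/lampserver"
        TODAY=$(date +%F)

        rsync -az --partial --delete -e ssh \
              --link-dest="$DEST/latest" \
              "$SRC" "$DEST/$TODAY/"

        ln -sfn "$DEST/$TODAY" "$DEST/latest"    # pointer to the newest snapshot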

    Read the article

  • 20 1TB drives vs. 10 2TB drives in RAID5/6 server

    - by Hunter
    Hi everyone, I will be setting up a server at work and I need some advice on the details. The setup will be one blade-type server (8-core, 16GB RAM) with two subsystems: one for the main storage, the other to back it up. I'm shooting for a 20TB array (I know it'll be less after formatting and parity drives). So is there any advantage one way or the other between 20 1TB drives and 10 2TB drives? I'm also not sure how many controllers I should have (the quote includes a dual-port controller); I would think two controllers would be a better choice than a single dual-port controller for a server of this size, but I really don't know. And would an array of this size have any performance issues in RAID 5 or 6 (I know RAID 5 and 6 are "slower" because of all the parity calculations)? Also, these will be either WD RE3 (1TB) or RE4 (2TB) drives. Oh, and for the backup array, would it be OK to use the WD 2TB Green drives (also in RAID 5 or 6)?
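    For what it's worth, the parity arithmetic alone (before filesystem overhead) separates the two layouts:

        RAID 6, 20 x 1TB: (20 - 2) x 1TB = 18TB usable
        RAID 6, 10 x 2TB: (10 - 2) x 2TB = 16TB usable
        RAID 5, 10 x 2TB: (10 - 1) x 2TB = 18TB usable

    More, smaller drives buy capacity and spindles at the cost of more failure candidates; fewer, bigger drives mean longer rebuild windows per failed disk.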

    Read the article

  • Bing Desktop not updating the wallpaper anymore

    - by warmth
    For some reason, first my workstation and then my tablet stopped updating the wallpaper. At first I thought it was my company preventing the app from working properly, but then I started noticing that the app itself is a mess. It has two storage locations and naming formats for the wallpapers: C:\Users\<username>\AppData\Local\Microsoft\BingDesktop\en-US\Apps\Wallpaper_5386c77076d04cf9a8b5d619b4cba48e\VersionIndependent\images, with a #####.jpg (single number) image format, and C:\Users\<username>\AppData\Local\Microsoft\BingDesktop\themes, with a ####-##-##.jpg (date) image format. I read here that after deleting the themes folder it gets remade with the new images, and that worked. However, those are not the files used by the wallpaper app, and deleting the images folder does not give the same result. I have added Bing Desktop to the firewall whitelist, and the issue is still there. Any ideas? Currently I'm using DisplayFusion to set the wallpaper manually, because the company doesn't allow changing wallpapers (policies). Note: I wrote to the DisplayFusion developers to suggest adding a feature to support Bing wallpapers. They told me there was no API to support it, but that they will study the possibility of a workaround for the future: http://stackoverflow.com/questions/10639914/is-there-a-way-to-get-bings-photo-of-the-day

    Read the article

  • Linux kernel buffer memory is zero

    - by user64772
    Hi all. There is one question I can't find an answer to on Google. I have many Linux boxes, mostly SLES or openSUSE, with different versions and kernels. On some of them I am faced, from time to time, with a problem of slow Oracle transactions, and when I log in to the box at such a time I see that Oracle is blocked in the kernel function sync_page:

        # while :; do ps axo stat,pid,cmd,wchan | egrep '^D|^R'; echo --; sleep 5; done
        D     3483 hald-addon-storage: polling   ide_do_drive_cmd
        Ds    4635 ora_dbw0_orcl                 sync_page
        Ds    4637 ora_lgwr_orcl                 sync_page
        Ds    4639 ora_ckpt_orcl                 sync_page
        D    11210 oracleorcl (LOCAL=NO)         sync_page
        D    12457 [smtpd]                       sync_page
        R+   12458 ps axo stat,pid,cmd,wchan     -
        --
        Ds    4635 ora_dbw0_orcl                 sync_page
        Ds    4637 ora_lgwr_orcl                 sync_page
        Ds    4639 ora_ckpt_orcl                 sync_page
        D    11210 oracleorcl (LOCAL=NO)         sync_page
        R+   12501 ps axo stat,pid,cmd,wchan     -
        --

    (The same processes stay blocked in state D across repeated samples.) So I thought the box had run out of memory for disk buffers, but memory looks fine:

                     total       used       free     shared    buffers     cached
        Mem:       4149084    3994552     154532          0          0    2424328
        -/+ buffers/cache:    1570224    2578860
        Swap:      3148700     750696    2398004

    I think this is the problem: the buffers figure is zero, so we must be writing directly to disk. But why is it zero? I tried to google it and found nothing. Can anyone help?

    Read the article

  • Automatic switching of network card when VM is moved

    - by spock
    I have two hosts in a pool, and I used to be able to move VMs around and they would start without any problem. But after I played around with some network setting (which I don't remember), I started getting the message "This VM needs storage that cannot be seen from that server". As you can tell, I am a beginner with XenServer. Here is the very simple environment: 2 host servers, each with its own local hard disk and network card; one is the pool master. Problem: power off a VM and move it from one server to another, or clone a VM to the other server. It used to start right away. Now I need to delete the network that does not belong to the server, and then it will start; otherwise, the above error message pops up. The two networks (one for each network card in each host) are in the Networking tab of the VM, as well as in each host's Networking tab. I googled, but all I found was advice to empty the DVD drive, which is not the problem here. Thanks in advance!

    Read the article

  • Incorrect Internal DNS Resolution

    - by user167016
    I'm having a DNS issue. Server 2008 R2. The first clue was that after being off the network for a month, I could no longer Remote Desktop into my workstation by name, it wouldn't find it. Both via VPN and internally. But if I connect using its IP, that works. Now I notice in the server's Share and Storage Management, in Manage Sessions, it's displaying the incorrect computer name for some users. So I try, for one example: Ping -a 192.168.16.81 Pinging BOBS_COMPUTER.ourdomain.local [192.168.16.81] with 32 bytes of data: - replies all successful Then I try Ping RICHARDS_COMPUTER Pinging RICHARDS_COMPUTER.ourdomain.local [192.168.16.81] with 32 bytes of data: -all replies successful In DHCP, .81 belongs to RICHARDS_COMPUTER I did try flushdns. Not sure if this is related, apologies if it's not, but when I try to connect, I also get prompted: "The identity of the remote computer cannot be verified. Do you want to connect anyway? The remote computer could not be authenticated due to problems with its security certificate. It may be unsafe to proceed.." It then lists the correct name as the name in the certificate from the remote computer, but claims that the certificate is not from a trusted authority. Any thoughts are most appreciated!

    Read the article

  • USB disk drive cannot be formatted or accessed

    - by Dmolish
    So I have just recently bought an 8GB USB stick (Kingston DT 100 G2), on which I had installed Linux. However, I needed to reinstall said Linux, so I formatted the stick back to "default" settings, which includes a FAT32 filesystem. Later, when the install process kept hitting errors, I got advice that the problem might be the FAT filesystem. I decided to try formatting the stick to NTFS (format G: /fs:ntfs), but the format failed and the drive broke down. By "broke down" I mean you cannot access the drive anymore, and when you plug it in, Windows asks whether you want to format the drive - but despite my wishes, the format always fails. To fix this I tried changing it back to FAT32 (format G: /fs:fat32), but I get "Error in IOCTL call". The second thing I tried was resetting the filesystem with a third-party application such as the HP USB Disk Storage Format Tool, but those programs didn't recognize any media in the drive. So now I have no idea what to do next. Is the drive recoverable, or did I just create a piece of waste metal?
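    One hedged thing still worth trying before writing the stick off: diskpart's clean command wipes the partition table and whatever half-written filesystem the ordinary format is choking on. If even clean fails, the drive's controller has most likely died. (Double-check the disk number from list disk before cleaning!)

        REM From an elevated Command Prompt
        diskpart
        DISKPART> list disk
        DISKPART> select disk 2
        DISKPART> clean
        DISKPART> create partition primary
        DISKPART> format fs=fat32 quick
        DISKPART> assign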

    Read the article
