Search Results

Search found 46073 results on 1843 pages for 'rating system'.


  • iSCSI, failover and XenServer

    - by jemmille
    I have an iSCSI failover implementation set up so that if one of my storage units fails, the other takes over immediately (it also runs the NFS shares). When failover occurs, volumes are exported, the IP is switched to the other machine, and the targets are reconfigured. The failover of the storage system itself works just fine. I use NexentaStor for my filer. Note: I run the admin VMs on NFS and customer VMs on iSCSI.

    When I do a test (manual) failover of my storage, the following occurs:

    - All NFS-based VMs remain up and working perfectly through the failover and afterwards.
    - All VMs running on iSCSI eventually report an error about not being able to write to a particular block, then an error about journaling not working, and then the file system goes read-only.

    To get the VMs working again I have to do the following:

    1. Force shutdown of the "broken" VMs.
    2. Detach the iSCSI SR.
    3. Re-attach the iSCSI SR.
    4. Boot the VM on a different server (there are 5 in my pool). If I don't boot on a different server I get this error: Internal error: Failure("The VDI <uuid> is already attached in RW mode; it can't be attached in RO mode!"). The only way I have found to fix that error is to reboot the entire server the VM was previously running on, which is obviously a huge pain.

    Currently multipathing is NOT enabled (it can be, and the same thing still occurs). I have edited much of the /etc/iscsid.conf file to adjust the timeout settings, but to no avail. In short, my storage fails over properly but XenServer does not keep the connection alive. As a thought, the error that shows up in step 4 above might be the ultimate cause, and fixing that would fix everything? Any help would be appreciated more than you know.
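    For reference, a minimal sketch of the two knobs mentioned above, assuming a standard open-iscsi setup on the XenServer hosts; the timeout value and the SR/PBD UUIDs are placeholders, not tested settings:

        # /etc/iscsid.conf -- keep the iSCSI session queued through a failover instead of
        # returning errors to the VMs (value in seconds; pick one longer than your failover time)
        node.session.timeo.replacement_timeout = 7200

        # Re-attach the iSCSI SR by hand instead of through XenCenter:
        xe sr-list type=lvmoiscsi              # find the SR UUID
        xe pbd-list sr-uuid=<sr-uuid>          # find the PBD for this host
        xe pbd-unplug uuid=<pbd-uuid>          # detach
        xe pbd-plug uuid=<pbd-uuid>            # re-attach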

    Read the article

  • Tomato OS: "memory exhausted" running vi .... how to solve?

    - by Sam Jones
    I have set up Tomato (Shibby) on an ASUS RT-N66U router. It works great. I loaded up a few pieces, like Transmission and Optware. I can run vi, but when I open a file with it, it fails with a "memory exhausted" error and the terminal session hangs.

    For reference: if I simply start vi with no arguments it runs fine, but if I give vi a file to open I get the memory exhausted error, even if the file I am opening is just a couple of hundred bytes in size (like fstab). I discovered that my swap partition was not properly set up, so I fixed that. The swapon command now indicates I really do have swap:

        [root@MyRouter samba]$ swapon -s
        Filename     Type        Size       Used   Priority
        /dev/sda1    partition   32900860   0      1

    How can I get vi to work? Thanks!

    System setup reference information: ASUS RT-N66U router, 2 TB USB hard drive, running Samba.

    Partitions on the hard drive:

        Disk /dev/sda: 2000.4 GB, 2000398839808 bytes
        255 heads, 63 sectors/track, 30400 cylinders
        Units = cylinders of 16065 * 4096 = 65802240 bytes
        Disk identifier: 0xfacbc8ab
           Device Boot   Start   End     Blocks       Id   System
        /dev/sda1        1       512     32900868     82   Linux swap / Solaris
        /dev/sda2        513     29000   1830638880   83   Linux

    Memory:

        $ cat /proc/meminfo
        MemTotal:     255840 kB
        MemFree:      210980 kB
        Buffers:        5264 kB
        Cached:        22768 kB
        SwapCached:        0 kB
        Active:        20272 kB
        Inactive:      11448 kB
        HighTotal:    131072 kB
        HighFree:      99868 kB
        LowTotal:     124768 kB
        LowFree:      111112 kB
        SwapTotal:  32900860 kB
        SwapFree:   32900860 kB
        Dirty:             0 kB
        Writeback:         0 kB

    TIA!
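    A couple of quick checks worth running from the router's shell: which vi binary is actually being used (Tomato's is usually the BusyBox applet), and whether Optware offers a standalone vim as a workaround. The package name is an assumption and may differ between Optware feeds:

        # which vi is actually being run? (BusyBox applets are symlinks to /bin/busybox)
        which vi
        ls -l $(which vi)

        # confirm memory and swap as the kernel sees them right before editing
        free
        swapon -s

        # possible workaround: install a full vim from the Optware feed
        # (package name assumed; check `ipkg list | grep vim` first)
        ipkg update
        ipkg install vim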

    Read the article

  • IPv4 not enabled in Windows 7

    - by RidDeBakTiYar
    I have a netbook with Windows 7 Starter edition. I am not able to get IPv4 enabled/installed on the netbook and hence cannot network or connect to the Internet. It says "Limited access" when I check connections.

    It all began when my system crashed and I could not get past the boot screen. The system advised me to try a repair with the original OS CD, which I did. After that the netbook boots, but it cannot network. On closer study I found that the network adapter has both the IPv4 and IPv6 protocols bound, but with a difference: the IPv4 entry does not have its "Properties" button enabled for configuration, while the IPv6 entry does. My wired and wireless networks at home are IPv4 only, so the netbook cannot connect.

    I have tried uninstalling the adapter to allow Windows to automatically detect and reinstall it, and using the "netsh" command to uninstall and reinstall the protocol, but with no change in status. There is a "Have Disk" option under Add Protocol, but I don't know how to point it at the standard IPv4 protocol for installation (it asks for a *.inf file). Any help in solving the issue will be very much appreciated.

    Read the article

  • Need clear steps on how to convert a Windows 2000 Server to a XenServer VM

    - by Jay
    The source system is not local. The target host running XenServer is not local. The source system is running Windows 2000 Server SP4 and has 1 disk split into 6 partitions, all NTFS:

        C:  6 GB (boot)
        D: 15 GB
        E:  6 GB
        F:  6 GB
        G:  5 GB
        H: 26 GB

    Most of the partitions are mostly full (> 60%). What is the most straightforward way to do a P2V migration of the server? I can do minor database and data syncs after the P2V is successful and running as a VM within XenServer; it's just getting to that point which is not clear. The option of installing Windows 2000 Server from scratch is not available; I need to convert the existing physical server as-is into a VM to be hosted within a XenServer environment. I've looked at XenConvert, but it maxes out at converting only 4 partitions in one shot, and I'm not certain how to account for the 2 extra partitions. I'm not familiar with XenServer, but it's my only option right now to go P2V.
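    For what it's worth, one generic route when vendor P2V tools fall short is to image the whole disk rather than individual partitions. A rough sketch of that idea, assuming the source box can be booted from a Linux live CD and the downtime is acceptable; hostnames, sizes, and UUIDs below are placeholders:

        # On the source server, booted from a live CD: stream a raw image of the whole disk
        dd if=/dev/sda bs=1M | gzip -c | ssh user@staging-host 'gunzip -c > /images/w2k.raw'

        # On the staging host: sanity-check the image
        qemu-img info /images/w2k.raw

        # On the XenServer pool: create a VDI of matching size and import the raw image into it
        xe vdi-create sr-uuid=<sr-uuid> name-label=w2k-disk virtual-size=68719476736 type=user
        xe vdi-import uuid=<vdi-uuid> filename=/images/w2k.raw

    Note that this only moves the disk contents; getting Windows 2000 to boot on the new virtual hardware (storage drivers, HAL) is a separate step.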

    Read the article

  • ASUS P8B WS - Endless Reboots

    - by tuxGurl
    I am running an Intel Xeon 1245 with 2x 4 GB Kingston ECC unbuffered DDR3 memory on an ASUS P8B WS motherboard, BIOS version 0904 x64. This system is a little over a month old and is running Ubuntu 11.10.

    This evening I found the machine turned off. When I tried to restart it, it would POST and stop at the GRUB screen. When I selected Ubuntu and hit Enter, within 2-3 seconds the system would shut down and restart. If I stayed at the GRUB screen and did nothing, the system would not cut out. I tried booting off a USB stick and again, 2-3 seconds after selecting "Try Ubuntu without installing", the machine cuts power and reboots.

    Things I have tried so far:

    - Resetting the BIOS using the onboard jumper
    - Resetting the BIOS settings to defaults
    - Disconnecting all external hardware except keyboard and monitor
    - Booting with 1 stick of RAM (I tried different single sticks)
    - Ensuring the onboard EPU and GPU Boost switches are in the off position

    I am running Memtest86 right now and it has been going for 38+ minutes. This is not an OS problem or an overheating issue (I have a Cooler Master HAF case with 3 fans besides the CPU fan). I am at a loss as to what to try next. I think the BIOS is misconfigured somehow, but I don't know what to look for.

    Read the article

  • Qmail Patching Makes me Nervous

    - by JM4
    We have a system running CentOS 5 with Plesk 8.6 and qmail. Our primary domain is hosted through Media Temple. When Plesk and qmail are hosted on a single dedicated virtual server, outgoing mail is stamped with the primary server IP and domain rather than the sending domain. Our pages are written in PHP, so we are using the mail() function. While our email goes out to everybody, several enterprise email domains reject our email because it shows an originating IP (our primary server IP and domain) different from the domain we list in the 'from' address. This is not modifiable. Every domain we own does, of course, have its own IP underneath our primary server IP.

    I have seen several places online that provide a patch, specifically one that allows domain binding:

        "DomainBindings -- For servers that host multiple domains or have multiple IP addresses assigned to them, it is sometimes useful (or important) to have qmail use a specific IP address for its outgoing mail. By default, qmail uses whatever address the OS chooses for all outbound connections. With this patch, you can specify which address to use. It uses a control file similar to smtproutes to specify the outbound IP address to use, based on the sender's domain." (pyropus.ca)

    First off, I do not have netqmail installed, so I'll need to find another source; but I am also completely unfamiliar with applying patches to qmail. Will I lose email services if I patch? Is it a simple apply-and-use process? Will my existing email accounts and data be restored after the patch? I am very, very new to Unix/Linux, so this does make me a bit nervous, but I am the only person who can make the change and it is one our company HAS to have. Any ideas?
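    Not an answer to whether the patch itself is safe, but a minimal before/after safety net, assuming Plesk's stock qmail init script and the default qmail paths; the paths are assumptions and worth verifying on the box first:

        # stop mail delivery and take a full copy of the qmail tree and Plesk config
        /etc/init.d/qmail stop
        tar czf /root/qmail-backup-$(date +%F).tar.gz /var/qmail /etc/psa 2>/dev/null

        # ... apply or rebuild the patched qmail here ...

        # bring mail back up and confirm the control files still parse
        /etc/init.d/qmail start
        /var/qmail/bin/qmail-showctl | head -40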

    Read the article

  • Installing SQLServer 2005 on Windows 7 64bit

    - by Mostafa
    Hi, for 3 days I've been trying to install SQL Server 2005 under Windows 7 64-bit on my computer. Let me tell you what I've done and what I've got so far:

    1. I installed Windows 7 64-bit on my computer.
    2. I tried to install SQL Server 2005 Developer Edition.
       - On the "System Configuration Check" page I received 2 warnings, one for "IIS Feature Requirement" and another for "ASP.NET Version Registration Required".
       - I installed Internet Information Services from the "Turn Windows features on or off" section in Control Panel.
       - I enabled the 32-bit reporting service via Inetpub > AdminScripts > adsutil.vbs.
       - At this stage there were no warnings left in the System Configuration Check.
    3. So I installed SQL Server 2005 Developer Edition with all default settings.
    4. I installed SQL Server 2005 Service Pack 3 (64-bit).

    Now when I run Management Studio there is no name in the "Server name" section. I typed my computer name, or ".", and I got this error:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 2)

    I googled this error and some people said to follow this instruction: Start > SQL Server 2005 > Configuration Tools > SQL Server Surface Area Configuration > Surface Area Configuration for Services and Connections. But there I got this error:

        No SQL Server 2005 components were found on the specified computer. Either no components are installed, or you are not an administrator on this computer. (SQLSAC)

    I'm really tired because of this, and I don't know what's wrong. Some more information: I have no additional software on my computer, like antivirus or proxy; I tried all the steps with the Standard Edition as well, with no difference; my user is an administrator; and I have gone through all these steps, including re-installing Windows 7, more than 5 times. Please help me, I'm losing all my hair.

    Read the article

  • Can't Connect To Local Mysql Using IP Address, but CAN connect from remote server

    - by user1782041
    Here's an interesting one that does not seem to fall into any of the MySQL connection issues I've read about or searched for.

    On an Ubuntu 12.04 box I had some system updates waiting to install, and I took care of that this evening. After the install, I started seeing some errors in my syslog complaining about a particular PHP script that could no longer connect to the MySQL instance on the box. Here is the specific error:

        PHP Warning:  mysql_connect(): Can't connect to MySQL server on '192.168.0.40' (4)

    Now, the server's IP address is 192.168.0.40, and I've checked to make sure that I have MySQL listening on 0.0.0.0 so that I can connect using either "localhost" or "192.168.0.40". Here's where things get odd. From the local machine, if I try the following:

        mysql -uroot -p -h192.168.0.40

    I get this error:

        ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.0.40' (110)

    I've checked, and error 110 indicates an OS timeout, while error 2003 is the MySQL generic "can't connect" error. This indicates that it is not a permissions problem with the user. However, if I do the same thing from a remote machine (say, from 192.168.0.30), I log right in with no problems. Further, other scripts on the local machine that connect to MySQL using "localhost" for the host rather than "192.168.0.40" connect with no problems. Also, I can connect via the MySQL socket with no problems, both from the command line and from PHP scripts.

    So this feels like a networking issue of some kind on the local box, but there are no iptables rules on this box (it is firewalled externally) and I can't figure out what else may be causing this. The problematic script worked perfectly prior to the latest system update. For now, I'll simply change the script to connect via localhost, but I'd really like to know why it broke, for 2 reasons:

    1. There may be other scripts that connect using 192.168.0.40 that don't run very often which are now broken. Auditing them all will take more time than I feel like devoting at the moment.
    2. I'm curious, and want to know why it broke so I can fix it correctly.

    Any help?
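    A few local checks that can help narrow down where the host-to-its-own-IP path is breaking. These are generic diagnostics rather than a known fix, and the config path assumes a stock Ubuntu MySQL install:

        # is mysqld really bound to 0.0.0.0:3306 after the update?
        sudo netstat -tlnp | grep 3306          # or: ss -tlnp | grep 3306
        grep -n bind-address /etc/mysql/my.cnf

        # which route and source address does the kernel pick for the server's own IP?
        ip route get 192.168.0.40
        ip addr show

        # confirm there really are no packet filters in the way
        sudo iptables -L -n -v

        # try a plain TCP connect, bypassing the mysql client entirely
        nc -vz 192.168.0.40 3306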

    Read the article

  • With a username passed to a script, find the user's home directory

    - by Clinton Blackmore
    I am writing a script that gets called when a user logs in and checks whether a certain folder exists or is a broken symlink. (This is on a Mac OS X system, but the question is purely bash.) It is not elegant, and it is not working, but right now it looks like this:

        #!/bin/bash
        # Often users have a messed up cache folder -- one that was redirected
        # but now is just a broken symlink. This script checks to see if
        # the cache folder is all right, and if not, deletes it
        # so that the system can recreate it.

        USERNAME=$3

        if [ "$USERNAME" == "" ] ; then
            echo "This script must be run at login!" >&2
            exit 1
        fi

        DIR="~$USERNAME/Library/Caches"
        cd $DIR || rm $DIR && echo "Removed misdirected Cache folder" && exit 0
        echo "Cache folder was fine."

    The crux of the problem is that the tilde expansion is not working as I'd like. Let us say that I have a user named george, and that his home folder is /a/path/to/georges_home. If, at a shell, I type:

        cd ~george

    it takes me to the appropriate directory. If I type:

        HOME_DIR=~george
        echo $HOME_DIR

    it gives me:

        /a/path/to/georges_home

    However, if I try to use a variable, it does not work:

        USERNAME="george"
        cd ~$USERNAME
        -bash: cd: ~george: No such file or directory

    I've tried using quotes and backticks, but can't figure out how to make it expand properly. How do I make this work?
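    Tilde expansion happens before variable expansion, so ~$USERNAME is never expanded. A small sketch of two ways to look the home directory up instead, assuming Directory Services on Mac OS X for the dscl line, with a generic bash fallback:

        #!/bin/bash
        USERNAME=$3

        # On Mac OS X, ask Directory Services directly for the user's home folder
        HOME_DIR=$(dscl . -read "/Users/$USERNAME" NFSHomeDirectory | awk '{print $2}')

        # Generic fallback: let eval re-run tilde expansion after $USERNAME is substituted
        # (only reasonable here because $USERNAME comes from the login hook, not arbitrary input)
        [ -z "$HOME_DIR" ] && HOME_DIR=$(eval echo "~$USERNAME")

        DIR="$HOME_DIR/Library/Caches"
        echo "Caches folder for $USERNAME is: $DIR"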

    Read the article

  • Prestashop is not saving Memcached settings

    - by ianenri
    I have an issue with the admin site of PrestaShop. I'm trying to activate the caching system, but it doesn't save the setting. When I add a server (I'm using an Amazon ElastiCache server) it saves it, but when I select the enable option and click Save, it redirects me to "Back Office > Preferences > Performance" as a blank page with only the admin tabs visible. When I go back to those settings, I see that the caching option is still disabled.

    There is also a warning: "To use Memcached, you must install the Memcache PECL extension on your server. http://www.php.net/manual/en/memcache.installation.php", even though I already installed memcached via yum.

    I also tried modifying the settings.inc.php file, editing

        define('_PS_CACHE_ENABLED_', '0');

    to:

        define('_PS_CACHE_ENABLED_', '1');

    But then I get a 500 Internal Server Error on every page, so I prefer to leave it as before. Any ideas? I'm using PrestaShop 1.4.6.2 with Nginx 1.0.11 and PHP-FPM 5.3.8 on a CentOS 5.7 system.
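    The warning refers to the PHP-side memcache extension rather than the memcached daemon itself. A quick way to check for it, and one possible route to install it for PHP-FPM; package names and the init script path are assumptions for this CentOS/PECL setup:

        # is the extension actually loaded by the PHP that php-fpm runs?
        php -m | grep -i memcache

        # one way to install it if missing (requires the PECL toolchain; names may differ)
        pecl install memcache
        echo "extension=memcache.so" > /etc/php.d/memcache.ini

        # restart PHP-FPM so the new extension is picked up
        /etc/init.d/php-fpm restart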

    Read the article

  • 64 bit vs 32 bit

    - by user53864
    When I was doing my MCSA course, I was taught the following:

    - With a 32-bit processor, only a 32-bit operating system can be installed.
    - With a 64-bit processor, both 32-bit and 64-bit operating systems can be installed.

    In other words, a 64-bit OS cannot be installed on a 32-bit processor. I just want to confirm those points, because recently I was asked to install Windows Server 2008 R2 Enterprise, and during installation it showed only x64 and simply installed it. I had been thinking that all the computers in my office have 32-bit processors. If so, how could it be possible to install an x64 OS on a 32-bit processor? Either I'm wrong about the first point, or the processor may actually be 64-bit (I don't know how to check). I'm confused...

    One thing I do know is that the benefit of 64-bit over 32-bit is faster operation. If anyone could tell me other benefits, that would be helpful. Thanks!

    Read the article

  • Kernel panic while loading Mac OS X on VMWare

    - by Vladimir Gritsenko
    I'm trying to run Mac OS X 10.6.3 on VMware 7.1 from within Windows 7 64-bit. Trying to run OS X doesn't work; shortly after booting, VMware announces the machine's death with this pop-up message:

        The CPU has been disabled by the guest operating system. You will need to power off or reset the virtual machine at this point.

    The console has a more interesting announcement:

        The real-time clock was not properly initialized on your system!

    along with a dump of the detected CPU speed (~2.9 GHz), FSB (~94 MHz) and bus ratio (31). Somehow, the code that panics is documented in an accidental .diff file here. Apparently, it commits seppuku if the bus ratio is greater than 30. I underclocked my E6500 to a slower speed, but this apparently didn't make any difference.

    I can think of two possibilities right now:

    1. The CPU info being read is constant and defines the maximum ability of the CPU, in which case it appears I'm screwed.
    2. VMware presents its own CPU info to the machine, which I can perhaps somehow change. If so, how?

    If these sound completely off, that's because I'm a real newbie in these matters. Here's hoping you guys can show me the light :-)

    Read the article

  • Video card not detected on Lenovo T410 in Linux

    - by wich
    I have a T410 with an NVIDIA NVS 3100M. This is not a hybrid system; there is no Optimus (no option in the BIOS for Optimus, and lspci in Linux as well as the Windows Device Manager only show the NVIDIA card). Using lspci I see the GPU as a present device. However, I cannot, for the life of me, get any video driver to work that will let me start an X session; every time, X craps out with the error:

        (EE) No devices detected.

    I have tried the NVIDIA binary blob (with nvidia-xconfig, and made sure there is no nvidia support in the kernel), I have tried nouveau, I have tried nv, I have even tried generic vesa. Nothing works.

    When I compare the dmesg output I get when loading the nvidia kernel module with that of another system that also has an NVIDIA card, I see that some lines are missing; specifically, the line mentioning the GPU name (3100M) is not there.

    I have checked every option in the BIOS; there is nothing to control except for the BIOS video output port, which is set to the LCD panel. I have no idea anymore what the problem may be, or even how I can diagnose it further. Any help will be appreciated.
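    A couple of generic checks that can tell whether this is a driver-binding problem or X simply not finding the card on the PCI bus; the BusID value below is a placeholder to be read off your own lspci output:

        # confirm the GPU is on the bus and see which kernel driver (if any) claims it
        lspci -nnk | grep -iA3 'vga\|3d'

        # what X itself probed (look for the PCI scan and "No devices detected")
        grep -iE 'nvidia|nouveau|PCI' /var/log/Xorg.0.log | head -40

        # if X is scanning the wrong bus address, pin it explicitly in xorg.conf:
        #   Section "Device"
        #       Identifier "nvidia"
        #       Driver     "nvidia"
        #       BusID      "PCI:1:0:0"    # placeholder -- take this from lspci
        #   EndSection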

    Read the article

  • HP c6180 driver refuses to install - "must reboot to continue delete files and continue install" loop

    - by Aszurom
    A user called me last night; they can't get the HP drivers to install for their printer. It's a 500 MB download for the latest C6180 driver. The PC says "computer must be restarted to delete some files, then setup will continue" and it never gets past that point (that's not the exact phrasing of the error, and I'm not on the system now).

    The target machine was XP SP2; I upgraded it to SP3 and fully patched it. I ran Malwarebytes, Spybot and HijackThis, and turned off all startup entries. I noted that Windows Update also fails on optional printer driver updates for other printers installed on the machine. I went to the print server properties, removed all printer drivers from the system, and deleted all printer ports. I made sure no services were marked disabled.

    After 5 hours of fighting this I started to get desperate and uninstalled anything on the machine that referenced HP at all. Still no cure. I'm 6 hours into this and completely stumped. Google returns nothing applicable except unanswered requests for help on the same topic.

    Read the article

  • 554 - Sending MTA’s poor reputation

    - by Phil Wilks
    I am running an email server on 77.245.64.44 and have recently started to have problems with remote delivery of emails sent using this server. Only about 5% of recipients are rejecting the emails, but they all share the following common message:

        Remote host said: 554 Your access to this mail system has been rejected due to the sending MTA's poor reputation.

    As far as I can tell my server is not on any blacklists, and it is set up correctly (the reverse DNS checks out and so on). I'm not even sure what the "sending MTA" is, but I assume it's my server. If anyone could shed any light on this I'd really appreciate it! Here's the full bounce message:

        Could not deliver message to the following recipient(s):

        Failed Recipient: [email protected]
        Reason: Remote host said: 554 Your access to this mail system has been rejected due to the sending MTA's poor reputation. If you believe that this failure is in error, please contact the intended recipient via alternate means.

        -- The header and top 20 lines of the message follows --

        Received: from 79-79-156-160.dynamic.dsl.as9105.com [79.79.156.160] by mail.fruityemail.com with SMTP; Thu, 3 Sep 2009 18:15:44 +0100
        From: "Phil Wilks"
        To:
        Subject: Test
        Date: Thu, 3 Sep 2009 18:16:10 +0100
        Organization: Fruity Solutions
        Message-ID:
        MIME-Version: 1.0
        Content-Type: multipart/alternative; boundary="----=_NextPart_000_01C2_01CA2CC2.9D9585A0"
        X-Mailer: Microsoft Office Outlook 12.0
        Thread-Index: Acosujo9LId787jBSpS3xifcdmCF5Q==
        Content-Language: en-gb
        x-cr-hashedpuzzle: ADYN AzTI BO8c BsNW Cqg/ D10y E0H4 GYjP HZkV Hc9t ICru JPj7 Jd7O Jo7Q JtF2 KVjt;1;YwBoAGEAcgBsAG8AdAB0AGUALgBoAHUAbgB0AC0AZwByAHUAYgBiAGUAQABzAHUAbgBkAGEAeQAtAHQAaQBtAGUAcwAuAGMAbwAuAHUAawA=;Sosha1_v1;7;{F78BB28B-407A-4F86-A12E-7858EB212295};cABoAGkAbABAAGYAcgB1AGkAdAB5AHMAbwBsAHUAdABpAG8AbgBzAC4AYwBvAG0A;Thu, 03 Sep 2009 17:16:08 GMT;VABlAHMAdAA=
        x-cr-puzzleid: {F78BB28B-407A-4F86-A12E-7858EB212295}

        This is a multipart message in MIME format.

        ------=_NextPart_000_01C2_01CA2CC2.9D9585A0
        Content-Type: text/plain; charset="us-ascii"
        Content-Transfer-Encoding: 7bit
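    Since the receiving side is scoring the connecting IP, the things worth checking from a shell are the reverse DNS, the hostname it forward-resolves to, and the common DNS blacklists. A sketch of those lookups; zen.spamhaus.org is just one example list, and mail.fruityemail.com is taken from the bounce headers:

        # reverse DNS of the sending IP -- should resolve, and forward-confirm back to the same IP
        dig +short -x 77.245.64.44
        dig +short mail.fruityemail.com

        # query a DNSBL: the IP is reversed and prepended to the list's zone;
        # no answer means "not listed", a 127.0.0.x answer means "listed"
        dig +short 44.64.245.77.zen.spamhaus.org

        # check the sending domain's SPF record covers this IP
        dig +short txt fruityemail.com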

    Read the article

  • How can I do an SELINUX filesystem relabel without rebooting first?

    - by Skaperen
    I can touch the file /.autorelabel and reboot, and during initialization on the way back up the system will do the SELinux relabel for me. But I want to do this in a different situation, where the system has just been copied to a hard drive image. I can chroot to the originating file tree, or chroot to the just-populated device image, and run it; I just can't find anything that says what to run.

    This image is being made into an AMI on AWS EC2 and contains CentOS 6.3. The time the relabel takes is too long (6 minutes or more), so I want to move it to the image build, where the extra time is not an issue (because it happens once instead of every time an AMI is launched). I can make the relabel the very last thing just before the filesystem is unmounted for the last time, before it becomes an AMI and is launched. I just need to know what to call to do it.

    I have searched man pages with no luck. I have searched the system init scripts, but where /.autorelabel is detected it is unclear what is happening. Documents like http://www.centos.org/docs/5/html/5.2/Deployment_Guide/sec-sel-fsrelabel.html only describe approaches that still do the work after a reboot. I need the work done BEFORE the "reboot" (unmount, build AMI, launch ready to go). The big point is: yes, there will be a reboot, but I want the relabel done before that so it isn't repeated every time an AMI is launched (because it takes so long).
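    What the /.autorelabel path ends up running is essentially setfiles/fixfiles against the policy's file_contexts, and that can be pointed at a mounted image at build time. A sketch; the mount point is a placeholder and the targeted policy is assumed (the CentOS 6 default):

        # image filesystem mounted at /mnt/img (placeholder)
        MNT=/mnt/img

        # Option 1: run setfiles from the build host, using the image's own
        # file_contexts and telling it the alternate root with -r
        setfiles -v -r "$MNT" \
            "$MNT/etc/selinux/targeted/contexts/files/file_contexts" "$MNT"

        # Option 2: chroot into the image and use the higher-level wrapper
        chroot "$MNT" /sbin/fixfiles -f relabel

        # either way, don't leave the flag file behind to trigger a second relabel at first boot
        rm -f "$MNT/.autorelabel"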

    Read the article

  • SharePoint Backup/Restore without stsadm

    - by Kevin
    Due to problems we found with restoring sites/site collections using stsadm (tasks generated from our workflows were not restored), we've taken a different route for backup/restore. We are planning a major customization to our SharePoint site and want to take a backup so we can roll back in case the install fails.

    In our System Testing (not production) environment, we've backed up the 12 hive, the virtual directories that IIS points at for SharePoint, and the SharePoint databases in SQL (using SQL Server to do the database backups). We have custom event handlers and workflows built with Visual Studio, and we deploy the DLLs to the GAC as version 2 (signed and versioned in Visual Studio). So when we deploy, the GAC contains 2 versions of the workflows: version 1 and version 2. During the deploy we use stsadm features to install/activate the workflows. We also go to each library and add the new version 2 workflows. This automatically sets the version 1 workflows to "Not Allow" new instances (which is what we want) and the version 2 workflows as active. Perfect so far.

    When we've completed the install, we then assume a failure and attempt to restore to the same machines (SharePoint on one server, SQL on another). We start by uninstalling the version 2 workflows from the GAC, reset IIS (to clear the cache of these version 2 workflow DLLs), restore the 12-hive and virtual directory folders, then restore the SQL databases. This is all just as manual as it reads; no stsadm here.

    All seems to work after the restore; it appears to have been successful, and the modifications we made to column names, data, etc. during the install are all reverted back to the original pre-install state. With one exception: when we run a workflow, it always fails, and the logs in the 12-hive indicate the workflow is still trying to use the version 2 DLL (a System.IO file-not-found error).

    We think we've backed up and restored all the moving pieces of SharePoint, but we're missing something here. Does anybody have any ideas why the version 2 workflow DLLs are still being referenced even though we restored all the folders and databases of SharePoint? Thanks, Kevin

    Read the article

  • Ballooning Mac OS X kernel_task and Wired memory usage. How to diagnose / fix?

    - by user28930
    I have a very strange issue which I'm having a hard time tracing to a root cause. I have a Mac Pro (2008, 8-core 2.8 GHz, 8800GT) with 14 GB of RAM (recently upgraded because of this issue!).

    When I boot my system and log in, vm_stat / top / Activity Monitor will show that kernel_task has about 150 MB allocated, and the machine has about 800 MB of Wired memory allocated. Even initially, 800 MB seems an awful lot of wired memory to be allocated with no applications running, but it gets worse. (NB: Wired is locked, unswappable memory.)

    After a very short time, sometimes triggered by something as simple as launching a terminal, kernel_task will balloon to 800-900 MB of real memory (RSIZE), and Wired memory will accelerate to 1.6 GB (implying that all the extra memory requests are for wired RAM in the kernel). If I quit everything (i.e. no running applications, bar an Activity Monitor or terminal to view top), there is no appreciable reduction in either kernel_task RSIZE or Wired memory usage. Going the opposite way and loading the system with tasks also shows that wired memory does not get reduced, and importantly that it is not reduced in preference to heavy swapping. If I log out and log back in again, it reduces a bit (450 MB kernel_task, 1.28 GB Wired), but not back to the starting point.

    I'm not running any wacky kexts, and furthermore, kextstat shows no huge memory allocations there; the largest is com.apple.nvidia.nv50hal at about 4 MB. The machine feels more sluggish overall when this has happened, unsurprisingly, because such a huge amount of RAM has been marked as non-pageable.

    So I have a few questions:

    1. Is there a good way to diagnose what has allocated all of this wired memory? It's often over 2 times the kernel_task size with no applications running. The real memory totals don't seem to add up; it seems there is a bunch of RAM that isn't being accounted for anywhere.
    2. What is happening to cause the kernel to suddenly require 6 times as much memory?
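    One place to look for "who owns the wired memory" on OS X is the kernel's own zone allocator statistics; a small sketch of the usual suspects (the head counts are arbitrary cut-offs):

        # kernel zone allocations -- unusually large zones show where wired memory is going
        sudo zprint | head -40

        # wired page count straight from the VM system (multiply by the page size vm_stat prints)
        vm_stat | grep "Pages wired"

        # per-kext footprint (Size/Wired columns are hex) to rule drivers in or out
        kextstat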

    Read the article

  • Powershell: Execute exe on remote server and capture output

    - by user364825
    I am trying to script the execution of an installer on remote web servers. The installer in question is also a Windows Service that hosts NServiceBus. When RDP'd into the server, the application is installed by the following command:

        &"$theInstaller" /install /serviceName:TheServiceName

    The installer prints output about its progress registering the service and connecting to the database to stdout, among other things. This works fine from an RDP session, but when I execute it remotely via PowerShell, I get a you-can't-do-this-over-the-network message whether I execute it directly or via Invoke-Command -ComputerName $theRemoteServer:

        System.IO.FileLoadException: Could not load file or assembly 'file://\\theRemoteServer\c$\thePath\AutoMapper.dll' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515) ---> System.NotSupportedException: An attempt was made to load an assembly from a network location which would have caused the assembly to be sandboxed in previous versions of the .NET Framework. This release of the .NET Framework does not enable CAS policy by default, so this load may be dangerous. If this load is not intended to sandbox the assembly, please enable the loadFromRemoteSources switch. See http://go.microsoft.com/fwlink/?LinkId=155569 for more information.

    (Note: I added an additional "\" to the path in the first line in order to get it to show up correctly in the preview on this site.)

    This DLL, and others, are loaded by the service, and the service's execution context cannot, apparently, be remotified. I have also tried using Invoke-WmiMethod, which does something, but it's not clear what, and the output from the installer is lost:

        Invoke-WmiMethod win32_process create '"$theInstaller" /install /serviceName:TheServiceName' -ComputerName $server

    (with and without cmd.exe /k before the installer reference):

        __GENUS          : 2
        __CLASS          : __PARAMETERS
        __SUPERCLASS     :
        __DYNASTY        : __PARAMETERS
        __RELPATH        :
        __PROPERTY_COUNT : 2
        __DERIVATION     : {}
        __SERVER         :
        __NAMESPACE      :
        __PATH           :
        ProcessId        :
        ReturnValue      : 9

    How does one remotely execute such an EXE and capture the output? Thanks!

    Read the article

  • Oracle’s Web Experience Management

    - by Christie Flanagan
    Today's guest post on Oracle's Web Experience Management comes from a member of our WebCenter Evangelist team, Noël Jaffré, a Principal Technologist based in France.

    Oracle's Web Experience Management (WEM) solution enables organizations to optimize the online channel for driving marketing and customer experience management success. It empowers business users to manage the web presence and create rich and engaging online experiences for customers and prospects. Oracle's WEM platform provides a framework to simplify the integration of Oracle, third-party and custom-built applications. This framework essentially allows the creation and integration of applications using one single business interface called the WEM interface. It includes the following:

    - Single sign-on access control for all integrated applications using the Central Authentication Service (CAS) component.
    - A single centralized administration window for user, role, and native application management, including site management.
    - Community server management and gadget server management, as well as management for partner integrated technologies.
    - A Representational State Transfer (REST) API for accessing WebCenter Sites data. REST services are supported on both Oracle WebCenter Sites and Oracle WebCenter Sites Satellite Server to leverage the satellite server cache. All REST requests are cached for web-consuming applications as well, for high-performance delivery of native applications on the mobile channel.

    Oracle WebCenter Sites' Web Experience Management environment enables organizations to deliver a compelling online experience to customers by simplifying the deployment and management of sophisticated and engaging websites. The WebCenter Sites platform automates the entire process of managing web content, including:

    - Authoring: Business users can easily contribute and manage web content in real time, with intuitive interfaces and drag-and-drop content authoring and layout capabilities designed for the non-technical user.
    - Contextual Content Targeting: Marketers are empowered to create and manage targeted campaigns with relevant recommendations and promotions based on the context of the visitor's session, such as his or her navigation history, user profile, language, location or other information shared during the visit.
    - Content Publishing and Deployment: It offers advanced multi-site management capabilities for departmental or regional sites, as well as strong multi-lingual and multi-locale content management. The remote satellite server caching infrastructure provides high-performance, distributed caching, tuned to deliver high-volume, targeted and multi-lingual sites.
    - Analytics and Optimization: Business users and marketers have the ability to measure the effectiveness of their online content and campaigns at a granular level. Editors and marketers can immediately determine whether a given article or promotion is relevant to a particular customer segment.
    - User-generated Content: Marketers can enable blogs, comments, rating and reviews on the website. All comments and reviews posted to the website can be moderated from the administrator interface, either manually or automatically, using filters, whitelists, blacklists or community-based moderation.
    - Personalized Gadget Dashboards: Site managers can deploy gadgets (small applications using web content) individually or as part of dashboards containing multiple gadgets. These gadget dashboards enable site visitors to create their own "MyPage" on a given site, where they can select and customize the gadgets that the site administrator has made available. Any gadget that conforms to the iGoogle/OpenSocial standard can be made available to site visitors, or gadgets can be created within the WEM interface.

    Oracle's WEM platform also provides a unique environment for the delivery of a rich, multichannel online experience for site visitors through its advanced management modules for mobile. With Oracle's WEM solution, it's easy to control branding and deliver a consistent message while repurposing web content for publication to mobile devices, kiosks and much more. This distinctive approach provides:

    - HTML5 Delivery: HTML5 delivery includes native support for adaptive design that responds to the user's screen resolution and orientation. The approach is less driven by the particular hardware and more driven by the user's interactions with the device; in other words, it takes both the screen interactions (cursor or touch) and the screen size and orientation into consideration.
    - A Unique Native Mobile Extension Environment for Contributors: From the WEM interface, a contributor can directly manage the mobile channel using the tooling already in place for driving the traditional web presence. This includes the mobile presentation, as well as mobile in-site editing, drag-and-drop page layout, and in-context recommendations and personalization.
    - Optimized REST APIs for High-Performance Content Delivery in Native Mobile Device Applications: WebCenter Sites' REST API uses the underlying HTTP methods (GET, POST, PUT, DELETE) to interact with resources. Resources support two types of input and output formats: XML and JSON. REST calls are customizable to optimize the interactions between the content repositories and the client applications. Caching is essential to decrease network load and improve the overall reliability and usability of the applications and user interactions; REST results are cached through the highly efficient Oracle WebCenter Sites caching architecture.

    Read the article

  • Take snapshot of drawing using FingerPaint

    - by Rashmi.B
    I am using MyView for drawing content on a canvas, based on the FingerPaint API demo app. I want to capture whatever I have written on the canvas, but when I use View v1 = myview.getRootView() it returns only the blank canvas and not the content. I want to save my drawing to the SD card. The following is my code; let me know what I need to change:

        v1 = myview.getRootView();
        System.out.println("v1 value = " + v1);
        v1.buildDrawingCache(true);
        v1.measure(MeasureSpec.makeMeasureSpec(0, MeasureSpec.UNSPECIFIED),
                   MeasureSpec.makeMeasureSpec(0, MeasureSpec.UNSPECIFIED));
        //v1.layout(0, 0, v1.getMeasuredWidth(), v1.getMeasuredHeight());
        v1.layout(0, 0, 100, 100);
        //Bitmap b = Bitmap.createBitmap(v1.getDrawingCache());
        myview.mBitmap = Bitmap.createBitmap(v1.getDrawingCache());
        System.out.println("BITMAP VALue = " + myview.mBitmap);

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        //b.compress(Bitmap.CompressFormat.JPEG, 40, bytes);
        File f = new File(Environment.getExternalStorageDirectory()
                + File.separator + "rashmitest.jpg");
        try {
            f.createNewFile();
            FileOutputStream fo = new FileOutputStream(f);
            fo.write(bytes.toByteArray());
        } catch (Exception e) {
            e.printStackTrace();
        }
        v1.setDrawingCacheEnabled(false);

    myview is an object of the class MyView, which extends View.

    Read the article

  • Issues installing new drivers

    - by Luke
    I have a Windows XP Home SP3 system that won't detect anything on USB. Everything works under an Ubuntu live session (booted off USB), and the USB keyboard and mouse work in the BIOS, so physically speaking I'm sure it's fine.

    I installed the SMBus drivers and the USB driver from the motherboard's website, and that went fine. If I plug anything in, Windows can detect the type of device it is (i.e. keyboard, mouse, flash drive, etc.) and sometimes even the name (i.e. Microsoft 5-button mouse), but it won't accept any drivers. I have tried putting the Windows CD in the drive, but that didn't help. I have scanned for viruses and run CHKDSK with no issues, and ran MemTest86 with no issues.

    I am limited to one PS/2 connection for input, so I'm using the keyboard and haven't tried Windows Update yet. A colleague suggested trying a new USB controller, so I put in a PCI one that only had drivers for 9x on its CD, so I assume XP has them built in. It goes through the Found New Hardware wizard but never actually finds drivers. I have also tried running SFC /SCANNOW and System Restore. SFC just flashes and goes away, making me believe there may be a hidden virus somewhere, but everything else seems to work, including MSE.

    I have reason to believe it's just an issue with detecting hardware, since even the USB controller card can't seem to find drivers, yet Windows can detect WHEN a USB device is connected. Has anyone else run into this, or have a suggestion short of re-installing Windows?

    Read the article

  • How to Buy an SD Card: Speed Classes, Sizes, and Capacities Explained

    - by Chris Hoffman
    Memory cards are used in digital cameras, music players, smartphones, tablets, and even laptops. But not all SD cards are created equal: there are different speed classes, physical sizes, and capacities to consider. Different devices require different types of SD cards. Here are the differences you'll need to keep in mind when picking out the right SD card for your device.

    Speed Class

    In a nutshell, not all SD cards offer the same speeds. This matters for some tasks more than it matters for others. For example, if you're a professional photographer taking photos in rapid succession on a DSLR camera and saving them in high-resolution RAW format, you'll want a fast SD card so your camera can save them as fast as possible. A fast SD card is also important if you want to record high-resolution video and save it directly to the SD card. If you're just taking a few photos on a typical consumer camera, or you're just using an SD card to store some media files on your smartphone, the speed isn't as important.

    Manufacturers use "speed classes" to measure an SD card's speed. The SD Association that defines the SD card standard specifies each class as a minimum sequential write speed (roughly the class number in MB/s) rather than an exact real-world speed. There are four speed classes: 10, 6, 4, and 2. 10 is the fastest, while 2 is the slowest. Class 2 is suitable for standard-definition video recording, while classes 4 and 6 are suitable for high-definition video recording. Class 10 is suitable for "full HD video recording" and "HD still consecutive recording." There are also two Ultra High Speed (UHS) speed classes, but they're more expensive and are designed for professional use; UHS cards are intended for devices that support UHS.

    You'll probably be okay with a class 4 or 6 card for typical use in a digital camera, smartphone, or tablet. Class 10 cards are ideal if you're shooting high-resolution videos or RAW photos. Class 2 cards are a bit on the slow side these days, so you may want to avoid them for all but the cheapest digital cameras; even a cheap smartphone can record HD video, after all.

    An SD card's speed class is identified on the card itself, and you'll also see it on the online store listing or on the card's packaging when purchasing. If you see no speed class symbol, you have a class 0 SD card. These cards were designed and produced before the speed class rating system was introduced, and may be slower than even a class 2 card.

    Physical Size

    Different devices use different sizes of SD cards: standard-size SD cards, miniSD cards, and microSD cards.

    Standard SD cards are the largest, although they're still very small. They measure 32x24x2.1 mm and weigh just two grams. Most consumer digital cameras for sale today still use standard SD cards, with the familiar "cut corner" design.

    miniSD cards are smaller than standard SD cards, measuring 21.5x20x1.4 mm and weighing about 0.8 grams. This is the least common size today. miniSD cards were designed to be especially small for mobile phones, but we now have an even smaller size.

    microSD cards are the smallest size of SD card, measuring 15x11x1 mm and weighing just 0.25 grams. These cards are used in most cell phones and smartphones that support SD cards, and in many other devices such as tablets.

    SD cards will only fit into matching slots. You can't plug a microSD card into a standard SD card slot; it won't fit. However, you can purchase an adapter that allows you to plug a smaller SD card into a larger SD card's form and fit it into the appropriate slot.

    Capacity

    Like USB flash drives, hard drives, solid-state drives, and other storage media, different SD cards can have different amounts of storage. But the differences between SD card capacities don't stop there. Standard SDSC (SD) cards are 1 MB to 2 GB in size, or perhaps 4 GB, although 4 GB is non-standard. The SDHC standard was created later and allows cards 2 GB to 32 GB in size. SDXC is a more recent standard that allows cards 32 GB to 2 TB in size.

    You'll need a device that supports SDHC or SDXC cards to use them. At this point, the vast majority of devices should support SDHC; in fact, the SD cards you have are probably SDHC cards. SDXC is newer and less common.

    When buying an SD card, you'll need to buy the right speed class, size, and capacity for your needs. Be sure to check what your device supports and consider what speed and capacity you'll actually need.

    Image Credit: Ryosuke SEKIDO on Flickr, Clive Darra on Flickr, Steven Depolo on Flickr

    Read the article

  • 613 threads limit on debian

    - by Joel
    When running this program, thread-limit.c, on my dedicated Debian server, the output says that my system can't create more than around 600 threads. I need to create more threads, and to fix my system's misconfiguration. Here is some information about the dedicated server:

        de801:/# uname -a
        Linux de801.ispfr.net 2.6.18-028stab085.5 #1 SMP Thu Apr 14 15:06:33 MSD 2011 x86_64 GNU/Linux

        de801:/# java -version
        java version "1.6.0_26"
        Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
        Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)

        de801:/# ldd $(which java)
        linux-vdso.so.1 =>  (0x00007fffbc3fd000)
        libpthread.so.0 => /lib/libpthread.so.0 (0x00002af013225000)
        libjli.so => /usr/lib/jvm/java-6-sun-1.6.0.26/jre/bin/../lib/amd64/jli/libjli.so (0x00002af013441000)
        libdl.so.2 => /lib/libdl.so.2 (0x00002af01354b000)
        libc.so.6 => /lib/libc.so.6 (0x00002af013750000)
        /lib64/ld-linux-x86-64.so.2 (0x00002af013008000)

        de801:/# cat /proc/sys/kernel/threads-max
        1589248

        de801:/# ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 794624
        max locked memory       (kbytes, -l) 32
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 10240
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 128
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) unlimited
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    Here is the output of the C program:

        de801:/test# ./thread-limit
        Creating threads ...
        Address of c = 1061520 KB
        Address of c = 1081300 KB
        Address of c = 1080904 KB
        Address of c = 1081168 KB
        Address of c = 1080508 KB
        Address of c = 1080640 KB
        Address of c = 1081432 KB
        Address of c = 1081036 KB
        Address of c = 1080772 KB
        100 threads so far ...
        200 threads so far ...
        300 threads so far ...
        400 threads so far ...
        500 threads so far ...
        600 threads so far ...
        Failed with return code 12 creating thread 637.

    Any ideas how to fix this, please?
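    The -stab kernel string suggests an OpenVZ/Virtuozzo container, where thread creation is often capped by the container's bean counters rather than by kernel.threads-max; a few hedged checks worth running (return code 12 is ENOMEM, so a memory-style limit is plausible):

        # inside an OpenVZ container, failcnt > 0 here shows which limit is actually being hit
        # (look at numproc, kmemsize and privvmpages in particular)
        cat /proc/user_beancounters

        # each pthread reserves a stack; an explicit per-shell value can be set before the test
        ulimit -s 256
        ./thread-limit

        # system-wide knobs that also bound the thread count
        cat /proc/sys/kernel/threads-max
        cat /proc/sys/vm/max_map_count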

    Read the article
