Search Results

Search found 33341 results on 1334 pages for 'script errors'.


  • Kernel Errors in logwatch

    - by Vince Pettit
    We have a dedicated server running CentOS and Plesk. We've had the following show up on our logwatch and wondered if it is anything we should worry about?

        --------------------- Kernel Begin ------------------------
        WARNING: Kernel Errors Present
        Northbridge Error, node 1K8 ECC ...: 1 Time(s)
        ---------------------- Kernel End -------------------------

    We've contacted the support team that we rent our server from, but they don't seem to want to help us without us paying their support team a fixed charge, and even then they can't guarantee they would be able to find a solution to any potential problems. Full log lines regarding the kernel error:

        Jun 16 19:45:25 server88-208-217-241 kernel: Northbridge Error, node 1<0>K8 ECC error.
        Jun 16 19:45:25 server88-208-217-241 kernel: EDAC amd64 MC1: CE ERROR_ADDRESS= 0x2a3d553e0
        Jun 16 19:45:25 server88-208-217-241 kernel: EDAC MC1: CE page 0x2a3d55, offset 0x3e0, grain 0, syndrome 0x5041, row 3, channel 0, label "": amd64_edac
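    The log shows a single correctable (CE) ECC event reported by the EDAC driver, which is generally something to monitor rather than panic over; repeated correctable errors on the same node usually point to a failing DIMM. As a minimal sketch (assuming the EDAC sysfs interface is present on this kernel, and that the optional edac-utils package is installed for the last command), the running totals can be checked like this:

        # correctable and uncorrectable error counters kept by the kernel
        grep . /sys/devices/system/edac/mc/mc*/ce_count 2>/dev/null
        grep . /sys/devices/system/edac/mc/mc*/ue_count 2>/dev/null
        # same information, labelled per DIMM, if edac-utils is installed
        edac-util -v

    If the correctable-error count keeps climbing, reseating or replacing the DIMM on the reported node is the usual next step.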

    Read the article

  • Standalone WLST for both WebLogic 8.1 and 9.2?

    - by imiric
    Hi, I'm writing a simple script to facilitate changing JDBC connection URLs in several WL environments, among them both v8.1 and v9.2. I want to create a standalone script, outside of any WL installation, including just wlst.jar/jython.jar/weblogic.jar, that will work on both WL 8.1 and 9.2 (obviously by referencing different MBeans). Now, this works OK for WL 8.1. I copy weblogic.jar from the server, and have managed to get hold of both wlst.jar and jython.jar (it wasn't easy, Oracle doesn't host them anymore). I also need to make sure to run locally under the same JRE as the server (WL 8.1 runs on Java 1.4.2). But if I try to connect to WL 9.2 from this setup, I get a NullPointerException when trying to access any MBean (probably because I'm running on JRE 1.4.2 and WL 9.2 uses 1.5.0). Also, I am unable to create a standalone environment for WL 9.2. If I copy weblogic.jar from 9.2 and run WLST like so:

        java -cp "wlst.jar:jython.jar:weblogic-92.jar" weblogic.WLST

    I get a java.lang.NoClassDefFoundError: weblogic/management/configuration/RepositoryMBean error. I can't find this class in weblogic92/server/lib, but it IS inside weblogic.jar from WL 8.1. So I'm really losing my patience here... Is there any way to create a standalone WLST client that can connect to any version of WebLogic (8.1 and 9.2 for now)? I really wouldn't want to have to ssh into the WL environment to run my WLST script... Any ideas/suggestions are welcome. Thanks, Ivan
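    Mixing jars from both releases on one classpath is what usually triggers that NoClassDefFoundError, so one workable pattern is to keep a separate JRE and jar set per WebLogic version and have a thin wrapper pick the right pair. A hypothetical sketch only (the JDK paths, jar layout and change_url.py script are placeholders, not verified against either installation; in 9.x the WLST classes should already ship inside weblogic.jar, so the separate wlst.jar/jython.jar may not be needed there at all):

        #!/bin/sh
        WL81_JAVA=/opt/jdk1.4.2/bin/java
        WL92_JAVA=/opt/jdk1.5.0_22/bin/java

        case "$1" in
          8.1) "$WL81_JAVA" -cp "wl81/wlst.jar:wl81/jython.jar:wl81/weblogic.jar" weblogic.WLST change_url.py ;;
          9.2) "$WL92_JAVA" -cp "wl92/weblogic.jar" weblogic.WLST change_url.py ;;
          *)   echo "usage: $0 8.1|9.2" >&2; exit 1 ;;
        esac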

    Read the article

  • KeepLevelReg settings to eliminate sync prompts - errors occurred while Windows was synchronizing your data

    - by Detritus Maximus
    We have 2 XP Pro VMs (Citrix) that both have problems with sync prompts ("errors occurred while Windows was synchronizing your data") appearing during logout; users are closing the RDC session before these appear. The Microsoft solution involves creating the KeepProgressLevel registry entry with a value of 1 for "pause on errors." I have implemented this across the domain for this problem, yet these 2 VMs continue to show the prompts. Today, I experimented by changing the KeepProgressLevel value to 0. This is not one of the options given by MS, yet I stopped getting the prompts. Can anyone tell me what I've done by setting the value to 0? Have I basically turned the feature off, as if the KeepProgressLevel entry were gone? If so, why no more prompts? I did notice a red X and error message during logoff, yet no prompt.

    Read the article

  • Group policy to disable notifications of particular errors?

    - by resolver101
    How do I disable notifications for particular errors? A little background to my issue: during the installation of Kaspersky, it disables the Windows firewall for all profiles except the domain profile. I have remedied this by creating an offline policy in Kaspersky which enables the Kaspersky firewall when out of the office (i.e. not connected to the office network). The problem now is that users in the office see a notification saying the firewall is disabled, even though it's enabled in all scenarios; it's just that the Work and Home profiles show as disabled while the clients are connected to the office LAN. I've looked into the notifications and you can disable them entirely (not recommended), but I don't want to do this in case other relevant messages are stopped from being displayed. http://blogs.technet.com/b/networking/archive/2010/12/16/disabling-firewall-alerts-in-the-action-center.aspx

    Read the article

  • In a shell script, check the version of an installed package and make a decision based on the output

    - by DJDarkViper
    Looking to write a cross-distro / cross-version shell script that makes sure a specific version of PHP is installed. Example: Ubuntu 12.04 has 5.3, Ubuntu 13.10 has 5.5, Debian 7 has 5.4. I need this script, when run on a distro that has an old version of PHP, to update the repo to point to a package for 5.4, and if the distro has too new a version, to downgrade to 5.4 appropriately. I'm still not entirely sure of the technical limits of what you can do in a shell, and I'll be perfectly frank that I'm still not totally used to the existing tools. The best I can think of at the moment is:

        php -v | grep "PHP 5"

    but that returns a bunch of granular characters that can change (PHP 5.4.4-14+deb7u5 (cli) (built: Oct 3 2013 09:24:58)). I'm not sure what to pipe to after this to extract the characters I'm interested in. I'm not sure if I'm being totally clear, and I'm not sure how to ask this... Basically, in an automated shell script for Linux distros, how do I extract the PHP version (preferably just the version number) and make a decision based on that output?

    EDIT: This line ended up doing pretty dang good:

        php -v | grep "PHP 5" | sed 's/.*PHP \([^-]*\).*/\1/' | cut -c 1-3

    A bit long in the tooth, but it gives me "5.3", "5.4", and "5.5", which is exactly what I need to work with.
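    Another hedged sketch that avoids parsing the banner text at all: ask PHP itself for its version via the built-in PHP_MAJOR_VERSION and PHP_MINOR_VERSION constants, then branch on the result (the echo lines are placeholders for the actual repository changes):

        #!/bin/sh
        PHPVER=$(php -r 'echo PHP_MAJOR_VERSION . "." . PHP_MINOR_VERSION;')
        case "$PHPVER" in
          5.4) echo "PHP 5.4 already installed - nothing to do" ;;
          5.3) echo "older than 5.4 - add the repo carrying 5.4 and upgrade" ;;
          5.5) echo "newer than 5.4 - pin/downgrade to 5.4" ;;
          *)   echo "unexpected PHP version: $PHPVER" >&2; exit 1 ;;
        esac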

    Read the article

  • Compiling on Ubuntu Server: "libQtGui.so: undefined reference to png functions" errors

    - by Kowalikus
    I want to compile wkhtmltopdf on Ubuntu Server, but I have a problem with the following errors:

        /usr/lib/libQtGui.so: undefined reference to `png_read_info@PNG12_0'
        /usr/lib/libQtGui.so: undefined reference to `png_set_gAMA@PNG12_0'
        /usr/lib/libQtGui.so: undefined reference to `png_set_PLTE@PNG12_0'
        ...
        /usr/lib/libQtGui.so: undefined reference to `png_create_info_struct@PNG12_0'
        /usr/lib/libQtGui.so: undefined reference to `png_set_bgr@PNG12_0'
        /usr/lib/libQtGui.so: undefined reference to `png_get_valid@PNG12_0'

    What can I do? In /usr/lib:

        lrwxrwxrwx 1       17 2010-02-17 15:00 libQtGui.so -> libQtGui.so.4.5.2
        lrwxrwxrwx 1       17 2010-02-17 14:59 libQtGui.so.4 -> libQtGui.so.4.5.2
        lrwxrwxrwx 1       17 2010-02-17 14:59 libQtGui.so.4.5 -> libQtGui.so.4.5.2
        -rw-r--r-- 1 10071604 2009-10-14 23:34 libQtGui.so.4.5.2
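    The PNG12_0 versioned symbols suggest libQtGui was built against the distro's libpng12, while the link is resolving a different libpng, often a newer or locally built copy that lacks those version tags. A few hedged diagnostics (the paths are common defaults, not verified for this box):

        # which libpng the Qt library actually resolves at runtime
        ldd /usr/lib/libQtGui.so | grep -i png
        # which libpng libraries the linker cache knows about
        ldconfig -p | grep libpng
        # a locally built libpng shadowing the distro one is a common culprit
        ls /usr/local/lib | grep -i png

    If a stray libpng turns up in /usr/local/lib, removing it (or making the wkhtmltopdf build link against the distro's libpng12-dev) is the usual fix.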

    Read the article

  • Clone MySQL DB - errors with CREATE VIEW/SHOW VIEW privileges

    - by user43537
    Running MySQL 5.0.32 on Debian 4.0 (Etch). I'm trying to clone a WordPress MySQL database completely (structure and data) on the same server. I tried a dump to an .sql file and an import into a new empty database from the command line, but the import fails with errors saying the user does not have the "SHOW VIEW" or "CREATE VIEW" privilege. Trying it with PHPMyAdmin doesn't work either. I also tried doing this with the MySQL root user (not named "root" though) and it shows an "Access Denied" error. I'm terribly confused as to where the problem is. Any pointers on cloning a MySQL DB and granting all privileges to a user account would be great (specifically for MySQL 5.0.32). Thanks!
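    A hedged outline of the usual two-step fix: grant the missing view privileges to the account doing the dump and import, then clone with mysqldump. The user, database, and file names below are placeholders, not taken from the question:

        # as an account allowed to GRANT (often the root-equivalent user):
        mysql -u root -p -e "GRANT SHOW VIEW ON wordpress.* TO 'wp_user'@'localhost';"
        mysql -u root -p -e "GRANT ALL PRIVILEGES ON wordpress_copy.* TO 'wp_user'@'localhost';"
        mysql -u root -p -e "FLUSH PRIVILEGES;"

        # dump structure and data, then load into the new (empty) database:
        mysqldump -u wp_user -p wordpress > wordpress.sql
        mysql -u wp_user -p wordpress_copy < wordpress.sql

    The "Access Denied" seen even with the root-equivalent account may simply mean that account lacks the GRANT OPTION or SUPER privilege needed to recreate the views' DEFINER clauses, which is worth confirming with SHOW GRANTS.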

    Read the article

  • trigger script on postfix delivery errors

    - by edovino
    I'm trying to get postfix to run a script on soft (4xx) and hard (5xx) delivery errors, but I'm not sure where to start. If I understand things correctly, I could insert (pipe-based) filters in the master.cf file; there's a whole 'milter' infrastructure available; and finally I suppose I could simply grep through the mail.info logs. So - any advice? Should I go the 'handle it via master.cf' route, and if so, which daemon should I intercept? 'bounce'? The grep-the-logs route is probably simplest, but I can't help feeling that there is a better way. Any advice appreciated!
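    For the log-watching route, Postfix records each delivery attempt with a status= field, so a minimal sketch (assuming Debian-style logging to /var/log/mail.info; the handler script path is hypothetical) looks like this:

        grep -E 'status=(bounced|deferred)' /var/log/mail.info | while read -r line; do
            /usr/local/bin/handle-delivery-error.sh "$line"
        done

    Running something like this from cron against the new log lines (or via tail -F for near-real-time handling) sidesteps master.cf changes entirely.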

    Read the article

  • Lost Page Write I/O Errors on CentOS LVM setup

    - by Gregg Leventhal
    I have a CentOS 6 box with LVM set up, and one of the PVs is a USB disk (I know). It is getting the error:

        Oct 30 10:57:07 alpha01 kernel: lost page write due to I/O error on dm-3
        Oct 30 10:57:07 alpha01 kernel: Buffer I/O error on device dm-3, logical block 4

    which is causing problems with all of the LVs on it. pvs shows the PV as "unknown device". I can ls the logical volumes and they show up in lvdisplay, but first I get a bunch of I/O errors. I made sure the cables between the USB drive and the server are secure. What should I do to get this back up and running in the meantime? Should I unmount each LV and run fsck.ext4 on each one, like fsck.ext4 -y /dev/vg1/lv_logvolname?
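    A cautious sketch of the recovery sequence (the device and LV names are the ones from the question; a read-only pass first means nothing is modified before a backup exists):

        pvs -o +pv_uuid                        # confirm which PV shows as "unknown device"
        umount /dev/vg1/lv_logvolname
        fsck.ext4 -n /dev/vg1/lv_logvolname    # -n: report-only pass, changes nothing
        # only once the read-only pass looks sane (and data is backed up):
        # fsck.ext4 -y /dev/vg1/lv_logvolname

    If the USB disk keeps throwing lost-page-write errors, migrating its extents to a more reliable PV with pvmove is probably the longer-term answer.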

    Read the article

  • SSH to remote host (edgemarc 4200 or 4500 series routers) and pull arp data

    - by MaQleod
    I've been trying to think of a method to do this for days, but have not come up with anything yet. Ideally, this is what I'm looking to do: from a Windows XP machine, I need to open an SSH connection to a remote host, send the arp command, and pull the text results of the command back for use on the client. I will need to parse this data and preferably produce a 2D array of IPs and MAC addresses. There will be no shared keys; this is all done with a username and password that will always be different, and they will need to be fed into the command via variables pulled from a database by an AutoIt script, based on the WAN IP of the remote host. Now, the actual parsing of the data and creation of the array will be easy if I can just get the text of the arp table. Is there any way to SSH to a remote host, run a command and return the data from that command to the client in a batch script or Perl script? (It is OK if it writes the text to a file; I can read it out of the file later, I just need it to get to the client.)
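    One hedged option is PuTTY's command-line client plink.exe, which runs a single remote command and prints its output on stdout, so the result can be redirected to a file for the AutoIt script to parse. The username, password and address below are placeholders for the values pulled from the database:

        plink.exe -ssh -batch -l USERNAME -pw PASSWORD 203.0.113.10 "arp" > arp_output.txt

    The -batch flag stops plink from waiting on interactive prompts; note that the first connection to a new host normally asks to cache the host key, which has to be handled (or pre-seeded) for a fully unattended run.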

    Read the article

  • Configure akamai to ignore favicon errors [on hold]

    - by Aki
    We have hosted our services through Akamai and have configured an alert in Akamai to notify us of 404 errors. We don't want to serve a favicon from our services (as they are REST webservices which are not consumed by humans, hence no point in serving favicons). But whenever these webservices are accessed from a browser, the browser sends a request for the favicon, which ends up being logged as a 404, and Akamai sends us an alert for it. Is there a way to configure Akamai so that it understands that favicon 404s should not contribute to the alert?

    Read the article

  • Nginx Tornado Combination Causing 502 Bad Gateway Errors

    - by PlaidFan
    We are facing a problem with inconsistent 502 errors, and tracking down the reasons has been a very frustrating exercise. We can reproduce the problem by sending several simultaneous requests quickly. The problem is that "several" is only in the range of 10 to 20 requests within 5 seconds (not a typo), so clearly this type of load should be handled easily. We really like the Nginx + Tornado approach but are considering going to a more traditional (e.g. threading) approach because this problem has been very difficult to solve. I was wondering if you (a) know how to fix this issue and (b) know how we can track down the culprit(s). The log files simply identify the connection being refused. We have the same problem as this post: How do I debug a HTTP 502 error? But there is no answer provided on how to solve the problem, so I'm hoping you can help, because this may be a common issue with this type of setup. Thanks in advance, Paul

    Read the article

  • Intermittent HTTP 401 errors

    - by forthrin
    I am using an intranet solution which requires basic HTTP login. However, there is an intermittent error which requires me to log in again, and then the server says "Forbidden" whether I give the correct login information or not. To add insult to injury, Safari (and Chrome) seems to show the login dialog for every included resource in the HTML, and it's impossible to cancel this modal dialog sequence, so the whole browser is blocked until I've pressed Esc some 30-odd times. After an hour, I may gain access again, without having really done anything. My questions: What could cause intermittent 401 errors? Why do the browsers show the login dialog 30 times per page load (presumably once for every included resource in the HTML from the same domain)?
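    When this happens, it can help to take the browser out of the equation and look at the raw response; a hedged example with curl (the URL and credentials are placeholders):

        curl -v -u myuser:mypass http://intranet.example.com/protected/page

    The -v output shows whether the server is actually answering 401 (credentials rejected, WWW-Authenticate header present) or 403 (authenticated but forbidden), which usually narrows the cause down to an expiring account or session on the server side versus an intermediate proxy mangling the Authorization header.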

    Read the article

  • IIS 7.5 with PHP 5.3, displaying errors on page

    - by dreamlax
    I'm running Windows Server 2008 R2, with IIS 7.5 and PHP 5.3 (configured by FastCGI). In my php.ini I have:

        log_errors = On
        display_errors = Off
        error_log = syslog   (also tried an actual file with appropriate permissions)

    Each time a page contains an error, it is never logged anywhere, but it is displayed on the page (unless I turn log_errors off). I'm guessing that the stderr from php-cgi.exe is being put on the page, instead of being logged where it is supposed to be. Is there a setting somewhere that allows me to log these errors properly?
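    This matches the symptom associated with the fastcgi.logging php.ini directive: when it is left at its default of 1, PHP writes errors to the FastCGI stderr stream, and IIS's FastCGI module can end up surfacing that text in the response rather than leaving it to error_log. A hedged sketch of the commonly recommended settings for PHP under IIS FastCGI (the log path is a placeholder):

        ; php.ini
        log_errors = On
        display_errors = Off
        error_log = "C:\inetpub\temp\php-errors.log"
        fastcgi.logging = 0

    Recycle the application pool (or run iisreset) after changing php.ini so FastCGI picks up the new settings.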

    Read the article

  • IIS + PHP + Page with lots of images = Intermittent 403 errors

    - by samJL
    I am using an up-to-date Server 2008 R2 Datacenter, running IIS 7.5 and PHP 5.3.6/FastCGI.
    On PHP pages with lots of images (60+), some of the images fail to load.
    It is not always the same images -- on each page refresh an image that worked previously may not load, while an image that did not now does.
    Looking at the Net tab in Firebug reveals that the failing image requests are 403 errors.
    All of the images are located on the server in question, and the images directory has the correct permissions.
    I believe this problem is the result of a limit on requests.
    All of my attempts at researching this problem point to the maxConnections setting in IIS, yet mine is set at the highest/default of 4294967295 (maxBandwidth too).
    I am also running a ColdFusion site on the same IIS installation, and it does not suffer from 403s on pages with lots of images.
    I am left thinking that there is another connection limit (in PHP or FastCGI?) overriding the IIS connection limit.
    I don't see anything that looks like a request limit in the php.ini -- what am I missing?
    Any help would be appreciated, thank you.

    Read the article

  • How to avoid duplicates when copying files that have been renamed at the destination

    - by Benoitt
    I have to get pictures, with their extensions, from a folder whose subfolders are updated automatically. These files have to be copied into a folder where a PHP-based website will edit them (by renaming them and creating an XML file) so they can be downloaded and integrated into an XML feed. Because of the rename function of the script, when I perform the copy again all the files are duplicated, since the script has already renamed the original ones. I've tried a few things with rsync, but I'm looking for something more powerful, because I can't copy files with an external "history".

        #!/bin/bash
        find '/home/name/picture' -name '*.jpg' | while read FILE ; do
            rsync --backup --backup-dir=incremental --suffix=.old "$FILE" /var/www/media ;
        done
        wget --spider 'http://myscript.php' ;
        #exit 0

    PS: As a little addition, I'd like to replace '.' with a space just after the *.jpg copy. My PHP script has some problems handling files with commas because of the extension. I'm thinking about a find command, like I did before, combined with a sed function? Is that a good idea?
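    A hedged alternative is to keep that "history" yourself: record every source path that has already been copied in a plain text file, and skip anything listed there, so renaming at the destination no longer causes re-copies. The source and destination paths and the wget call come from the original script; /var/www/copied.list is a hypothetical location for the history file:

        #!/bin/bash
        HISTORY=/var/www/copied.list
        touch "$HISTORY"
        find /home/name/picture -name '*.jpg' | while read -r FILE; do
            # -x matches the whole line, -F treats the path literally
            if ! grep -qxF "$FILE" "$HISTORY"; then
                cp "$FILE" /var/www/media/
                echo "$FILE" >> "$HISTORY"
            fi
        done
        wget --spider 'http://myscript.php'

    The history file only ever grows, which is usually fine for a picture feed; pruning entries whose source files have disappeared could be added later if it becomes large.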

    Read the article

  • SUSE linux nginx and phpmyadmin 404 errors

    - by user968808
    I've installed nginx and phpMyAdmin etc. and most things work fine. I'm getting random problems though... e.g. when I click Drop I get a message on the screen to say it's updating, and then the page does not refresh afterwards. I also randomly get 404 errors if I use "check all" when trying to delete rows out of a database, or if I import a file of more than around 10-20 rows I get "404 page not found". Where have I gone wrong?

    Read the article

  • WD: UDMA CRC Errors and Reallocated Sector

    - by Leo White
    I have a WD Caviar Black 1TB (WD1001FALS) and according to SMART I have:

        one "Reallocated Sector"
        one "Reallocated Event"
        26 "UDMA CRC Errors"

    on my drive, but it's a "Pass" for the "SMART overall-health self assessment test". I think it's because of these that I'm having problems with GRUB, and thus can't boot into any OS at all. Are these problems serious? According to "Warranty Services", my drive is still "In Limited Warranty". Would I be eligible for a replacement? FYI: I'm running Ubuntu 11.10. Any help will be appreciated. Thanks
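    As context for judging severity: reallocated sectors live on the platters (the drive has remapped a bad sector), while UDMA CRC errors count corruption on the SATA link and usually point at the cable or connector rather than the disk itself. The raw counters can be pulled with smartmontools (the device name is an example):

        sudo smartctl -a /dev/sda | grep -E 'Reallocated_Sector|Reallocated_Event|UDMA_CRC'

    One reallocated sector with a passing overall assessment is rarely enough for an RMA on its own; a steadily climbing reallocation count is the number to watch.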

    Read the article

  • Drive still usable if Seatools reports errors?

    - by Rob
    I have a Seagate 3TB Expansion Desktop drive that was part of a Linux RAID 6 array that failed. I eventually did a zero fill, both through Seagate DiscWizard and via Linux dd; neither reported errors. When I ran SeaTools afterwards, I got:

        Short DST - Started 5/31/2014 10:04:36 PM
        Short DST - Pass 5/31/2014 10:05:37 PM
        Long Generic - Started 5/31/2014 10:15:19 PM
        Bad LBA: 518242762 Not Repaired
        (whole bunch of bad LBAs omitted)
        Bad LBA: 518715255 Not Repaired
        Long Generic Aborted 6/1/2014 3:12:18 AM

    i.e. the short test passed, the long test failed. Unfortunately, the drive is out of warranty, so I can't just RMA it. But I hate tossing a drive that can still be used. So, my questions are: If the zero fill succeeded and the short test passed, can I still use the whole drive? If not, since I'm using LVM on top of RAID, is there a way to tell either of these to just skip the bad area? If not the above, can I just create partitions before and after the part of the drive with the bad LBAs?
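    For the last option, the reported bad LBAs can be treated as sector numbers and simply left out of the partition table. A sketch only, assuming 512-byte logical sectors and a fresh GPT label (this destroys whatever is on the disk), with a generous margin around the reported range of roughly 518,242,762 to 518,715,255:

        parted -s /dev/sdX mklabel gpt
        # everything before the bad area (stop well below sector 518242762)
        parted -s /dev/sdX mkpart data1 2048s 518000000s
        # everything after the bad area (resume well above sector 518715255)
        parted -s /dev/sdX mkpart data2 519000000s 100%

    Whether the drive is worth trusting at all afterwards is a separate question; bad sectors that show up right after a clean zero fill tend to keep spreading.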

    Read the article

  • random hard disk errors

    - by AugB
    For the past 2 years or so (4-year-old custom build) I've been getting random moments where everything stops responding (or takes a very long time to respond), followed by I/O and "hdd not detected" errors on restart. To fix it, all I usually need to do is unplug my SATA cables from the hdd and mobo and plug them back in again, and the problem disappears, at least for a little while (anywhere from a day to a few months). Sometimes even a startup repair does the job. I've done multiple reformats and have also run chkdsk more times than I can remember, and neither seems to help in the long run. Both drives seem to be exhibiting the same problem. Have both my hdds been "dying" for the past couple of years, even though they are fully functional besides these occasional hiccups? Does the issue lie elsewhere? All feedback is appreciated.

    System specs:
        Biostar TPower i45 mobo
        2x WD Caviar 640GB hdds
        Zalman 750W psu
        Radeon 5870 gpu
        2x 2GB G.Skill DDR2 ram
        Win7 64

    Read the article

  • MySql in Bash: Show only errors

    - by TRWTFCoder
    Let me first start off by saying I am not an experienced Linux user. I am trying to debug a MySQL script on Linux; however, my issue is that most of the queries are successful, so I cannot see the error messages because they scroll off the screen. I am executing the queries from a large file using the \. command. I was wondering if there is a way to show ONLY the error messages when I execute the SQL file. Right now it is showing both error messages and "Query OK, ...". I don't really care about the queries that are OK, just the errors. Thanks!
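    A minimal sketch of the usual answer: feed the file to the mysql client non-interactively instead of using \. inside the shell. Normal output goes to stdout and error messages go to stderr, so discarding stdout leaves only the errors on screen (the user, database, and file names are placeholders):

        mysql -u myuser -p mydatabase < big_script.sql > /dev/null
        # or keep the errors in a file and continue past them with --force:
        mysql -u myuser -p --force mydatabase < big_script.sql > /dev/null 2> errors.log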

    Read the article

  • Transient mysqlcheck errors about "size of datafile" (file too small)

    - by Adam Backstrom
    Running mysqlcheck on a live database is giving me transient errors like this one:

        mydatabase.mytable
        error    : Size of datafile is:   500719688   Should be: 501000484
        error    : Corrupt

    When I run the command again or check the table one-off using mysql, it's listed as OK. Is this just a side effect of running checks on live tables? Is it possible that data is not flushed, hence the strange discrepancy? We moved several databases this morning by shutting down mysqld on the source and rsyncing files across to the new server, but these are all MyISAM tables so I don't believe the two things are related. (But I mention it just in case.)
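    The "data is not flushed" theory is easy to test: flush the table first so the MyISAM data file header on disk is current, then check it again. A hedged sketch using the names from the error message (credentials omitted):

        mysql mydatabase -e "FLUSH TABLES mytable; CHECK TABLE mytable;"

    If the size mismatch disappears after an explicit FLUSH TABLES, it was almost certainly just unflushed MyISAM header information on a live table rather than real corruption.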

    Read the article

  • PowerShell Script to Deploy Multiple VM on Azure in Parallel #azure #powershell

    - by Marco Russo (SQLBI)
    This blog is usually dedicated to Business Intelligence and SQL Server, but I couldn't easily find simple PowerShell scripts on the web to help me deploy a number of virtual machines on Azure that I use for testing and development. Since I need to deploy, start, stop and remove many virtual machines created from a common image I created (you know, Tabular is not part of the standard images provided by Microsoft…), I wanted to minimize the time required to execute every operation from my Windows Azure PowerShell console (though I suggest using Windows PowerShell ISE), so I also wanted to fire the commands as soon as possible in parallel, without losing the results in the console.

    In order to execute multiple commands in parallel, I used the Start-Job cmdlet, and with Get-Job and Receive-Job I wait for job completion and display the messages generated during background command execution. This technique allows me to reduce execution time when I have to deploy, start, stop or remove virtual machines. Please note that a few operations on Azure acquire an exclusive lock and cannot really be executed in parallel, but only part of their execution time is subject to this lock. Thus, you obtain a better response time in these scenarios too (this is the case for the provisioning of a new VM).

    Finally, when you remove the VMs you still have the disks containing the virtual machines to remove. This cannot be done just after the VM removal, because you have to wait until the removal operation has completed on Azure. So I wrote a script that you run a few minutes after removing the VMs; it deletes the disks (and VHDs) no longer related to a VM. I just check that the disks were associated with the original image name used to provision the VMs (so I don't remove other disks deployed by other batches that I might want to preserve).

    These examples are specific to my scenario; if you need more complex configurations you have to change and adapt the code. But if your need is to create multiple instances of the same VM running in a workgroup, these scripts should be good enough. I prepared the following PowerShell scripts:

    ProvisionVMs: Provision many VMs in parallel starting from the same image. It creates one service for each VM.
    RemoveVMs: Remove all the VMs in parallel - it also removes the service created for the VM.
    StartVMs: Start all the VMs in parallel.
    StopVMs: Stop all the VMs in parallel.
    RemoveOrphanDisks: Remove all the disks no longer used by any VMs. Run this script a few minutes after the RemoveVMs script.
    ProvisionVMs

        # Name of subscription
        $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

        # Name of storage account (where VMs will be deployed)
        $StorageAccount = "Copy the Label property you get from Get-AzureStorageAccount"

        function ProvisionVM( [string]$VmName )
        {
            Start-Job -ArgumentList $VmName {
                param($VmName)
                $Location = "Copy the Location property you get from Get-AzureStorageAccount"
                $InstanceSize = "A5" # You can use any other instance, such as Large, A6, and so on
                $AdminUsername = "UserName" # Write the name of the administrator account in the new VM
                $Password = "Password"      # Write the password of the administrator account in the new VM
                $Image = "Copy the ImageName property you get from Get-AzureVMImage"
                # You can list your own images using the following command:
                # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }
                New-AzureVMConfig -Name $VmName -ImageName $Image -InstanceSize $InstanceSize |
                    Add-AzureProvisioningConfig -Windows -Password $Password -AdminUsername $AdminUsername |
                    New-AzureVM -Location $Location -ServiceName "$VmName" -Verbose
            }
        }

        # Set the proper storage - you might remove this line if you have only one storage in the subscription
        Set-AzureSubscription -SubscriptionName $SubscriptionName -CurrentStorageAccount $StorageAccount

        # Select the subscription - this line is fundamental if you have access to multiple subscriptions
        # You might remove this line if you have only one subscription
        Select-AzureSubscription -SubscriptionName $SubscriptionName

        # Every line in the following list provisions one VM using the name specified in the argument
        # You can change the number of lines - use a unique name for every VM - don't reuse names
        # already used in other VMs already deployed
        ProvisionVM "test10"
        ProvisionVM "test11"
        ProvisionVM "test12"
        ProvisionVM "test13"
        ProvisionVM "test14"
        ProvisionVM "test15"
        ProvisionVM "test16"
        ProvisionVM "test17"
        ProvisionVM "test18"
        ProvisionVM "test19"
        ProvisionVM "test20"

        # Wait for all to complete
        While (Get-Job -State "Running") {
            Get-Job -State "Completed" | Receive-Job
            Start-Sleep 1
        }

        # Display output from all jobs
        Get-Job | Receive-Job

        # Cleanup of jobs
        Remove-Job *

        # Displays batch completed
        echo "Provisioning VM Completed"

    RemoveVMs

        # Name of subscription
        $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

        function RemoveVM( [string]$VmName )
        {
            Start-Job -ArgumentList $VmName {
                param($VmName)
                Remove-AzureService -ServiceName $VmName -Force -Verbose
            }
        }

        # Select the subscription - this line is fundamental if you have access to multiple subscriptions
        # You might remove this line if you have only one subscription
        Select-AzureSubscription -SubscriptionName $SubscriptionName

        # Every line in the following list removes one VM using the name specified in the argument
        # You can change the number of lines - use a unique name for every VM - don't reuse names
        # already used in other VMs already deployed
        RemoveVM "test10"
        RemoveVM "test11"
        RemoveVM "test12"
        RemoveVM "test13"
        RemoveVM "test14"
        RemoveVM "test15"
        RemoveVM "test16"
        RemoveVM "test17"
        RemoveVM "test18"
        RemoveVM "test19"
        RemoveVM "test20"

        # Wait for all to complete
        While (Get-Job -State "Running") {
            Get-Job -State "Completed" | Receive-Job
            Start-Sleep 1
        }

        # Display output from all jobs
        Get-Job | Receive-Job

        # Cleanup
        Remove-Job *

        # Displays batch completed
        echo "Remove VM Completed"

    StartVMs

        # Name of subscription
        $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

        function StartVM( [string]$VmName )
        {
            Start-Job -ArgumentList $VmName {
                param($VmName)
                Start-AzureVM -Name $VmName -ServiceName $VmName -Verbose
            }
        }

        # Select the subscription - this line is fundamental if you have access to multiple subscriptions
        # You might remove this line if you have only one subscription
        Select-AzureSubscription -SubscriptionName $SubscriptionName

        # Every line in the following list starts one VM using the name specified in the argument
        # You can change the number of lines - use a unique name for every VM - don't reuse names
        # already used in other VMs already deployed
        StartVM "test10"
        StartVM "test11"
        StartVM "test12"
        StartVM "test13"
        StartVM "test14"
        StartVM "test15"
        StartVM "test16"
        StartVM "test17"
        StartVM "test18"
        StartVM "test19"
        StartVM "test20"

        # Wait for all to complete
        While (Get-Job -State "Running") {
            Get-Job -State "Completed" | Receive-Job
            Start-Sleep 1
        }

        # Display output from all jobs
        Get-Job | Receive-Job

        # Cleanup
        Remove-Job *

        # Displays batch completed
        echo "Start VM Completed"

    StopVMs

        # Name of subscription
        $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

        function StopVM( [string]$VmName )
        {
            Start-Job -ArgumentList $VmName {
                param($VmName)
                Stop-AzureVM -Name $VmName -ServiceName $VmName -Verbose -Force
            }
        }

        # Select the subscription - this line is fundamental if you have access to multiple subscriptions
        # You might remove this line if you have only one subscription
        Select-AzureSubscription -SubscriptionName $SubscriptionName

        # Every line in the following list stops one VM using the name specified in the argument
        # You can change the number of lines - use a unique name for every VM - don't reuse names
        # already used in other VMs already deployed
        StopVM "test10"
        StopVM "test11"
        StopVM "test12"
        StopVM "test13"
        StopVM "test14"
        StopVM "test15"
        StopVM "test16"
        StopVM "test17"
        StopVM "test18"
        StopVM "test19"
        StopVM "test20"

        # Wait for all to complete
        While (Get-Job -State "Running") {
            Get-Job -State "Completed" | Receive-Job
            Start-Sleep 1
        }

        # Display output from all jobs
        Get-Job | Receive-Job

        # Cleanup
        Remove-Job *

        # Displays batch completed
        echo "Stop VM Completed"

    RemoveOrphanDisks

        $ImageName = "Copy the ImageName property you get from Get-AzureVMImage"
        # You can list your own images using the following command:
        # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }

        # Remove all orphan disks coming from the image specified in $ImageName
        Get-AzureDisk |
            Where-Object {$_.AttachedTo -eq $null -and $_.SourceImageName -eq $ImageName} |
            Remove-AzureDisk -DeleteVHD -Verbose

    Read the article

  • subprocess installed post-installation script returned error exit code 1

    - by Laura quintero
    I installed snort on Ubuntu 11.04 and uninstalled it because I had problems; reinstalling it now fails with the following:

        Reading package lists ... Done
        Building dependency tree
        Reading state information ... Done
        Calculating upgrade ... Done
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        1 not fully installed or removed.
        0 B will be used for additional disk space after this operation.
        Do you want to continue [Y/n]? y
        Configuring snort (2.8.5.2-9.1) ...
         * Stopping Network Intrusion Detection System snort
           - No running snort instance found
         * Starting Network Intrusion Detection System snort       [fail]
        invoke-rc.d: initscript snort, action "start" failed.
        dpkg: error processing snort (--configure):
         subprocess installed post-installation script returned error exit code 1
        Errors were encountered while processing:
         snort
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any solution? Commands already tried:

        apt-get clean
        apt-get remove snort
        sudo apt-get dist-upgrade
        dpkg --remove --force-remove-reinstreq snort

    and nothing worked.
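    The package is stuck because its post-installation step tries to start snort and the init script fails, so dpkg marks the configure phase as failed. A hedged cleanup sequence (snort-common is an assumption about the companion package name; check what dpkg -l | grep snort actually lists) is to see why the daemon won't start, purge the half-configured packages, and reinstall cleanly:

        sudo /etc/init.d/snort start            # run the failing step by hand to see the real error
        sudo apt-get purge snort snort-common   # purge removes old config that may be breaking startup
        sudo apt-get update
        sudo apt-get install snort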

    Read the article
