Search Results

Search found 38328 results on 1534 pages for 'write xml'.

  • email output of powershell script

    - by Gordon Carlisle
    I found this wonderful script that outputs the status of the current DFS backlog to the PowerShell console. This works great, but I need the script to email me the results so I can schedule it to run nightly. I have tried using the Send-MailMessage command, but can't get it to work, mainly because my PowerShell skills are very weak. I believe most of the issues revolve around the script using the Write-Host command. While the coloring is nice, I would much rather have it email me the results. I also need the solution to be able to specify a mail server, since the DFS servers don't have email capability. Any help or tips are welcome and appreciated. Here is the code:

        $RGroups = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query "SELECT * FROM DfsrReplicationGroupConfig"
        $ComputerName = $env:ComputerName
        $Succ = 0
        $Warn = 0
        $Err = 0
        foreach ($Group in $RGroups) {
            $RGFoldersWMIQ = "SELECT * FROM DfsrReplicatedFolderConfig WHERE ReplicationGroupGUID='" + $Group.ReplicationGroupGUID + "'"
            $RGFolders = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query $RGFoldersWMIQ
            $RGConnectionsWMIQ = "SELECT * FROM DfsrConnectionConfig WHERE ReplicationGroupGUID='" + $Group.ReplicationGroupGUID + "'"
            $RGConnections = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query $RGConnectionsWMIQ
            foreach ($Connection in $RGConnections) {
                $ConnectionName = $Connection.PartnerName.Trim()
                if ($Connection.Enabled -eq $True) {
                    if (((New-Object System.Net.NetworkInformation.ping).send("$ConnectionName")).Status -eq "Success") {
                        foreach ($Folder in $RGFolders) {
                            $RGName = $Group.ReplicationGroupName
                            $RFName = $Folder.ReplicatedFolderName
                            if ($Connection.Inbound -eq $True) {
                                $SendingMember = $ConnectionName
                                $ReceivingMember = $ComputerName
                                $Direction = "inbound"
                            } else {
                                $SendingMember = $ComputerName
                                $ReceivingMember = $ConnectionName
                                $Direction = "outbound"
                            }
                            $BLCommand = "dfsrdiag Backlog /RGName:'" + $RGName + "' /RFName:'" + $RFName + "' /SendingMember:" + $SendingMember + " /ReceivingMember:" + $ReceivingMember
                            $Backlog = Invoke-Expression -Command $BLCommand
                            $BackLogFilecount = 0
                            foreach ($item in $Backlog) {
                                if ($item -ilike "*Backlog File count*") {
                                    $BacklogFileCount = [int]$Item.Split(":")[1].Trim()
                                }
                            }
                            if ($BacklogFileCount -eq 0) {
                                $Color = "white"
                                $Succ = $Succ + 1
                            } elseif ($BacklogFilecount -lt 10) {
                                $Color = "yellow"
                                $Warn = $Warn + 1
                            } else {
                                $Color = "red"
                                $Err = $Err + 1
                            }
                            Write-Host "$BacklogFileCount files in backlog $SendingMember->$ReceivingMember for $RGName" -fore $Color
                        } # Closing iterate through all folders
                    } # Closing If replies to ping
                } # Closing If Connection enabled
            } # Closing iteration through all connections
        } # Closing iteration through all groups
        Write-Host "$Succ successful, $Warn warnings and $Err errors from $($Succ+$Warn+$Err) replications."

    Thanks, Gordon
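
    One approach (a minimal sketch, untested against this environment; the addresses and server name are placeholders) is to collect each status line into an array alongside the Write-Host calls, then send the joined text with Send-MailMessage, whose -SmtpServer parameter lets you name an external mail server so the DFS hosts need no mail capability of their own:

        # At the top of the script
        $Report = @()

        # Next to each Write-Host call, also record the line
        $Report += "$BacklogFileCount files in backlog $SendingMember->$ReceivingMember for $RGName"

        # At the end of the script, mail the collected lines
        $Report += "$Succ successful, $Warn warnings and $Err errors from $($Succ+$Warn+$Err) replications."
        Send-MailMessage -From "dfsr@example.com" -To "admin@example.com" `
            -Subject "DFSR backlog report from $ComputerName" `
            -Body ($Report -join "`r`n") -SmtpServer "smtp.example.com"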

  • Is it possible to shrink the size of an HP Smart Array logical drive?

    - by ewwhite
    I know extension is quite possible using the hpacucli utility, but is there an easy way to reduce the size of an existing logical drive (not the array)? The controller is a P410i in a ProLiant DL360 G6 server. I'd like to reduce logicaldrive 1 from 72GB to 40GB.

        => ctrl all show config detail

        Smart Array P410i in Slot 0 (Embedded)
           Bus Interface: PCI
           Slot: 0
           Serial Number: 5001438006FD9A50
           Cache Serial Number: PAAVP9VYFB8Y
           RAID 6 (ADG) Status: Disabled
           Controller Status: OK
           Chassis Slot:
           Hardware Revision: Rev C
           Firmware Version: 3.66
           Rebuild Priority: Medium
           Expand Priority: Medium
           Surface Scan Delay: 3 secs
           Surface Scan Mode: Idle
           Queue Depth: Automatic
           Monitor and Performance Delay: 60 min
           Elevator Sort: Enabled
           Degraded Performance Optimization: Disabled
           Inconsistency Repair Policy: Disabled
           Wait for Cache Room: Disabled
           Surface Analysis Inconsistency Notification: Disabled
           Post Prompt Timeout: 15 secs
           Cache Board Present: True
           Cache Status: OK
           Accelerator Ratio: 25% Read / 75% Write
           Drive Write Cache: Enabled
           Total Cache Size: 512 MB
           No-Battery Write Cache: Disabled
           Cache Backup Power Source: Batteries
           Battery/Capacitor Count: 1
           Battery/Capacitor Status: OK
           SATA NCQ Supported: True

           Array: A
              Interface Type: SAS
              Unused Space: 412476 MB
              Status: OK

              Logical Drive: 1
                 Size: 72.0 GB
                 Fault Tolerance: RAID 1+0
                 Heads: 255
                 Sectors Per Track: 32
                 Cylinders: 18504
                 Strip Size: 256 KB
                 Status: OK
                 Array Accelerator: Enabled
                 Unique Identifier: 600508B1001C132E4BBDFAA6DAD13DA3
                 Disk Name: /dev/cciss/c0d0
                 Mount Points: /boot 196 MB, / 12.0 GB, /usr 8.0 GB, /var 4.0 GB, /tmp 2.0 GB
                 OS Status: LOCKED
                 Logical Drive Label: AE438D6A5001438006FD9A50BE0A
                 Mirror Group 0:
                    physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK)
                    physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK)
                 Mirror Group 1:
                    physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 146 GB, OK)
                    physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 146 GB, OK)

           SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250
              Device Number: 250
              Firmware Version: RevC
              WWID: 5001438006FD9A5F
              Vendor ID: PMCSIERA
              Model: SRC 8x6G

  • Drupal install and permissions

    - by Richard
    So I'm really stuck on this issue. The install process is complaining about write permissions on settings.php and sites/default/files/. However, I've temporarily set these to world read/write (chmod 777) and changed the owner/group to "apache", as shown below.

        -bash-4.1$ ls -hal
        total 28K
        drwxrwxrwx. 3 richard richard 4.0K Aug 23 15:03 .
        drwxr-xr-x. 4 richard richard 4.0K Aug 18 14:20 ..
        -rwxrwxrwx. 1 apache  apache  9.3K Mar 23 16:34 default.settings.php
        drwxrwxrwx. 2 apache  apache  4.0K Aug 23 15:03 files
        -rwxrwxrwx. 1 apache  apache     0 Aug 23 15:03 settings.php

    However, the install is still complaining about write permissions. I followed steps one and two of INSTALL.txt, but no luck.

    Update: To explore the situation further, I created sites/default/richard.php with the following code:

        <?php
        error_reporting(E_ALL);
        ini_set('display_errors', '1');
        mkdir('files');
        print("<hr> User is ");
        passthru("whoami");
        passthru("pwd");
        ?>

    Run from the command line (under user "richard"), no problem: the folder is created and everything is a go. Run from the web, I get the following:

        Warning: mkdir(): Permission denied in /var/www/html/sites/default/richard.php on line 9
        User is apache
        /var/www/html/sites/default

    Update 2: Safe mode appears to be off...

        -bash-4.1$ cat /etc/php.ini | grep safe | grep mode | grep -v \;
        safe_mode = Off
        safe_mode_gid = Off
        safe_mode_include_dir =
        safe_mode_exec_dir =
        safe_mode_allowed_env_vars = PHP_
        safe_mode_protected_env_vars = LD_LIBRARY_PATH
        sql.safe_mode = Off
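
    The trailing dot on the permission strings (e.g. drwxrwxrwx.) means the files carry an SELinux security context, so one thing worth ruling out (an assumption, not something confirmed in the post) is that SELinux, rather than the classic mode bits, is denying apache the write. A sketch of the check and a possible fix on a Red Hat-style system:

        # Show SELinux contexts and look for recent denials
        ls -Z /var/www/html/sites/default
        grep denied /var/log/audit/audit.log

        # Label the writable locations with a read/write web-content type
        chcon -R -t httpd_sys_rw_content_t /var/www/html/sites/default/files
        chcon -t httpd_sys_rw_content_t /var/www/html/sites/default/settings.php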

  • Writing a script for ash?

    - by rumtscho
    My VPN is behaving funny sometimes, and I have to restart it often. I wanted to write a script which does that for me. It doesn't have to be anything fancy, just a shortcut for the commands I have to type into the terminal. More specifically: it will look at the running processes; if it finds a running vpnc process, it will kill it; then it will start vpnc. I've written bash scripts of similar complexity, but now I don't have a bash, only an ash. Until now, the only difference I've noticed is that there are far fewer commands available, but then, I don't use it very often. So I have some questions (see the sketch after this list):

    1. Is writing ash scripts different from writing bash scripts? Is there something specific to consider when doing it?
    2. When the script is ready, how can I deploy it? For bash, I just put the executable file under /usr/lib and run it by typing the file name into the command line; will this work with ash?
    3. Are there any special pitfalls to watch out for in the script I want to write? I think the process-killing part may get hairy if I write something that kills the wrong process, but even then, running the script shouldn't break anything permanently, right?
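
    A minimal sketch of the restart script in plain POSIX sh (ash-compatible; it assumes vpnc is on the PATH and that the system provides pidof, as busybox does):

        #!/bin/sh
        # Kill any running vpnc, then start a fresh connection
        if pidof vpnc >/dev/null 2>&1; then
            kill $(pidof vpnc)    # matches by exact process name, not a pattern
            sleep 1               # give it a moment to exit
        fi
        vpnc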

  • jboss 5.1 mysql connection pooling

    - by boyd4715
    I am using JBoss 5.1.0.GA, MySQL 5.5 and Hibernate 3.3.1.GA (included with JBoss) + Spring. My question is: do I need to add c3p0 as a data source in my Spring/Hibernate configuration for connection pooling, or are the settings in the JBoss mysql-ds.xml enough? My mysql-ds.xml is the following:

        <datasources>
          <local-tx-datasource>
            <jndi-name>MySqlDS</jndi-name>
            <connection-url>jdbc:mysql://localhost:3306/ecotrak</connection-url>
            <driver-class>com.mysql.jdbc.Driver</driver-class>
            <user-name>ecotrak</user-name>
            <password>ecotrak</password>
            <min-pool-size>5</min-pool-size>
            <max-pool-size>20</max-pool-size>
            <idle-timeout-minutes>5</idle-timeout-minutes>
            <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.MySQLExceptionSorter</exception-sorter-class-name>
            <!-- should only be used on drivers after 3.22.1 with "ping" support -->
            <valid-connection-checker-class-name>org.jboss.resource.adapter.jdbc.vendor.MySQLValidConnectionChecker</valid-connection-checker-class-name>
            <!-- sql to call when connection is created
            <new-connection-sql>some arbitrary sql</new-connection-sql>
            -->
            <!-- sql to call on an existing pooled connection when it is obtained from pool - MySQLValidConnectionChecker is preferred for newer drivers
            <check-valid-connection-sql>some arbitrary sql</check-valid-connection-sql>
            -->
            <!-- corresponding type-mapping in the standardjbosscmp-jdbc.xml (optional) -->
            <metadata>
              <type-mapping>mySQL</type-mapping>
            </metadata>
          </local-tx-datasource>
        </datasources>
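
    If the application gets its connections through the JNDI datasource above, JBoss's own pool (the min/max pool sizes in mysql-ds.xml) is doing the pooling, and adding c3p0 on top would just layer a second pool over it; c3p0 is typically only needed when Hibernate manages connections itself. A sketch (hedged; the bean names are illustrative) of pointing Spring at the container-managed pool instead:

        <!-- Look up the JBoss-pooled datasource instead of defining a c3p0 one -->
        <jee:jndi-lookup id="dataSource" jndi-name="java:/MySqlDS"/>

        <bean id="sessionFactory"
              class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
            <property name="dataSource" ref="dataSource"/>
        </bean>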

  • GlassFish cluster-targeted jdbc is not enabled

    - by Jin Kwon
    I have a GlassFish cluster. When I try to add a node and an instance, the DAS emits a stream of error messages telling me Resource [ jdbc/xxxx ] of type [ jdbc ] is not enabled, followed by a StackOverflowError:

        [#|2012-11-14T12:07:04.318+0900|SEVERE|glassfish3.1.2|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=2803;_ThreadName=Thread-2;|java.lang.StackOverflowError
            at java.io.FileOutputStream.writeBytes(Native Method)
            at java.io.FileOutputStream.write(FileOutputStream.java:318)
            at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
            at java.io.PrintStream.write(PrintStream.java:480)
            at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
            at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
            at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
            at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
            at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
            at java.util.logging.StreamHandler.flush(StreamHandler.java:242)
            at java.util.logging.ConsoleHandler.publish(ConsoleHandler.java:106)
            at java.util.logging.Logger.log(Logger.java:522)
            at com.sun.logging.LogDomains$1.log(LogDomains.java:372)
            at java.util.logging.Logger.doLog(Logger.java:543)
            at java.util.logging.Logger.log(Logger.java:607)
            at com.sun.enterprise.resource.deployer.JdbcResourceDeployer.deployResource(JdbcResourceDeployer.java:117)
            at org.glassfish.javaee.services.ResourceProxy.create(ResourceProxy.java:90)
            at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:507)
            at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:455)
            at javax.naming.InitialContext.lookup(InitialContext.java:411)
            at javax.naming.InitialContext.lookup(InitialContext.java:411)
            at com.sun.appserv.connectors.internal.api.ResourceNamingService.lookup(ResourceNamingService.java:221)

    The JDBC resource is OK and targeted at the cluster. I've installed the JDBC driver on the new node. Can anybody help?
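
    In GlassFish, a resource is only live on a given target if a resource-ref for that target exists and is enabled, so it may be worth checking the reference rather than the resource itself. A hedged sketch with asadmin (the cluster name myCluster is a placeholder):

        # See which resource refs the cluster actually has
        asadmin list-resource-refs myCluster

        # Create the reference if it is missing...
        asadmin create-resource-ref --target myCluster jdbc/xxxx

        # ...or flip the enabled flag if the ref exists but is disabled
        asadmin set resources.jdbc-resource.jdbc/xxxx.enabled=true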

  • Installing Linux on an Asus p8z68-m PRO Motherboard

    - by Holland
    Here is a challenge: how is this done? I've tried disabling the ASM1061 controller in the Onboard Devices section, using Wubi, booting from USB (as I don't have a DVD drive yet), and even booting from RAID/IDE (with AHCI as the default). Still, no dice. Google turns up virtually nothing about Linux and this mobo, apart from people just saying "disable ASMedia" (which I assume is the ASM1061 controller, as that's all I see, apart from the USB 3.0, which I disabled already), and it hasn't really helped much. Thus, what is wrong here?

    Edit: My problem is that I cannot boot Linux via USB or a simple Windows installer such as Wubi (for Ubuntu). I wind up getting error messages along the lines of "write cache failed", along with many other cryptic messages similar to the following:

        [ 1400.351374] sd 4:0:0:0: [sdb] Test WP failed, assume Write Enabled
        [ 1400.353433] sd 4:0:0:0: [sdb] Asking for cache data failed
        [ 1400.356601] sd 4:0:0:0: [sdb] Assuming drive cache: write through

    This seems to be common for Asus P8Z68-M Pro motherboards, with the only notable solution being to "disable ASMedia", which, as I said before, I'm guessing is the ASM1061 controller on the motherboard. Despite already disabling this, I have tried both Fedora and Ubuntu without any success. I need to know what I can do about this; has anyone run into something similar or heard about this issue before? I know these motherboards are relatively new...

  • How do I deny access to everybody but me in Windows 7?

    - by GregH
    I am trying to set up a file server on my Windows 7 Pro system at home. I set up one common "Share" folder that I have shared/published. Within the share folder I want to have individual folders for me and my wife; that is, only I can read/write my folder, only my wife can read/write her folder, and neither of us can read the contents of the other person's folder. Then I want a "public" folder where we can both read/write the contents of the folder and any sub-folders created, but my "kids" account can only read from this folder and its sub-folders.

    It seems really confusing to set up something like this, and it really shouldn't be. I am confused by the "allow", "deny", and dimmed checkboxes in the Security tab. It seems that if I "Deny" access to "Everyone" on my private folder, then I don't even have access to it myself. Windows security seems backwards from the rest of the world's security models: if I am in two groups and I deny access to one group but allow access to the other, Windows denies me access, because I am in one of the groups that has access disallowed.
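
    Because an explicit Deny overrides every Allow (which is why denying Everyone locks the owner out too), the usual approach is to drop inherited permissions and grant only Allow entries. A hedged sketch from an elevated command prompt, with placeholder paths and account names:

        :: Private folders: remove inherited ACEs, then grant full control to one user each
        icacls "C:\Share\Greg" /inheritance:r /grant "Greg:(OI)(CI)F" "Administrators:(OI)(CI)F"
        icacls "C:\Share\Wife" /inheritance:r /grant "Wife:(OI)(CI)F" "Administrators:(OI)(CI)F"

        :: Public folder: parents get modify, the kids account gets read-only
        icacls "C:\Share\Public" /inheritance:r /grant "Greg:(OI)(CI)M" "Wife:(OI)(CI)M" "Kids:(OI)(CI)RX" "Administrators:(OI)(CI)F"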

  • Subversion all or nothing access to repo tree

    - by Glader
    I'm having some problems setting up access to my Subversion repositories on a Linux server. The problem is that I can only seem to get an all-or-nothing structure going: either everyone gets read access to everything, or no one gets read or write access to anything.

    The setup: SVN repos are located in /www/svn/repoA,repoB,repoC... Repositories are served by Apache, with Locations defined in /etc/httpd/conf.d/subversion.conf as:

        <Location /svn/repoA>
          DAV svn
          SVNPath /var/www/svn/repoA
          AuthType Basic
          AuthName "svn repo"
          AuthUserFile /var/www/svn/svn-auth.conf
          AuthzSVNAccessFile /var/www/svn/svn-access.conf
          Require valid-user
        </Location>

        <Location /svn/repoB>
          DAV svn
          SVNPath /var/www/svn/repoB
          AuthType Basic
          AuthName "svn repo"
          AuthUserFile /var/www/svn/svn-auth.conf
          AuthzSVNAccessFile /var/www/svn/svn-access.conf
          Require valid-user
        </Location>
        ...

    svn-access.conf is set up as:

        [/]
        * =

        [/repoA]
        * =
        userA = rw

        [/repoB]
        * =
        userB = rw

    But checking out URL/svn/repoA as userA results in Access Forbidden. Changing it to

        [/]
        * =
        userA = r

        [/repoA]
        * =
        userA = rw

        [/repoB]
        * =
        userB = rw

    gives userA read access to ALL repositories (including repoB) but only read access to repoA! So in order for userA to get read-write access to repoB I need to add

        [/]
        userA = rw

    which is mental. I also tried changing Require valid-user to Require user userA for repoA in subversion.conf, but that only gave me read access to it. I need a way to default-deny everyone access to every repository, giving read/write access only when explicitly defined. Can anyone tell me what I'm doing wrong here? I have spent a couple of hours testing and googling but come up empty, so now I'm doing the post of shame.
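
    One detail worth checking: because both Location blocks point at the same AuthzSVNAccessFile, a bare [/path] section applies to the path /path inside every repository served through that file, not to the repository named path. Per-repository rules use the repo-qualified [repo:/path] form. A sketch of the intended default-deny layout under that assumption:

        [/]
        * =

        [repoA:/]
        userA = rw

        [repoB:/]
        userB = rw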

  • Communicating via Command Mode with IBM HS22 IMM via AMM

    - by MikeyB
    On previous model blades that contained a BMC, I was able to communicate from our external management station via pass-through commands to the BMC to do things such as power blades on/off, set VPD parameters, reboot the BMC, etc. Now on the HS22, a bunch of things happen differently. For example, we can no longer use the same pass-through commands to write VPD information pages and have them persist across reboots of the IMM; it looks as though those VPD pages are populated from information contained in the IMM.

    How do we use the Advanced Settings Utility from an external host to communicate with HS22 IMMs? Alternatively, what TCP Command Mode commands do we need to send to the AMM to communicate with the IMM? For our purposes, we specifically cannot communicate with the IMM from the blade itself.

    Specific example: when I send a pass-thru IPMI command via the AMM to the blade BMC to write information (such as MTM and serial number) into VPD page 0x10, it persists on blades with a BMC (HS21, for example). I can send the same IPMI command to write data to the VPD page on the HS22, but it does not persist across reboots of the IMM. What IPMI commands do I need to send to the IMM? What IPMI commands does asu send when it sets the MTM & serial?

  • Synchronization of volume snapshots when doing whole system backups

    - by intuited
    Is there a way to guarantee consistency across volumes when doing backups from LVM snapshots? Consider this scenario:

    1. Some system upgrade is in progress. It will write some files to the /usr volume, and once completed, will record success in the /var volume.
    2. As the upgrade is just about complete, I run a backup script that creates snapshots of the /usr and /var volumes, along with the rest of the system's volumes, and proceeds to create backups from those snapshots.
    3. Just before the upgrade's last write/flush on the /usr volume completes, the backup script takes its snapshot of /usr.
    4. That write completes, and the upgrade operation's success is quickly recorded in the nebulous depths of /var.
    5. The backup script takes a snapshot of /var.
    6. The backup script creates backups from the snapshots it has, er, snapshotted.

    So the result of all of this tomfoolery is that the resulting /usr backup contains a file which is missing a few bits, and the /var backup contains metadata indicating that that file is complete and approved for use. Without delving into the details of which operating systems' system upgrade systems would be unfazed by such trifles, is there a way to avoid such problems? At the least this seems like it could cause some application to fail unexpectedly after restoration of such a backup.
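
    One way to get a mutually consistent point across volumes (a sketch, assuming util-linux's fsfreeze and filesystems that support freezing; the volume group name vg0 and snapshot sizes are placeholders) is to block writes on all affected filesystems, take every snapshot, then thaw, so no write can land between the two snapshots:

        # Quiesce both filesystems; writes block until the thaw
        fsfreeze -f /usr
        fsfreeze -f /var

        # Snapshot everything while the world is stopped
        lvcreate -s -n usr_snap -L 2G /dev/vg0/usr
        lvcreate -s -n var_snap -L 2G /dev/vg0/var

        # Resume normal operation
        fsfreeze -u /var
        fsfreeze -u /usr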

  • Tell Tomcat to drop requests instead of dying "All threads (150) are currently busy"

    - by Nicolas Raoul
    My Tomcat 6.0.26 sometimes dies saying:

        SEVERE: All threads (150) are currently busy, waiting. Increase maxThreads (150) or check the servlet status

    ...then Tomcat shuts down, and users can't access the webapp until I restart Tomcat manually. Some of the threads indeed take a long time to execute; that is by design, not a thread-gone-wild problem. I know I could increase maxThreads, but that is not a viable solution, because the server might receive even more requests.

    QUESTION: Instead of dying, can I tell Tomcat to just drop requests when maxThreads is reached and the AJP/1.3 backlog is full?

    Below is my server.xml in any case:

        <?xml version='1.0' encoding='utf-8'?>
        <Server port="8005" shutdown="SHUTDOWN">
          <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
          <Listener className="org.apache.catalina.core.JasperListener" />
          <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
          <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
          <GlobalNamingResources>
            <Resource name="UserDatabase" auth="Container"
                      type="org.apache.catalina.UserDatabase"
                      description="User database that can be updated and saved"
                      factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                      pathname="conf/tomcat-users.xml" />
          </GlobalNamingResources>
          <Service name="Catalina">
            <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" minSpareThreads="100"/>
            <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
            <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
                       enableLookups="false" useBodyEncodingForURI="true"
                       backlog="150" maxThreads="150" executor="tomcatThreadPool"
                       keepAliveTimeout="5000" connectionTimeout="300000" />
            <Engine name="Catalina" defaultHost="localhost" jvmRoute="ecm1">
              <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
              <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"
                    xmlValidation="false" xmlNamespaceAware="false">
              </Host>
            </Engine>
          </Service>
        </Server>
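
    Two details of this config may be relevant (hedged; worth verifying against the Tomcat 6 connector documentation rather than taking as a tested fix). First, when a Connector references an Executor, the pool is sized by the Executor's own maxThreads (default 200), and the connector's maxThreads attribute is ignored. Second, the queue of not-yet-accepted connections is capped by acceptCount on the HTTP connector and backlog on AJP (already set here); once every thread is busy and that queue is full, further connection attempts are refused at the TCP level rather than taken on. A sketch with the pool sized explicitly and a smaller queue:

        <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
                  maxThreads="150" minSpareThreads="25"/>

        <Connector port="8009" protocol="AJP/1.3" executor="tomcatThreadPool"
                   backlog="50" redirectPort="8443"
                   enableLookups="false" useBodyEncodingForURI="true"
                   keepAliveTimeout="5000" connectionTimeout="300000" />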

  • How can I avoid an error in this .htaccess file?

    - by mipadi
    I have a blog. The blog is stored under the /blog/ prefix on my website. It has the usual URLs for a blog, so articles have URLs in the format /blog/:year/:month/:day/:title/.

    First and foremost, I want to automatically redirect visitors to the www subdomain (in case they leave that off), and internally rewrite the root URL to /blog/, so that the front page of the blog appears on the front page of the site. I have accomplished that with the following set of rewrite rules in my .htaccess file:

        RewriteEngine On

        # Rewrite monkey-robot.com to www.monkey-robot.com
        RewriteCond %{HTTP_HOST} ^monkey-robot\.com$
        RewriteRule ^(.*)$ http://www.monkey-robot.com/$1 [R=301,L]

        RewriteRule ^$ /blog/ [L]
        RewriteRule ^feeds/blog/?$ /feeds/blog/atom.xml [L]

    That works fine. The problem is that the front page of the blog now appears at two distinct URLs: / and /blog/. So I'd like to redirect the /blog/ URL to the root URL. Initially I tried to accomplish this with the following set of rewrite rules:

        RewriteEngine On

        # Rewrite monkey-robot.com to www.monkey-robot.com
        RewriteCond %{HTTP_HOST} ^monkey-robot\.com$
        RewriteRule ^(.*)$ http://www.monkey-robot.com/$1 [R=301,L]

        RewriteRule ^$ /blog/ [L]
        RewriteRule ^blog/?$ / [R,L]
        RewriteRule ^feeds/blog/?$ /feeds/blog/atom.xml [L]

    But that gave me an infinite redirect (maybe because of the preceding rule?). So then I tried this set:

        RewriteEngine On

        # Rewrite monkey-robot.com to www.monkey-robot.com
        RewriteCond %{HTTP_HOST} ^monkey-robot\.com$
        RewriteRule ^(.*)$ http://www.monkey-robot.com/$1 [R=301,L]

        RewriteRule ^$ /blog/ [L]
        RewriteRule ^blog/?$ http://www.monkey-robot.com/ [R,L]
        RewriteRule ^feeds/blog/?$ /feeds/blog/atom.xml [L]

    But I got a 500 Internal Server Error with the following log message:

        Invalid command '[R,L]', perhaps misspelled or defined by a module not included in the server configuration

    What gives? I don't think [R,L] is a syntax error.
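
    Two hedged observations, neither verified against this server. The 500 error reads like the flags ended up on a line of their own in the deployed file (Apache then parses [R,L] as a directive), since the rule as shown is syntactically fine. And the infinite redirect follows from the rules themselves: / is internally rewritten to /blog/, the rewritten URL then matches the ^blog/?$ rule and is redirected back to /, and so on. One common escape is to redirect only when the client's original request line contained /blog/, which internal rewrites never do:

        # Only redirect if the browser itself asked for /blog/
        RewriteCond %{THE_REQUEST} ^[A-Z]+\s/blog/?\s
        RewriteRule ^blog/?$ http://www.monkey-robot.com/ [R=301,L]

        RewriteRule ^$ /blog/ [L]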

  • mod_deflate doesn't work [closed]

    - by kikio
    I want to gzip my static files, so I put this in .htaccess:

        <IfModule mod_deflate.c>
          AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript
        </IfModule>

    and looked for mod_deflate in the Loaded Modules section of the phpinfo() output, and I found it. But when I track server responses with Firebug, nothing comes back gzipped:

        HTTP/1.1 200 OK
        Date: Sat, 08 Sep 2012 21:41:21 GMT
        Last-Modified: Sat, 08 Sep 2012 21:26:04 GMT
        Accept-Ranges: bytes
        Cache-Control: max-age=604800
        Expires: Sat, 15 Sep 2012 21:41:21 GMT
        Vary: Accept-Encoding
        Keep-Alive: timeout=3, max=50
        Connection: Keep-Alive
        Content-Type: text/css
        Content-Length: 18206

    What's the problem? I'm sure I have mod_deflate enabled (according to PHP's apache_get_modules()).

    UPDATE: the request headers:

        GET /d/jquery-ui.css HTTP/1.1
        Host: 127.0.0.1
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        DNT: 1
        Connection: keep-alive
        Pragma: no-cache
        Cache-Control: no-cache
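
    phpinfo() and apache_get_modules() report what PHP's own Apache instance has loaded, which is not always the server (or vhost) actually answering the request, so it may be worth confirming at the server level and checking that .htaccess is honored at all (a hedged sketch, assuming shell access to the box):

        # Ask Apache itself which modules are loaded
        apachectl -M | grep deflate

        # If .htaccess is being ignored, AllowOverride None in the vhost would explain it;
        # moving the AddOutputFilterByType line into the vhost config sidesteps that.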

  • MySQL Master-Master w/ multiple read slave cost effective setup in AWS

    - by Ross
    I've been evaluating Amazon Web Services RDS for MySQL and costing out potential scenarios involving a simple multi-AZ read/write deployment vs. a multi-AZ MySQL master (hot standby) with additional read-only slaves. The issue I'm trying to cost-optimize involves their reserved instances vs. on-demand instances.

    Situation 1: purchase a reserved multi-AZ setup with an extra-large high-memory (17GB RAM) instance for $5200/yr and have my application query the master all the time. The problem is, if I don't need all the resources of the 17GB RAM instance all the time, and therefore especially not a hot standby, what alternatives for savings could a better topology create? Potentially situation 2 below.

    Situation 2: purchase a reserved multi-AZ setup using smaller master instances than above for the master-master hot standby to receive the writes only. Then create and load-balance several read-only slaves off the master, and add/remove and/or scale the read slaves up/down based on demand. This might only cost $1000 plus the on-demand usage of the read slaves.

    My thinking is: if I have a variable read-intensive application load with a low write load, the single-level topology in situation 1 means I'm paying for a lot of resources at the write level of the topology when I don't need them there. My hope is that situation 2 can yield cost savings from smaller reserved instances at the master-master resource level, allowing me to scale up/down and/or out on the read level according to demand as needed.

    Does anyone see a downside to doing this, or know of some reason this isn't possible with RDS? Any other thoughts or advice always welcome, of course. Thanks in advance, R

  • Win 2008 R2 - copying TO disk is very slow, copying FROM is more or less okay

    - by avs099
    I have Windows 2008 R2 SP1 with 4 identical SATA disks (Seagate Barracuda 7200) in a RAID 5 array. It has 4GB of memory; all recent updates are installed. Problem: when I copy a large file from one folder to another, I get about 10MB/s average speed. When I read this file from a network share via a 1Gbps connection, I get about 25-30MB/s. Both numbers seem low to me, but I'm especially frustrated with the low write speed. There is no antivirus and no Hyper-V; it's just a file server, and when I do my tests nobody else reads/writes from it (we have only 4 people in the team, so I'm sure). Not sure if it matters, but there is only one logical disk "C" with all available space (1400 GB).

    I'm not an admin at all, so I have no idea where to look and what other information to provide. I did run Performance Monitor with the "% idle time", "avg bytes read", and "avg bytes write" counters (screenshot omitted); I'm not sure why there are such obvious spikes. Any idea? Please let me know if you need me to provide more information - what counters should I check, etc. I'm very eager to get this solved. Thank you.

    UPDATE: we have another Windows 2008 R2 SP1 server with 2 RAID 1 arrays: one is disk C (where Windows is installed), the other is disk E. It is running Hyper-V and does not have antivirus. I noticed the following behavior when I copy a large file (a few GBs):

        C -> C: about 50MB/sec
        C -> E: about 55MB/sec
        E -> E: 8MB/sec!!!
        E -> C: 8MB/sec!!!

    What could cause this? The E drive is a RAID 1 array of the same Seagate Barracuda 1TB drives.

  • HTTP responses curl and wget different results

    - by Fab
    To check the HTTP response headers for a set of URLs, I send the following request headers with curl:

        foreach ($urls as $url) {
            // Setup headers - I used the same headers from Firefox version 2.0.0.6
            $header[] = "Accept: text/xml,application/xml,application/xhtml+xml,";
            $header[] = "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
            $header[] = "Cache-Control: max-age=0";
            $header[] = "Connection: keep-alive";
            $header[] = "Keep-Alive: 300";
            $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7";
            $header[] = "Accept-Language: en-us,en;q=0.5";
            $header[] = "Pragma: "; // browsers keep this blank.

            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_USERAGENT, 'Googlebot/2.1 (+http://www.google.com/bot.html)');
            curl_setopt($ch, CURLOPT_HTTPHEADER, $header);
            curl_setopt($ch, CURLOPT_REFERER, 'http://www.google.com');
            curl_setopt($ch, CURLOPT_HEADER, true);
            curl_setopt($ch, CURLOPT_NOBODY, true);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
            curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
            curl_setopt($ch, CURLOPT_TIMEOUT, 10); // timeout 10 seconds
        }

    Sometimes I receive 200 OK, which is good, other times 301, 302, or 307, which I consider good as well. But other times I receive weird statuses such as 406, 500, and 504, which should identify an invalid URL, yet when I open it in the browser it is fine. For example, the script returns:

        http://www.awe.co.uk/  =>  HTTP/1.1 406 Not Acceptable

    and wget returns:

        wget http://www.awe.co.uk/
        --2011-06-23 15:26:26--  http://www.awe.co.uk/
        Resolving www.awe.co.uk... 77.73.123.140
        Connecting to www.awe.co.uk|77.73.123.140|:80... connected.
        HTTP request sent, awaiting response... 200 OK

    Does anyone know which request header I am missing or adding in excess?
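
    One thing visible in the code as posted (an observation, not a verified diagnosis): the Accept header is split across two array entries, so the second entry goes out as its own malformed header line starting with text/html;q=0.9,..., and a 406 Not Acceptable is exactly the kind of answer a strict server might give such a request. The $header array is also never reset, so entries pile up across iterations. A sketch of the fix:

        foreach ($urls as $url) {
            $header = array(); // start fresh for every URL
            $header[] = "Accept: text/xml,application/xml,application/xhtml+xml,"
                      . "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5"; // one header, one entry
            // ... remaining headers and curl_setopt() calls as before ...
        }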

  • I have added a port to the public zone in firewalld but still can't access the port

    - by mikemaccana
    I've been using iptables for a long time, but have never used firewalld until recently. I have enabled port 3000 TCP via firewalld with the following command:

        # firewall-cmd --zone=public --add-port=3000/tcp --permanent

    However I can't access the server on port 3000. From an external box:

        telnet 178.62.16.244 3000
        Trying 178.62.16.244...
        telnet: connect to address 178.62.16.244: Connection refused

    There are no routing issues: I have a separate rule for a port forward from port 80 to port 8000 which works fine externally. My app is definitely listening on the port too:

        Proto Recv-Q Send-Q Local Address   Foreign Address  State   User  Inode  PID/Program name
        tcp   0      0      0.0.0.0:3000    0.0.0.0:*        LISTEN  99    36797  18662/node

    firewall-cmd doesn't seem to show the port either - see how ports is empty. You can see the forward rule I mentioned earlier:

        # firewall-cmd --list-all
        public (default, active)
          interfaces: eth0
          sources:
          services: dhcpv6-client ssh
          ports:
          masquerade: no
          forward-ports: port=80:proto=tcp:toport=8000:toaddr=
          icmp-blocks:
          rich rules:

    However I can see the rule in the XML config file:

        # cat /etc/firewalld/zones/public.xml
        <?xml version="1.0" encoding="utf-8"?>
        <zone>
          <short>Public</short>
          <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.</description>
          <service name="dhcpv6-client"/>
          <service name="ssh"/>
          <port protocol="tcp" port="3000"/>
          <forward-port to-port="8000" protocol="tcp" port="80"/>
        </zone>

    What else do I need to do to allow access to my app on port 3000? Also: is adding access via a port the correct thing to do? Or should I make a firewalld 'service' for my app instead?
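
    This matches the usual --permanent behavior: the flag writes only the on-disk configuration (hence the port showing up in public.xml but not in --list-all, which reports the runtime state), and the running firewall is untouched until it is reloaded. A sketch of the missing step:

        # Load the saved configuration into the running firewall
        firewall-cmd --reload

        # The port should now appear in the runtime state
        firewall-cmd --zone=public --list-ports

    The equivalent alternative is to run the --add-port command twice, once with --permanent and once without, so the rule lands in both the saved and the running configuration.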

  • Yum Error Installing Git from kernel.org Repo

    - by Lance
    I want to install the latest version of Git using yum and the RPM repository on kernel.org, but adding the repo to yum.repos.d causes yum to fail with checksum errors. The prevailing solution to this issue seems to be to simply use the repository at Webtatic, as answered here on superuser. I know I can also install an older version of Git using the EPEL repo, or compile from the latest source tarball, but honestly I want to understand why I'm having issues using the kernel.org repo. Here's the workflow, after a clean install of CentOS 5.5 and "yum update":

        [root]# wget -P /etc/yum.repos.d/ http://kernel.org/pub/software/scm/git/RPMS/git.repo
        [root]# yum clean all
        [root]# yum repolist
        Loaded plugins: fastestmirror
        Determining fastest mirrors
         * addons: mirrors.netdna.com
         * base: mirror.clarkson.edu
         * epel: serverbeach1.fedoraproject.org
         * extras: centos.mirror.nac.net
         * updates: mirror.cogentco.com
        addons               |  951 B   00:00
        addons/primary       |  202 B   00:00
        base                 | 2.1 kB   00:00
        base/primary_db      | 1.6 MB   00:01
        epel                 | 3.7 kB   00:00
        epel/primary_db      | 2.8 MB   00:01
        extras               | 2.1 kB   00:00
        extras/primary_db    | 188 kB   00:00
        git                  | 1.2 kB   00:00
        git/primary          | 155 kB   00:00
        http://www.kernel.org/pub/software/scm/git/RPMS/i386/repodata/primary.xml.gz: [Errno -3] Error performing checksum
        Trying other mirror.
        git/primary          | 155 kB   00:00
        http://www.kernel.org/pub/software/scm/git/RPMS/i386/repodata/primary.xml.gz: [Errno -3] Error performing checksum
        Trying other mirror.
        Error: failure: repodata/primary.xml.gz from git: [Errno 256] No more mirrors to try.

    Any suggestions as to a solution, or details why the kernel.org repo has this issue? (Sorry I can't include more links to my references, but I don't have the reputation for that yet.)

  • Strange focus bug in Firefox (chrome vs content)

    - by Marius
    Here is a strange bug I'm experiencing in Firefox: I can only use either the chrome or the content, not both at the same time!

    For example, I can click on tabs and the toolbar icons, and focus and type in the search bar and the address bar, but if I try to click on anything in the content (e.g. a link or a text field), nothing happens. The mouse pointer doesn't change either; it just stays a pointer when I hover over things, and the links I hover over don't react.

    But if I alt-tab to another program (or click on it in the taskbar), then back to Firefox, I can use the area that I clicked on. So if I click somewhere on the webpage to get focus back to Firefox, I can click on links and write things (like this text), but I cannot click on tabs or refresh or anything else in the chrome. I can't even click on the minimize, restore, and close icons! To get focus back on the chrome, I have to alt-tab to another program and then click on the chrome to get back to Firefox.

    I've tried closing and restarting Firefox, but the bug is still there. I have experienced this before, but I don't remember what I did to fix it. This bug seems to occur sometimes when I wake the computer from standby, but I leave my computer in standby all the time, so that is not the only factor.

  • How come my Intel 520 180GB SSD performs extremely poorly?

    - by Willem
    I recently installed a new Intel 520 series 180GB SSD in my brand new MacBook Pro. The system is as follows:

        Model: MacBook Pro 15-inch, Late 2011 (MacBookPro8,2)
        Processor: 2.4 GHz Intel Core i7
        Memory: 16 GB 1333 MHz DDR3
        Graphics: AMD Radeon HD 6770M 1024 MB
        Software: Mac OS X Lion 10.7.3
        Main Drive Bay: Intel 520-series 180GB SATA-3 (6Gb/s negotiated link) SSD (Firmware: 400i) [80GB free]
        Optical Bay: Toshiba 5400 RPM 750GB SATA-2 HDD
        Trim: Enabled (according to the Trim Enabler app)

    And here are the speeds I'm getting:

        Read: 412 MB/s
        Write: 186 MB/s

    What have I done wrong? Expected results: around 500 MB/s for both reads and writes. I have seen benchmarks with lesser SSDs (even SATA-2 ones) outperform my write speeds by far, and Intel 520 SSDs are supposed to be top-class.

    The Trim Enabler report looks a bit odd compared to screenshots from their site, and the S.M.A.R.T. attributes defined by Intel don't seem very compatible with the ones I read using the smartctl tool from smartmontools (screenshots omitted). I'm going to try to look for a S.M.A.R.T. attribute reader tool for OS X which might support the Intel 520 series.

  • Using a named pipe to simulate a serial port on a VMware virtual machine (linux host and client)

    - by Dave M
    Trying to write a Python program to create a simulated data stream and feed it, through a named pipe, to a VMware virtual machine. The host is running Ubuntu 11.10 and VMware Player 5.0.0. The VM is running Ubuntu Netbook 10.04. I am able to get the pipe working on the local machine, but I am not able to get the pipe to pass data through the virtual serial port to the programs running on the virtual machine.

        #!/usr/bin/python
        import os

        # Create a named pipe that will be used as the serial port on a VMware virtual machine
        SerialPipe = '/tmp/gpsd2NMEA'
        try:
            os.unlink(SerialPipe)
        except:
            pass
        os.mkfifo(SerialPipe)

        # Open the named pipe
        NMEApipe = os.open(SerialPipe, os.O_RDWR|os.O_NONBLOCK)

        # Write a string to the named pipe
        NMEAtime = "235959"
        os.write(NMEApipe, str('%s\n' % NMEAtime))

    Test to see if the Python program is working on the host machine (displays 235959 if data is passing through the pipe):

        $ cat /tmp/gpsd2NMEA
        235959

    Serial port as defined in the VMware .vmx file:

        serial0.present = "TRUE"
        serial0.startConnected = "TRUE"
        serial0.fileType = "pipe"
        serial0.fileName = "/tmp/gpsd2NMEA"
        serial0.pipe.endPoint = "client"
        serial0.autodetect = "FALSE"
        serial0.tryNoRxLoss = "TRUE"
        serial0.yieldOnMsrRead = "TRUE"

    Test to see if the serial port in the VM is receiving data:

        $ cat /dev/ttyS0
        $ minicom -D /dev/ttyS0
        $ stty -F /dev/ttyS0 cs8 -parenb -cstopb 115200
        $ echo < /dev/ttyS0

    None of these display any data from the Python program.
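
    One possibility worth testing (an assumption about how VMware handles pipe-backed serial ports on Linux hosts, not something verified here): the player end may expect a Unix domain socket rather than a mkfifo FIFO. With serial0.pipe.endPoint = "client" the VM connects as the client, so the host script would act as the server and create the socket before the VM powers on. A sketch:

        #!/usr/bin/python
        import socket
        import time

        SerialPipe = '/tmp/gpsd2NMEA'

        # Create the Unix socket VMware will connect to
        server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        server.bind(SerialPipe)
        server.listen(1)

        conn, _ = server.accept()      # VMware connects when the VM powers on
        while True:
            conn.send('235959\n')      # push a line of simulated NMEA data
            time.sleep(1)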

  • Conditionally permitting HTTP-only requests to Tomcat?

    - by Mike
    I have 2 versions of a system:

    1. a Tomcat webserver
    2. an nginx reverse proxy sitting in front of a Tomcat webserver

    In version 2, nginx only ever talks to Tomcat over HTTP. A user could configure the system so that only HTTPS requests are allowed. If the user does this in version 1, the XML configuration files for Tomcat take care of it. In version 2, nginx takes care of it.

    The problem is this: I cannot force a user to update their Tomcat XML config files when they upgrade from version 1 to version 2 (it will be recommended that they do so), because this is done as part of a larger process. This means that if they upgrade and don't update the Tomcat config, an HTTPS request will arrive at nginx, which will proxy it over HTTP to Tomcat, which will reject the request because it is not HTTPS.

    So I can't force an update to the Tomcat XML, and I have to use HTTP between nginx and Tomcat. Any ideas? Is there some way I can affect how Tomcat reads its config in version 2 so that it ignores the HTTPS-only section?
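
    A pattern that may apply here (hedged; it assumes the HTTPS-only rule hinges on Tomcat's secure-connection check, and that RemoteIpValve is available, as it is from Tomcat 6.0.24 on): have nginx forward the original scheme and let the valve mark proxied requests as secure, so plain-HTTP traffic from the proxy is no longer rejected.

        # nginx: pass the original scheme to the backend
        proxy_set_header X-Forwarded-Proto $scheme;

    and on the Tomcat side:

        <!-- server.xml: treat requests carrying the proxy's header as secure -->
        <Valve className="org.apache.catalina.valves.RemoteIpValve"
               protocolHeader="X-Forwarded-Proto" />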

  • nginx: how do I add new site/server_name in nginx?

    - by Neo
    I'm just starting to explore nginx on my Ubuntu 10.04 machine. I installed nginx and I'm able to get the "Welcome to nginx" page on localhost. However, I'm not able to add a new server_name, even when I make the changes in the sites-available/default file. I've tried reloading/restarting nginx, but nothing works.

    One interesting observation: "http://mycomputername" works in the browser, so somehow a directive like 'server_name $hostname' is overriding my rule somewhere.

    File: sites-available/mine.enpass

        server {
            listen 80;
            server_name mine.enpass;
            access_log /var/log/nginx/localhost.access.log;
            location / {
                root /var/www/nginx-default;
                index index.html index.htm;
            }
        }

    File: nginx.conf

        user www-data;
        worker_processes 1;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
            # multi_accept on;
        }

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            sendfile on;
            #tcp_nopush on;
            #keepalive_timeout 0;
            keepalive_timeout 65;
            tcp_nodelay on;
            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }
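
    Two things worth checking, given the config shown (hedged: assuming the paths above are accurate). nginx.conf only includes sites-enabled/*, so a file that exists only in sites-available is never read; it needs a symlink. And the test hostname has to resolve to the machine, which for a made-up name like mine.enpass means an /etc/hosts entry:

        # Enable the site and reload nginx
        sudo ln -s /etc/nginx/sites-available/mine.enpass /etc/nginx/sites-enabled/mine.enpass
        sudo /etc/init.d/nginx reload

        # Make the test name resolve locally
        echo "127.0.0.1 mine.enpass" | sudo tee -a /etc/hosts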
