Search Results

Search found 41598 results on 1664 pages for 'segmentation fault'.

Page 292/1664 | < Previous Page | 288 289 290 291 292 293 294 295 296 297 298 299  | Next Page >

  • What am I doing wrong with this cat 6 patch panel wiring?

    - by Max Hodges
    The top number is the transmitter and the bottom number is the remote terminator: 12345678 / 36145278. Is this because I could be mixing T568A and T568B wiring? How do I know if my patch cord is A or B? Do I just look at the plug and match it up with the diagram on the back of the panel somehow? EDIT: I read that 36145278 indicates a crossover cable, but I'm not trying to make a crossover. Where did I go wrong? I'm guessing the cable plug is T568A but I wired it to the panel using T568B, so I need to redo it as T568A. But in the future, how do I know if a cable is A or B? Cheers!
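
    For reference, a tester reading of 36145278 against a straight 12345678 means pins 1 and 3 and pins 2 and 6 are swapped, which is exactly the crossover pattern produced by terminating one end T568A and the other T568B. A quick way to identify a cord's standard is the wire colour on pin 1 (the standard TIA/EIA assignments, shown here as a reference sketch):

        Pin:    1       2    3       4    5       6    7       8
        T568A:  wh/grn  grn  wh/org  blu  wh/blu  org  wh/brn  brn
        T568B:  wh/org  org  wh/grn  blu  wh/blu  grn  wh/brn  brn

    If pin 1 of the plug is white/green the cord is T568A; if it is white/orange it is T568B. Both ends (plug and panel) must use the same standard for a straight-through cable.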

    Read the article

  • How to push configurations to multiple Cisco Switches

    - by nixda
    Assume I have around 50 Cisco IE2000 switches connected together, and I want to reconfigure some settings - the same settings on every switch. Normally I would open a command-line session via PuTTY and paste the commands, but as the number of switches grows, even this method takes too long. I am aware of Kiwi CatTools; unfortunately it's not free, so I'm wondering if there are other efficient ways to configure a large number of Cisco switches.
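
    One free approach is to script the sessions. The sketch below uses Python with the netmiko library; the host list, credentials, and commands are placeholders, and it assumes SSH is enabled on the switches and a recent netmiko release.

        from netmiko import ConnectHandler

        # Hypothetical inventory and settings - replace with your own.
        switches = ["10.0.0.%d" % i for i in range(1, 51)]
        commands = [
            "ntp server 10.0.0.250",
            "logging host 10.0.0.251",
        ]

        for host in switches:
            conn = ConnectHandler(
                device_type="cisco_ios",
                host=host,
                username="admin",
                password="secret",
            )
            # send_config_set enters config mode, applies the lines, and exits
            output = conn.send_config_set(commands)
            conn.save_config()  # equivalent of 'write memory'
            conn.disconnect()
            print(host, "done")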

    Read the article

  • Server has 2 PSUs - can I turn on only 1 PSU to reduce colocation cost?

    - by Earl
    I just got a server and want to colocate it in a datacenter. Server details: HP DL380, 2x Intel Xeon (3.06 GHz/533, 512 KB L2 cache), 8x fans, rack form factor (2U), 2x 400 W power supplies. Since the server has two PSUs, can I power on only one of them to reduce colocation cost? Will the server still run reliably? The standard colocation packages in my city include 400 W of power by default; an additional 400 W costs about $40-60 more per month. Please give suggestions from your experience.

    Read the article

  • Windows Server Backup 2012 error - "The version does not support this version of the file format."

    - by Cylindric
    I'm trying out Windows Server 2012, specifically Windows Server Backup, as it would be a very useful way to see us through until we can upgrade our 'real' backup system. I'm backing up to a network share, and a Server 2008 WSB job works fine. On the 2012 server, however, I get an error: Backup of volume \\?\Volume{298d1a7d-f80f-11e1-93e8-806e6f6e6963}\ has failed. The version does not support this version of the file format. It's a VHDX written by WSB itself, so I'm not sure which version it's complaining about. I can see a whole bunch of files in the destination, so I don't think it's a simple authentication issue, but only about 8 MB of the VHDX gets written.

    Read the article

  • jboss 5.1 mysql connection pooling

    - by boyd4715
    I am using JBoss 5.1.0.GA, MySQL 5.5 and Hibernate 3.3.1.GA (included with JBoss) + Spring. My question is: do I need to add c3p0 as a data source in my Spring/Hibernate configuration for connection pooling, or are the settings in the JBoss mysql-ds.xml enough? My mysql-ds.xml is the following:

        <datasources>
          <local-tx-datasource>
            <jndi-name>MySqlDS</jndi-name>
            <connection-url>jdbc:mysql://localhost:3306/ecotrak</connection-url>
            <driver-class>com.mysql.jdbc.Driver</driver-class>
            <user-name>ecotrak</user-name>
            <password>ecotrak</password>
            <min-pool-size>5</min-pool-size>
            <max-pool-size>20</max-pool-size>
            <idle-timeout-minutes>5</idle-timeout-minutes>
            <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.MySQLExceptionSorter</exception-sorter-class-name>
            <!-- should only be used on drivers after 3.22.1 with "ping" support -->
            <valid-connection-checker-class-name>org.jboss.resource.adapter.jdbc.vendor.MySQLValidConnectionChecker</valid-connection-checker-class-name>
            <!-- sql to call when connection is created
            <new-connection-sql>some arbitrary sql</new-connection-sql> -->
            <!-- sql to call on an existing pooled connection when it is obtained from pool
                 (MySQLValidConnectionChecker is preferred for newer drivers)
            <check-valid-connection-sql>some arbitrary sql</check-valid-connection-sql> -->
            <!-- corresponding type-mapping in the standardjbosscmp-jdbc.xml (optional) -->
            <metadata>
              <type-mapping>mySQL</type-mapping>
            </metadata>
          </local-tx-datasource>
        </datasources>
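
    If Spring obtains its DataSource from JNDI rather than constructing its own, the JBoss pool above is what gets used and no c3p0 is needed. A minimal sketch of that wiring (bean names are placeholders; assumes the Spring jee namespace is declared in the context file):

        <jee:jndi-lookup id="dataSource" jndi-name="java:/MySqlDS"/>

        <bean id="sessionFactory"
              class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
            <property name="dataSource" ref="dataSource"/>
        </bean>

    JBoss binds local-tx datasources under java:/ by default, so the lookup name matches the jndi-name in mysql-ds.xml.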

    Read the article

  • What cloud backup solution supports a "backup server to cloud" configuration?

    - by Gepeto
    What online backup tool allows you to: A) back up Windows, Linux and optionally Mac desktops and servers to the cloud; B) do so by first backing up to a central server or appliance; C) allow restoring from that appliance when possible, and only go to the cloud when it isn't? So far the best option I have seen is i365 by Seagate, with an appliance between the local computers and the cloud. I know Microsoft also has an i365 plugin for DPM, as well as an Iron Mountain plugin. However, I feel that there must be a simpler way to do this. Can any of the "simpler" solutions (Jungle Disk or anything else going to S3, Mozy, Carbonite, CrashPlan, etc.) do this? Thank you

    Read the article

  • VisualSVN Server won't work with AD, will with local accounts

    - by frustrato
    We decided recently to switch VisualSVN from local users to AD users so we could easily add other employees. I added myself, gave Read/Write privileges across the whole repo, and then tried to log in. Whether I use TortoiseSVN or the web client, I get a 403 Forbidden error: "You don't have permission to access /svn/main/ on this server." I Googled a bit, but only found mention of phantom groups in the authz file, and I don't have any of those. Any ideas? It works just fine with local accounts. EDIT: I don't know why I didn't try this earlier, but adding the domain before the username makes it work, i.e. MAIN/Bob. This normally only matters when there are conflicting usernames (one local, one in AD), but for whatever reason it's required here too. Kinda silly, but I can live with it.

    Read the article

  • Setting default version on Azure Blob Storage?

    - by Erik
    What is the easiest way, without having to create your own utility, to set the default service version to the latest in Azure Blob Storage? http://msdn.microsoft.com/en-us/library/windowsazure/dd894041 There is basically nothing for this in the Azure portal, and I am having a difficult time finding working utilities. For some reason Azure defaults to the oldest version, which does not send things like the HTTP Range header, for example. Is there any utility that can do this? Thank you.
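
    Short of a ready-made utility, it is a few lines of code against the storage SDKs. A sketch using the Azure Storage SDK for Java (the connection string and version string are placeholders; the .NET client exposes the equivalent Get/SetServiceProperties calls):

        import com.microsoft.azure.storage.CloudStorageAccount;
        import com.microsoft.azure.storage.ServiceProperties;
        import com.microsoft.azure.storage.blob.CloudBlobClient;

        public class SetDefaultVersion {
            public static void main(String[] args) throws Exception {
                CloudStorageAccount account = CloudStorageAccount.parse(
                        "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...");
                CloudBlobClient client = account.createCloudBlobClient();

                // Fetch current service properties, bump the default version, write back.
                ServiceProperties props = client.downloadServiceProperties();
                props.setDefaultServiceVersion("2012-02-12"); // pick the latest documented version
                client.uploadServiceProperties(props);
            }
        }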

    Read the article

  • How to uninstall MariaDB and re-install MySQL? MySQL install turns into MariaDB install

    - by Suma
    I recently upgraded my CentOS system via the desktop. Mistake! I had MariaDB and phpMyAdmin working just fine before, but after the upgrade they stopped. I frantically googled and tried to follow some tutorials about MariaDB/MySQL reinstalls until I came to this one: http://centosforge.com/node/how-replace-mysql-mariadb-centos-6-including-mysql-uninstall-instructions-and-yum-install I executed this command to remove all of MySQL:

        yum remove mysql-server mysql-libs mysql-devel mysql*

    and then tried to reinstall MySQL as below - it fails with errors as follows:

        [root@localhost ~]# yum install mysql-server mysql mysql-devel
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * base: centos.serverspace.co.uk
         * extras: centos.serverspace.co.uk
         * rpmforge: www.mirrorservice.org
         * updates: mirror.rmg.io
        Setting up Install Process
        Package mysql-server is obsoleted by MariaDB-server, trying to install MariaDB-server-5.5.29-1.i686 instead
        Package mysql is obsoleted by MariaDB-server, trying to install MariaDB-server-5.5.29-1.i686 instead
        Package mysql-devel is obsoleted by MariaDB-devel, trying to install MariaDB-devel-5.5.29-1.i686 instead
        Resolving Dependencies
        --> Running transaction check
        ---> Package MariaDB-devel.i686 0:5.5.29-1 set to be updated
        --> Processing Dependency: MariaDB-common for package: MariaDB-devel
        ---> Package MariaDB-server.i686 0:5.5.29-1 set to be updated
        --> Processing Dependency: libssl.so.10 for package: MariaDB-server
        --> Processing Dependency: libcrypto.so.10 for package: MariaDB-server
        --> Running transaction check
        ---> Package MariaDB-common.i686 0:5.5.29-1 set to be updated
        --> Processing Dependency: MariaDB-compat for package: MariaDB-common
        ---> Package MariaDB-server.i686 0:5.5.29-1 set to be updated
        --> Processing Dependency: libssl.so.10 for package: MariaDB-server
        --> Processing Dependency: libcrypto.so.10 for package: MariaDB-server
        --> Running transaction check
        ---> Package MariaDB-compat.i686 0:5.5.29-1 set to be updated
        ---> Package MariaDB-server.i686 0:5.5.29-1 set to be updated
        --> Processing Dependency: libssl.so.10 for package: MariaDB-server
        --> Processing Dependency: libcrypto.so.10 for package: MariaDB-server
        --> Finished Dependency Resolution
        MariaDB-server-5.5.29-1.i686 from mariadb has depsolving problems
        --> Missing Dependency: libcrypto.so.10 is needed by package MariaDB-server-5.5.29-1.i686 (mariadb)
        MariaDB-server-5.5.29-1.i686 from mariadb has depsolving problems
        --> Missing Dependency: libssl.so.10 is needed by package MariaDB-server-5.5.29-1.i686 (mariadb)
        Error: Missing Dependency: libcrypto.so.10 is needed by package MariaDB-server-5.5.29-1.i686 (mariadb)
        Error: Missing Dependency: libssl.so.10 is needed by package MariaDB-server-5.5.29-1.i686 (mariadb)
         You could try using --skip-broken to work around the problem
         You could try running: package-cleanup --problems
                                package-cleanup --dupes
                                rpm -Va --nofiles --nodigest
        [root@localhost ~]

    If I now try to install libssl.so.10, I get asked to install glibc libraries 2.17 and 2.7. Other discussions have said to stay clear of these, as that would break my system; I tried downloading 2.17 anyway and it's huge - it took ages to unzip. Could someone please help me completely remove MariaDB and install MySQL, so that I don't get the above errors and get pushed over to MariaDB when I run yum install mysql-server mysql mysql-devel? There is a ton of material on how to install MariaDB, but nothing I have found so far plainly explains how to go backwards to MySQL.
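
    The usual culprit here is a leftover third-party MariaDB repository whose packages declare that they obsolete MySQL, so yum keeps substituting them. A hedged sketch of the cleanup (the repo file name is a guess - check /etc/yum.repos.d/ for whatever the tutorial added):

        # see which repo the MariaDB packages come from
        yum list MariaDB-server

        # remove any installed MariaDB packages
        yum remove 'MariaDB-*'

        # drop the repo definition, then clear cached metadata
        rm /etc/yum.repos.d/mariadb.repo   # hypothetical file name
        yum clean all

        # with the repo gone, the stock CentOS packages should resolve again
        yum install mysql-server mysql mysql-devel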

    Read the article

  • Monit unable to start sidekiq on Opsworks server

    - by webdevtom
    I have used AWS OpsWorks to create some servers. I have Sidekiq running as part of my Rails application, and when I deploy, Sidekiq restarts nicely. I am configuring Monit to watch the pid and start and stop Sidekiq if there are any issues. However, when Monit tries to start Sidekiq, I see that the wrong Ruby appears to be used:

        Oct 17 13:52:43 daitengu sidekiq: /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler/definition.rb:361:in `validate_ruby!': Your Ruby version is 1.8.7, but your Gemfile specified 1.9.3 (Bundler::RubyVersionMismatch)
        Oct 17 13:52:43 daitengu sidekiq: from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler.rb:116:in `setup'
        Oct 17 13:52:43 daitengu sidekiq: from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler/setup.rb:17

    When I run the command from the CLI, Sidekiq launches correctly:

        $> cd /srv/www/myapp/current && RAILS_ENV=production nohup /usr/local/bin/bundle exec sidekiq -C config/sidekiq.yml >> /srv/www/myapp/shared/log/sidekiq.log 2>&1 &
        $> ps -aef | grep sidekiq
        root 1236 1235 8 20:54 pts/0 00:00:50 sidekiq 2.11.0 myapp [0 of 25 busy]

    My sidekiq.monitrc file:

        check process unicorn with pidfile /srv/www/myapp/shared/pids/unicorn.pid
          start program = "/bin/bash -c 'cd /srv/www/myapp/current && /usr/local/bin/bundle exec unicorn_rails --env production --daemonize -c /srv/www/myapp/shared/config/unicorn.conf'"
          stop program = "/bin/bash -c 'kill -QUIT `cat /srv/www/myapp/shared/pids/unicorn.pid`'"
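
    Monit starts programs with a minimal environment and PATH, so it can end up running the system Ruby (1.8.7) instead of the /usr/local one your login shell finds. A sketch of a Sidekiq check that pins the environment explicitly (paths and flags are assumptions modelled on the unicorn entry above; sidekiq 2.x supports -P for a pidfile and -d to daemonize):

        check process sidekiq with pidfile /srv/www/myapp/shared/pids/sidekiq.pid
          start program = "/bin/bash -lc 'cd /srv/www/myapp/current && PATH=/usr/local/bin:$PATH RAILS_ENV=production bundle exec sidekiq -C config/sidekiq.yml -P /srv/www/myapp/shared/pids/sidekiq.pid -d >> /srv/www/myapp/shared/log/sidekiq.log 2>&1'"
          stop program = "/bin/bash -c 'kill -TERM `cat /srv/www/myapp/shared/pids/sidekiq.pid`'"

    The -l flag makes bash a login shell so the usual profile PATH is loaded; prepending /usr/local/bin is a belt-and-braces guard on top of that.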

    Read the article

  • What is the best PowerShell script to restore an SQL database?

    - by EtienneT
    To restore an SQL Server 2008 database, I would like to be able to just do something like this in PowerShell: ./restore.ps1 DatabaseName.bak The script would, by convention, restore it to a database named "DatabaseName". It would disconnect any users connected to the database so that it can restore it, and it would store the mdf and ldf files in the default location. This would mainly be for development on my personal machine - just a quick way to restore a DB. Does anyone have such a script? Thanks
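
    A minimal sketch along those lines, assuming Invoke-Sqlcmd is available (SQLPS module/snap-in), a default local instance, and a full path to the .bak file; it is not production-hardened:

        param([Parameter(Mandatory=$true)][string]$BackupFile)

        # By convention, the database name is the backup file name minus its extension.
        $dbName = [System.IO.Path]::GetFileNameWithoutExtension($BackupFile)
        $server = "."   # local default instance

        # Kick off connected users (if the DB exists), restore over it, then reopen it.
        Invoke-Sqlcmd -ServerInstance $server -Query ("IF DB_ID(N'$dbName') IS NOT NULL " +
            "ALTER DATABASE [$dbName] SET SINGLE_USER WITH ROLLBACK IMMEDIATE")
        Invoke-Sqlcmd -ServerInstance $server -QueryTimeout 0 -Query (
            "RESTORE DATABASE [$dbName] FROM DISK = N'$BackupFile' WITH REPLACE")
        Invoke-Sqlcmd -ServerInstance $server -Query (
            "ALTER DATABASE [$dbName] SET MULTI_USER")

    With no MOVE clause, RESTORE places the mdf/ldf wherever the backup's file paths point; on a single dev machine that is normally the default data directory.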

    Read the article

  • Ubuntu in VirtualBox File Modified Time in Future and PHP slow file operations

    - by user1750
    For some reason, some of my files have a last-modified date in the future. In addition to this, file operations in PHP are SUPER slow. For example, rebuilding the Symfony2 cache can take over 40 seconds (it takes 1-2 on my MacBook Pro). The timestamp for ListingsCRUDController.php just displays as "2012", so to see the date more clearly I ran ls --time-style="full-iso" -l; it shows that the file's last-modified date is ~5 hours in the future relative to the system time. To make things more confusing, the system will intermittently speed up: suddenly my app will start serving requests in 1-2 seconds (down from 40) for no apparent reason - I don't change anything in my code or system config. Also, during a slow PHP request, the php5-fpm process (nginx) uses 100% of the CPU for the duration of the request. This is the second VM this has happened on, and I need to know why; it has become unusable. Information about my setup: VirtualBox 4.2.0; host: MacBook Pro; guest: Ubuntu Server 12.04; package dkms is installed; timezones match for Ubuntu and PHP. Things I've tried: both Apache and Nginx; APC enabled and disabled; Xdebug enabled and disabled; 1 to 4 processors; 1 GB to 4 GB of memory; installing Ubuntu with both the regular kernel and the VM kernel.
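
    One hedged avenue worth checking: by default the Guest Additions time-sync service only gently slews small offsets, so a clock that has jumped far ahead can stay wrong for a long time. VirtualBox exposes guest properties to make it hard-set the clock instead (run on the host; "Ubuntu Server" is a placeholder VM name, and VBoxService must be restarted or the VM rebooted afterwards):

        # allow the guest service to step the clock when drift exceeds 1000 ms
        VBoxManage guestproperty set "Ubuntu Server" \
          "/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold" 1000

        # also re-set the clock after snapshot restore / resume
        VBoxManage guestproperty set "Ubuntu Server" \
          "/VirtualBox/GuestAdd/VBoxService/--timesync-set-on-restore" 1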

    Read the article

  • Content server backups

    - by Dan Sosedoff
    What is the best way to back up data on content servers? For example, I have 15 servers that just hold content - no applications run on them. Each server has a 250 GB hard drive, so it's a pretty big amount of data. All of the data is externally accessible (via HTTP). So the question is: what methodology is best in my case? The most useful method I know is cross-backup, where each server holds its own data plus a backup of one other server - but that significantly reduces total capacity. RAID?

    Read the article

  • Configure Nagios To Alert Depending On Host Group That Service Alert Originates From

    - by StrangeWill
    So, my setup: services are shared between all hosts (CPU/RAM/disk/services), and hosts are split into two main groups, "Production" and "Development". We have two contact groups, also "Production" and "Development". Say my development SQL server runs low on RAM: I want it to alert only those in the "Development" contact group (this service is of course assigned to a host in the "Development" host group, using the shared RAM-monitoring service). I'm pretty much stumped on this. I can't configure it at the service level (services are shared there), and I can't seem to get escalations to do it for me either. Do I need to use service groups along with escalations and bite the bullet on building that list? Or am I missing something stupidly simple? I'm using Centreon for configuration, if that helps.
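
    One common pattern in plain Nagios is to keep the check shared but define the service once per host group, overriding only contact_groups. A sketch (object and command names are placeholders for whatever your Centreon templates generate):

        define service {
            use                  generic-service
            hostgroup_name       production
            service_description  RAM
            check_command        check_ram
            contact_groups       production
        }

        define service {
            use                  generic-service
            hostgroup_name       development
            service_description  RAM
            check_command        check_ram
            contact_groups       development
        }

    The two definitions reuse the same template and check command, so the only duplication is the routing decision itself.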

    Read the article

  • WS2008 NTP - Using time.windows.com,0x9 - Time always skewed forwards

    - by David
    I have a domain controller configured to use time.windows.com (with the 0x9 flags set). I've noticed that the system's clock is frequently fast - by anywhere from 10 to 45 minutes - and I always have to keep resetting the date/time back to what it should be. When I run w32tm /query /source it tells me it's using time.windows.com, and obviously I trust Microsoft not to serve incorrect times, so why is my server's clock fast? EDIT: There are a few Time-Service events in the System log. These two appear in pairs every hour or so, with event 142 appearing 14 to 16 minutes after 139:

        Event ID: 142
        Message: The time service has stopped advertising as a time source because the local clock is not synchronized.

        Event ID: 139
        Message: The time service has started advertising as a time source.

    Going back a few months, these events each appear once, back in October:

        Event ID: 35
        Message: The time service is now synchronizing the system time with the time source time.windows.com,0x9 (ntp.m|0x9|0.0.0.0:123-65.55.21.21:123).

        Event ID: 37
        Message: The time provider NtpClient is currently receiving valid time data from time.windows.com,0x9 (ntp.m|0x9|0.0.0.0:123-65.55.21.21:123).

        Event ID: 47
        Message: Time Provider NtpClient: No valid response has been received from manually configured peer time.windows.com,0x9 after 8 attempts to contact it. This peer will be discarded as a time source and NtpClient will attempt to discover a new peer with this DNS name. The error was: The time sample was rejected because: The peer is not synchronized, or it has been too long since the peer's last synchronization.
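
    A few diagnostics and a hedged remedy that are commonly suggested in this situation (the pool.ntp.org peers are examples; any reliable NTP source will do):

        :: how far off are we, and who are we really syncing from?
        w32tm /query /status
        w32tm /stripchart /computer:pool.ntp.org /samples:5

        :: point at explicit NTP peers and force a resync
        w32tm /config /manualpeerlist:"0.pool.ntp.org,0x8 1.pool.ntp.org,0x8" /syncfromflags:manual /update
        w32tm /resync /rediscover

    The stripchart output shows the live offset against a known-good source, which quickly distinguishes "bad upstream time" from "local clock drifting faster than the sync interval can correct".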

    Read the article

  • Variable size encrypted container

    - by Cray
    Is there an application similar to TrueCrypt, but one that can make variable-size containers, as opposed to the fixed-size (or growing-only-up-to-a-limit) containers TrueCrypt can make? I want this container to be mountable as a drive/folder, with the size of the outer container not much different from the total size of the files I put into the mounted folder, while still providing strong encryption. To put it another way: I want a program like TrueCrypt that not only automatically grows the container when I add files, but also shrinks it when files are deleted. I know there are issues, and it could not work 100% like TrueCrypt, because TrueCrypt operates at the sector level of the disk and leaves all filesystem control to the OS - so when I remove a file, the data might simply be left in place, or fragmentation might prevent truncating the volume. But perhaps a program could be built another way: instead of providing a sector-level interface, it would provide a filesystem-level interface - a filesystem inside a file that supports shrinking when files are deleted.

    Read the article

  • Looking for zsh completion file for osX native commands

    - by Chiggsy
    I've been digging deep into what actually comes with OS X in /usr/bin and especially /usr/libexec. Quite good stuff really, although the command syntax is a bit... odd. Let me direct the curious to the command that made me think of this: networksetup -printcommands I cannot think of a command that better illustrates the need for good completion. security -h perhaps, but those commands have a familiar, easy-to-read format. I beseech the community: please point me to a place where I can find such a thing. I never type these commands right, and I ache for tab completion for them. Anyone have any idea where I could grab something? I'd prefer to stand on the shoulders of giants rather than try to make a zsh/bash completion script leap into the world, ready for battle, like Athena, from my forehead. I am no Zeus when it comes to compctl. Not at all.

    Read the article

  • Using Android 2.0 with Exchange server behind firewall

    - by Yurik
    BlackBerry has an enterprise server that works with MS Exchange and pushes all updates to users' BlackBerry devices. We are considering switching to Android phones, which supposedly support Exchange servers. Is there a similar program for Android? Are there any configuration changes that have to be made on the Exchange server? (Exchange is behind the corporate firewall.)

    Read the article

  • puppet duplicate resources and virtual resources

    - by user45097
    Overview: I've just started using Puppet and have been unable to suss something out. Problem: because of normalization, when I add two classes to a node whose packages share a dependency, the catalog fails with a duplicate resource - in this case the package libssl. (Note: packages are being held to prevent the latest packages from being installed.) Question: what's the best-practice way to get around this?

        class ssh {
          package { 'openssh-server':
            ensure  => installed,
            require => Package['libssl'],
          }
          package { 'libssl':
            ensure => installed,
          }
        }

        class apache {
          package { 'apache':
            ensure  => installed,
            require => Package['libssl'],
          }
          package { 'libssl':
            ensure => installed,
          }
        }

        node server {
          include apache
          include ssh
        }
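
    As the title hints, the usual fix is a virtual resource: declare the shared package once with @, then realize it in every class that needs it - realizing the same virtual resource from several classes is not a duplicate declaration. A sketch (file layout is illustrative):

        # site.pp (or any always-evaluated manifest): declare once, virtually
        @package { 'libssl': ensure => installed }

        class ssh {
          realize Package['libssl']
          package { 'openssh-server':
            ensure  => installed,
            require => Package['libssl'],
          }
        }

        class apache {
          realize Package['libssl']
          package { 'apache':
            ensure  => installed,
            require => Package['libssl'],
          }
        }

    An alternative with the same effect: put the package in its own class (class libssl { package { 'libssl': ensure => installed } }) and include it from both, since include is also idempotent.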

    Read the article

  • HA Proxy won't load balance my web requests. What have I done wrong?

    - by Josh Smeaton
    I've finally got HAProxy set up and running the way I think I want, but it is not load balancing the web requests it receives: all requests are currently being forwarded to the first server in the cluster. I'm going to paste my configuration below - if anyone can see where I may have gone wrong, I'd appreciate it. This is my first stab at configuring web servers in a *nix environment. First up, I have HAProxy running on the same host as the first server in the Apache cluster. We are moving these servers to virtual later on, and they will have different virtual hosts, but I wanted to get this running now. Both web servers are receiving their health checks and reporting back correctly; the haproxy?stats page correctly reports servers that are up and down (I've tested this by altering the name of the file that is checked). I haven't put any load onto these servers yet - I've just opened up the URLs in several tabs (private browsing) and had several co-workers hit the URL too. All of the traffic goes to WEB1. Am I balancing incorrectly?

        global
            maxconn 10000
            nbproc 8
            pidfile /var/run/haproxy.pid
            log 127.0.0.1 local0 debug
            daemon

        defaults
            log global
            mode http
            retries 3
            option redispatch
            maxconn 5000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen WEBHAEXT :80,:8443
            mode http
            cookie sessionbalance insert indirect nocache
            balance roundrobin
            option httpclose
            option forwardfor except 127.0.0.1
            option httpchk HEAD health_check.txt
            stats enable
            stats auth rah:rah
            server WEB1 10.90.2.131:81 cookie WEB_1 check
            server WEB2 10.90.2.130:80 cookie WEB_2 check
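
    One thing worth ruling out before changing the config: with cookie sessionbalance insert indirect, any client that already holds the cookie is deliberately pinned to one server, so browser-based testing can look completely unbalanced. A quick cookie-free check from a shell (the hostname is a placeholder):

        # each fresh request should alternate Set-Cookie: sessionbalance=WEB_1 / WEB_2
        for i in 1 2 3 4; do
          curl -s -o /dev/null -D - http://your-haproxy-host/ | grep -i set-cookie
        done

    If the Set-Cookie value alternates here, round-robin is working and the stickiness you saw was just persistence doing its job.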

    Read the article

  • Does Hyper-V support SCSI Pass-through discs in a Server 2003 R2 VM?

    - by Peter Bernier
    I'm running into some difficulties getting pass-through disks to be accessible to a Server 2003 R2 virtual machine under Hyper-V. Host OS: Server 2008 R2 (full) with the Hyper-V role. Guest OS: Server 2003 R2 (Windows Home Server). The guest's OS disk is a pass-through disk on the IDE controller (not the best solution, but I can live with it); my storage disks will be pass-through disks on the SCSI controller. I'm able to see all of the disks I'll be using for the VM on the host without issue. The problem is that I can't get the guest OS to 'see' the storage drives as pass-through disks on the SCSI controller. Here's what I'm doing: on the host, the storage drive is set to 'Offline', just like the OS disk (this is required for pass-through to work); in the VM, the storage drive is on the SCSI controller; and the Hyper-V Integration Tools are installed in the guest. That's as far as I'm able to get. I don't see the drive in Computer Management or in Windows Explorer (I've tried with an unformatted disk, as well as after formatting a partition). I can see a removable device in the guest that lists the disk's model number, but I can't access the storage - I get an entry in Device Manager that needs drivers, but nothing on the Integration Tools disc works. Troubleshooting steps I've tried: if I put the pass-through drive on the IDE controller, I can see it in the guest; if I put the storage drive 'Online' on the host and create a VHD on it on the SCSI controller, I can see that in the guest. I suppose I could create a fixed-size VHD that consumes the entire disk, but I'd rather not have that overhead. I've also extracted the contents of the Integration Tools drivers (x86 and amd64) and tried pointing the disk controller to each of those, with no luck. Can anyone offer suggestions as to how I can get this to work properly?

    Read the article

  • General logging won't work in MySQL

    - by leonstr
    I saw on SF that there's an option in MySQL to log all queries. In my version (mysql-server-5.0.45-7.el5 on CentOS 5.2) this appears to be a case of enabling the 'log' option, so I edited /etc/my.cnf to add this:

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        old_passwords=
        log=/var/log/mysql-general.log

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

    I then created the file and set permissions:

        # touch /var/log/mysql-general.log
        # chown mysql. /var/log/mysql-general.log
        # ls -l /var/log/mysql-general.log
        -rw-r--r-- 1 mysql mysql 0 Jan 18 15:22 /var/log/mysql-general.log

    But when I start mysqld I get:

        120118 15:24:18 mysqld started
        /usr/libexec/mysqld: File '/var/log/mysql-general.log' not found (Errcode: 13)
        120118 15:24:18 [ERROR] Could not use /var/log/mysql-general.log for logging (error 13). Turning logging off for the whole duration of the MySQL server process. To turn it on again: fix the cause, shutdown the MySQL server and restart it.
        120118 15:24:18 InnoDB: Started; log sequence number 0 182917764
        120118 15:24:18 [Note] /usr/libexec/mysqld: ready for connections.

    Can anyone suggest why this isn't working?
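
    Errcode 13 is EACCES even though the Unix permissions look correct, which on CentOS usually points at SELinux: the default policy only lets mysqld write to locations labelled for it. A hedged check and fix (mysqld_log_t is the stock label for MySQL log files):

        # is SELinux enforcing, and what label does the file carry?
        getenforce
        ls -Z /var/log/mysql-general.log

        # relabel the log so mysqld may write to it, then restart mysqld
        chcon -t mysqld_log_t /var/log/mysql-general.log
        service mysqld restart

    Placing the log under /var/lib/mysql, which is already labelled for mysqld, is another common workaround.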

    Read the article
