Search Results

Search found 73305 results on 2933 pages for 'copy run start'.

  • Is my Cisco switch port bad?

    - by ewwhite
    I've been chasing a packet-loss and network stability issue for a handful of end-users on an internal network for the past few days... These issues surfaced last week, however the location was struck by lightning six weeks ago. I was seeing 5-10% packet loss between a stack of four Cisco 2960s and several PCs and phones on the other side of a 77-meter run. The PCs were run inline with the phones over a trunked link (switchport configuration pastebin). We were seeing dropped calls and interruptions in client-server applications and Microsoft Exchange connectivity. I tried the usual troubleshooting steps remotely, having a local technician do the following during breaks in user and production activity:
    - Change cables between the wall jack and device.
    - Change patch cables between the patch panel and switch port(s).
    - Try different switch ports within the 2960 stack.
    - Change end-user devices with known-good equipment (new phones, different PCs).
    - Clear switch port interface counters and monitor incrementing errors closely. (Pastebin output of sh int)
    - Pore over the device logs and Observium RRD graphs. No link up/down issues from the switch side.
    - Change power strips on the end-user side.
    - Test cable runs from the Cisco 2960 using test cable-diagnostics tdr int Gi4/0/9. (clean)*
    - Test cable runs with a Tripp-Lite cable tester. (clean)
    - Run diagnostics on the switch stack members. (clean)
    In the end, it took three changes of switch ports to find a stable solution. The only logical conclusion is that a few Cisco 2960 switch ports are bad or flaky... Not dead, but not consistent in behavior either. I'm not used to seeing individual ports die in this manner. What else can I test or check to determine if these devices are bad? Is it common for single ports to have problems, rather than a contiguous bank of ports?
    BTW - show cable-diagnostics tdr int Gi4/0/14 is very cool...
        Interface Speed Local pair Pair length        Remote pair Pair status
        --------- ----- ---------- ------------------ ----------- -----------
        Gi4/0/14  1000M Pair A     79 +/- 0 meters    Pair B      Normal
                        Pair B     75 +/- 0 meters    Pair A      Normal
                        Pair C     77 +/- 0 meters    Pair D      Normal
                        Pair D     79 +/- 0 meters    Pair C      Normal

    Read the article

  • WEB based HPC cluster node management

    - by Skuja
    Hello, I am working on my school diploma thesis. The main goal is to create a web-based application where logged-in users could see free and busy nodes, turn them on and off, see what processes they are running, etc. I figured I could do something like this - write some cron daemon that would run every 30 seconds or so, run the ping utility for each node to find out whether it is on or off, then write the results to some file. Then from my web app (which I will write in PHP) I could read that info. Would that be a good solution? How would you suggest I do it? And finally, are there any existing solutions (not necessarily web based) for management of cluster nodes?
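    As a rough illustration of the cron-driven poller described above, here is a minimal sketch; the node list file, status file path, and script name are assumptions made for the example, not anything from the question:

        #!/bin/bash
        # poll_nodes.sh - ping every node listed in a file and record which are up.
        NODES_FILE=/etc/cluster/nodes.txt   # assumed: one hostname or IP per line
        STATUS_FILE=/var/tmp/node_status    # assumed: file the PHP app will read
        TMP_FILE=$(mktemp)
        while read -r node; do
            [ -z "$node" ] && continue
            if ping -c 1 -W 1 "$node" > /dev/null 2>&1; then
                echo "$node up" >> "$TMP_FILE"
            else
                echo "$node down" >> "$TMP_FILE"
            fi
        done < "$NODES_FILE"
        mv "$TMP_FILE" "$STATUS_FILE"       # replace in one step so PHP never reads a half-written file

    A crontab entry such as "* * * * * /usr/local/bin/poll_nodes.sh" would run it every minute; cron cannot schedule below one minute, so a true 30-second interval would need a sleep loop inside the script instead.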

    Read the article

  • Working around Gmail mailing-list “feature.”

    - by Paul J. Lucas
    I'm using Google Apps for my domain's e-mail via IMAP. Whenever I send mail to a mailing list, I don't receive a copy of my own mail back in my inbox. According to Google, this is a "feature." Is there a way to disable this "feature" so that all mail I send to mailing lists appears in my inbox just like all other e-mail? Perhaps something along the lines of this method for disabling Google's spam filter??

    Read the article

  • Encoding multiple video streams with a single avconv invocation

    - by automatthias
    I played with avconv on Ubuntu and I'm now able to e.g. record the desktop with sound from a soundcard. One thing I wanted to do was recording two video inputs at the same time, for instance the desktop and from the webcam. I thought about doing something like this:
        avconv \
            -f alsa \
            -i default \
            -acodec flac \
            -f video4linux2 \
            -r 6 \
            -i /dev/video0 \
            -f x11grab \
            -i :0.0 \
            out.mkv
    My thinking was that if you define multiple video inputs, and the .mkv format can handle multiple video streams, avconv will encode 2 video streams and 1 audio stream into one file. But this isn't what happens:
        avconv version 0.8.4-6:0.8.4-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers
          built on Nov  6 2012 16:51:11 with gcc 4.7.2
        [alsa @ 0x1091bc0] capture with some ALSA plugins, especially dsnoop, may hang.
        [alsa @ 0x1091bc0] Estimating duration from bitrate, this may be inaccurate
        Input #0, alsa, from 'default':
          Duration: N/A, start: 1354364317.020350, bitrate: N/A
            Stream #0.0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s
        [video4linux2 @ 0x10923e0] Estimating duration from bitrate, this may be inaccurate
        Input #1, video4linux2, from '/dev/video0':
          Duration: N/A, start: 100607.724745, bitrate: 29491 kb/s
            Stream #1.0: Video: rawvideo, yuyv422, 640x480, 29491 kb/s, 6 tbr, 1000k tbn, 6 tbc
        [x11grab @ 0x107b2a0] device: :0.0+83,87 -> display: :0.0 x: 83 y: 87 width: 854 height: 480
        [x11grab @ 0x107b2a0] shared memory extension found
        [x11grab @ 0x107b2a0] Estimating duration from bitrate, this may be inaccurate
        Input #2, x11grab, from ':0.0+83,87':
          Duration: N/A, start: 1354364318.488382, bitrate: 196761 kb/s
            Stream #2.0: Video: rawvideo, bgra, 854x480, 196761 kb/s, 15 tbr, 1000k tbn, 15 tbc
        Incompatible pixel format 'bgra' for codec 'mpeg4', auto-selecting format 'yuv420p'
        [buffer @ 0x107fcc0] w:854 h:480 pixfmt:bgra
        [avsink @ 0x10bdf00] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out'
        [scale @ 0x10dc680] w:854 h:480 fmt:bgra -> w:854 h:480 fmt:yuv420p flags:0x4
        Output #0, matroska, to '.../out.mkv':
          Metadata:
            encoder         : Lavf53.21.0
            Stream #0.0: Video: mpeg4, yuv420p, 854x480, q=2-31, 4000 kb/s, 1k tbn, 15 tbc
            Stream #0.1: Audio: libvorbis, 48000 Hz, 2 channels, s16
        Stream mapping:
          Stream #2:0 -> #0:0 (rawvideo -> mpeg4)
          Stream #0:0 -> #0:1 (pcm_s16le -> libvorbis)
        Press ctrl-c to stop encoding
        [mpeg4 @ 0x10bd800] rc buffer underflow
        ^Cframe=  160 fps= 15 q=2.0 Lsize=    3414kB time=10.66 bitrate=2623.0kbits/s
        video:3273kB audio:131kB global headers:4kB muxing overhead 0.165600%
        Received signal 2: terminating.
    I'm not sure if it's the question of mapping (some -map options to add?) or that avconv just can't encode more than 1 video stream at one time. So is it an actual avconv limitation, or a limitation of the available containers, or me simply not finding the right combination of command line options?
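    Matroska itself can carry several video streams, and the behaviour above looks like avconv's default stream selection (one video plus one audio stream per output) rather than a hard limit, which is why only the x11grab input made it into out.mkv. Explicit -map options select the streams by hand. A hedged, untested sketch, assuming the same input order as in the log (0 = ALSA audio, 1 = webcam, 2 = desktop); the frame-size and offset arguments are illustrative:

        avconv \
            -f alsa -i default \
            -f video4linux2 -r 6 -i /dev/video0 \
            -f x11grab -r 15 -s 854x480 -i :0.0+83,87 \
            -map 0:0 -map 1:0 -map 2:0 \
            out.mkv

    Each -map takes input_file:stream_index, so this asks for the audio stream plus both video streams in a single Matroska output, leaving codec selection to the defaults.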

    Read the article

  • Clonezilla-SE with another DHCP Server in LAN

    - by aleroot
    I want to install a Clonezilla server (192.168.1.100) in a network that already has a DHCP server (dd-wrt with dnsmasq - 192.168.1.1). I've installed Clonezilla-SE on Ubuntu Server 10.10; once Clonezilla Server was installed and configured, I removed the DHCP server and set the PXE server address in the dnsmasq configuration on the DHCP server:
        dhcp-boot=pxelinux.0,,192.168.1.100
    When I try to start a computer in the network from PXE, Clonezilla starts, but it gives me an error that the IP address of the machine was not given by the Clonezilla server, and it can't continue... Has anyone already tried to configure Clonezilla-SE in a similar environment? Is there some configuration on the DRBL server of Clonezilla that I need to do?

    Read the article

  • Could too low a voltage over long periods damage my computer fan?

    - by Sopalajo de Arrierez
    Computer fans normally run at 12 volts, but most of them today allow 9 volts or even less in order to slow down the fan speed (RPM, revolutions per minute). At too low a voltage the fan stops, but I can see it "trying to start again". For example: my Tacens 9 dB fan stops at about 10 volts, but to start it again 10.5 volts is not enough, and the motor tries to move the fan (I can see a small movement, an "attempt" to move) every 1-5 seconds, but it does not succeed, so the fan stays at 0 RPM. Could that repeated "attempt" to move damage the fan's internal motor when it goes on for hours or days?

    Read the article

  • How to make newly copied files always get 777 permissions

    - by Master
    I have one Linux ext3 partition shared on the network. Now when someone copies files from a Mac, other people can't change those files due to a permission problem. Is there any way to make any newly copied file always get 777 permissions, and have some specific user as the owner of the file rather than the default user? Thanks
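    If the share is exported with Samba, the usual place to solve this is the share definition itself (the create mask and force user options). As a cruder shell-level workaround, a periodic job can normalise ownership and mode after the fact; a rough sketch, where the share path and user name are assumptions for illustration:

        #!/bin/bash
        # fix_share_perms.sh - run from cron to normalise files dropped onto the share.
        SHARE=/srv/share     # assumed mount point of the shared ext3 partition
        OWNER=shareuser      # assumed user who should own everything
        chown -R "$OWNER": "$SHARE"
        chmod -R 777 "$SHARE"

    A crontab line like "*/5 * * * * /usr/local/bin/fix_share_perms.sh" would apply it every five minutes; it is blunt (new files keep their original mode until the next run), but it matches the always-777 behaviour asked for.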

    Read the article

  • dhcrelay running as both DHCP and DHCPv6 relay agent on CentOS 6.2

    - by Tibor
    I am trying to set up a DHCP relay agent that would relay DHCP requests for both IPv4 and IPv6. I am using CentOS 6.2 and the dhcrelay from the ISC DHCP implementation. I would like to set it up as a service, but the man page for dhcrelay states:
        -6   Run dhcrelay as a DHCPv6 relay agent. Incompatible with the -4 option.
        -4   Run dhcrelay as a DHCPv4/BOOTP relay agent. This is the default mode of operation, so the argument is not necessary, but may be specified for clarity. Incompatible with -6.
    It seems that the -6 and -4 options are incompatible. How would I still make it work for both protocols without rolling my own service wrapper for both cases?
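    Because a single dhcrelay process handles only one protocol, the usual pattern is simply to run two instances side by side, one per protocol. A hedged sketch of the idea; the interface names, server addresses, and pid-file paths below are placeholders, not values from the question:

        #!/bin/bash
        # Start one DHCPv4 relay and one DHCPv6 relay as separate processes.
        dhcrelay -4 -i eth0 -pf /var/run/dhcrelay4.pid 192.0.2.10
        dhcrelay -6 -l eth0 -u 2001:db8::10%eth1 -pf /var/run/dhcrelay6.pid

    Whether this lives in a copy of the stock init script or a small wrapper of its own is mostly a packaging question; the key point is two processes, each started with only one of -4 or -6.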

    Read the article

  • Problem in listening to multicast in multihomed Linux server

    - by Lior
    I am trying to write a multicast client on a machine with two NICs, and I can't make it work. I can see with a sniffer that once I start the program, the NIC (eth4) starts receiving the multicast datagrams:
        y.y.y.y (some IP) -> z.z.z.z (multicast IP, not my eth4 NIC IP)
        UDP  Source port: kkk (some other port)  Destination port: xxx (multicast port)
    However, I can't get those packets using my program (listening to port xxx on eth4). I also added:
        route add 224.0.0.0 netmask 240.0.0.0 dev eth4
    I searched the web for some examples/explanations, but it seems like I do what everybody else does. Any help will be appreciated. Is there anything else to do with route/iptables?
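    One thing worth ruling out on a multihomed Linux box is reverse-path filtering, which can silently drop packets arriving on an interface that the kernel would not have used to reach the source address. A quick, hedged check (the only value taken from the question is the interface name eth4):

        # Show the current reverse-path filtering settings
        sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth4.rp_filter
        # Temporarily relax them and retest the multicast client
        sysctl -w net.ipv4.conf.eth4.rp_filter=0
        sysctl -w net.ipv4.conf.all.rp_filter=0

    Separately, the receiving program has to join the group on eth4 explicitly (IP_ADD_MEMBERSHIP with that interface's address) and bind to the multicast group address or INADDR_ANY on port xxx; binding the socket to eth4's unicast address will not receive the group traffic.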

    Read the article

  • How can I setup my local Nginx server so I can edit the files?

    - by Shane Grant
    I have my local development machine running Arch Linux, Nginx, PHP-FPM and MySQL. In order for the websites I am working on to run the files need to be owned by the http user. The files are currently located in folders like this:
        /srv/http/site1/
        /srv/http/site2/
    When I use the following chown command on the http folder the sites work fine, but I cannot edit the files with my user:
        chown -R http.users /srv/http
    When I do this the sites do not work, but I can edit the files:
        chown -R shane.http /srv/http
    How can I make it so that my user can edit the files, and the web server can run them at the same time? Thank you
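    A common way out is to keep your own user as the owner, give the http group read and traverse rights, and only open group write where a site actually needs to write. A hedged sketch along those lines (the uploads path is an invented example):

        # Own the tree as your user, with http as the group
        chown -R shane:http /srv/http
        # Group may read files and enter directories, but not modify them
        chmod -R g+rX /srv/http
        # New files created inside inherit the http group automatically
        find /srv/http -type d -exec chmod g+s {} +
        # If a site needs a writable directory (uploads, cache), open only that path:
        # chmod -R g+w /srv/http/site1/uploads

    Nginx and PHP-FPM generally only need read access to serve and execute the files, so this usually satisfies both sides at once.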

    Read the article

  • To Delete or Not to Delete Arcserve Makeup Jobs?

    - by Cliff Racer
    Every once in a while, I have a backup job that fails on my backup server running ARCserve 12.5. A failure results in the creation of a 'makeup' job. I run the makeup job, and even if it completes it stays in my job queue. These have piled up over the past couple of years, and I find myself wondering if it's OK to delete them, knowing that ARCserve relies on a SQL database to catalog backup info and data. If I run a makeup job, can I just delete it afterward? Should I just collect them? I have not seen anything so far that makes me feel confident about what to do with these leftover jobs.

    Read the article

  • EC2-like virtualisation platform with non-RDP access

    - by code'
    I'm looking into setting up a few small VMs. The trouble is the software I intend to install on them (Cisco VPN Client) blocks out networking (other than to the target VPN destination...) with no workaround. This means that Remote Desktop or other methods of connecting to VMs that go via the Internet (e.g. GoToMyPC, LogMeIn) are a non-starter. What I'm really looking for is an EC2-like platform but which gives direct access to the VM through (for example) Hyper-V Manager. Sadly the only way they all seem to offer to remote control the VMs is direct access via Remote Desktop, whereas I need to be one layer above that (if that makes sense). A viable alternative would be to run virtualisation software within a Windows EC2 instance; obviously hardware virtualisation is impossible but I wonder if there are any software virtualisation platforms that could be run and that would work. Does anyone know if something like this exists/is possible? Thanks! C

    Read the article

  • JBoss throws an error looking up the local address when starting

    - by Jeeba
    Hello, I'm installing a fresh JBoss 5.1 on a CentOS 5.5 machine. I don't have Apache installed. When I try to start JBoss using the command ./run.sh I get the following error:
        15:13:57,414 INFO  [JMXKernel] Legacy JMX core initialized
        15:14:03,856 ERROR [ServerInfo] Error looking up local address
        java.net.UnknownHostException: dhcppc1: dhcppc1
            at java.net.InetAddress.getLocalHost(InetAddress.java:1354)
            at org.jboss.system.server.ServerInfo.getHostAddress(ServerInfo.java:364)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            ....
    After that I can reach JBoss only at 127.0.0.1:8080; using localhost:8080 doesn't work. I think it's a CentOS configuration problem, but I'm a total newbie at managing ports and maybe firewalls, so what do you think could be the problem?
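    The UnknownHostException generally just means the machine's own hostname (dhcppc1 here) does not resolve. A hedged sketch of the usual fix, plus the JBoss 5 bind flag for listening on addresses other than 127.0.0.1:

        # Let the JVM resolve the local hostname (run as root; adjust the address if the box has a static IP)
        echo "127.0.0.1   dhcppc1" >> /etc/hosts
        # JBoss 5 binds only to 127.0.0.1 by default; -b changes the bind address
        ./run.sh -b 0.0.0.0

    If localhost:8080 still fails afterwards, check that /etc/hosts also maps localhost to 127.0.0.1, since that mapping is what the browser relies on.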

    Read the article

  • Configure Supervisor to manage init.d services

    - by Eduard Luca
    I installed uwsgi and created a bash script, which allows me to start/stop uwsgi in the following manner: service uwsgi [start|stop]. This bash script is located in /etc/init.d/uwsgi. Now, I want to (politely) ask Supervisor to use that script to manage the uwsgi process. All the tutorials indicate that this is not the way to do it, however I do want to be able to do both service uwsgi stop and supervisorctl stop uwsgi (not sure if I nailed the syntax of the latter) -- even though I am aware that the first one will not in fact stop my service because supervisor will restart it (that's exactly what I need). Note that I'm using uwsgi in emperor mode if that matters in any way.

    Read the article

  • Finding text in Ubuntu (gnome) terminal output

    - by Rickson
    Imagine this scenario: you run a command in a gnome terminal. This command has written a bunch of output to the terminal. After some time, you realize you need the value of a variable (let's say variable_needed) that was printed by the command somewhere in the terminal. How do you find it? The KDE terminal used to have a shortcut, ctrl+shift+f, which searched the terminal output. It seems that gnome-terminal doesn't have it (at least on Ubuntu 10.04.2 LTS). Is there any way of adding it? Is there any other good terminal I could use that has it? Note that the output has already been written, so I don't want to (and cannot) run the command again combined with grep, pipes, vim, emacs, etc.

    Read the article

  • How to force laptop mode on/off

    - by Vi
        root@vi-notebook:/home/vi# laptop_mode start force
        Laptop mode enabled, not active
    How to start laptop mode? It starts successfully when AC adapter is removed, but not by explicit command. The system is GNU/Linux Debian i386 squeeze (not up to date), 2.6.30-zen2-31270-gc7099db-dirty, Acer Extensa 5220.
    Update: Changed to ENABLE_LAPTOP_MODE_ON_AC=1 in /etc/laptop_mode/laptop-mode.conf; now it is turned on always. But I can't turn it off with laptop_mode stop force, it stays turned on anyway. How do I turn it off again?

    Read the article

  • Vim lint check - only show message if there's an error

    - by GorillaSandwich
    I have this line in my .vimrc, which means "when I save a .rb file, run it through ruby -c" (the ruby interpreter's error checking):
        autocmd BufWritePost *.rb !ruby -c <afile>
    When I save a file, I always see output at the bottom of the screen, so I get used to it and start ignoring it. What I want is to only see output if there are errors. I can see that when there are errors, after it says what they are, it says "shell returned 1" at the bottom. How can I modify this line so that it only shows a message if the shell returns 1? Is there a way to conditionally suppress output from a shell command run in vim?
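    One way to get the "quiet on success" behaviour without any Vimscript is to move the check into a tiny wrapper script and point the autocmd at that instead of ruby -c directly. A hedged sketch; the script name and its placement on the PATH are assumptions:

        #!/bin/bash
        # quiet_ruby_check.sh - print ruby -c output only when the syntax check fails.
        out=$(ruby -c "$1" 2>&1)
        if [ $? -ne 0 ]; then
            echo "$out"
            exit 1
        fi

    The autocmd would then become something like autocmd BufWritePost *.rb !quiet_ruby_check.sh <afile>; vim still runs an external command on every save, but nothing is printed unless the check actually fails.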

    Read the article

  • Network vulnerability and port scanning services

    - by DigitalRoss
    I'm setting up a periodic port scan and vulnerability scan for a medium-sized network implementing a customer-facing web application. The hosts run CentOS 5.4. I've used tools like Nmap and OpenVAS, but our firewall rules have special cases for connections originating from our own facilities and servers, so really the scan should be done from the outside. Rather than set up a VPS or EC2 server and configuring it with various tools, it seems like this could just be contracted out to a port and vulnerability scanning service. If they do it professionally they may be more up to date than something I set up and let run for a year... Any recommendations or experience doing this?

    Read the article

  • Web Service gets unavailable after several concurrent calls

    - by Roman
    We are testing the GoDaddy Virtual Data Center and ran into a very strange issue where our web site becomes unavailable. GoDaddy Support keeps saying the issue is in our web server settings, but looking at the results of our tests I doubt it.
    TEST ENVIRONMENT: Virtual data center with Windows, hosted at GoDaddy.com. All servers run Windows Server 2008 R2 Datacenter with IIS 7. Server One has IP address 10.1.0.4, Server Two has IP address 10.1.0.3; both servers are in a private network not visible from outside. A port forward with IP address 50.62.13.174 is assigned to Server One.
    TEST DESCRIPTION: JMeter is used as a client app to simulate 30 concurrent users sending 100 SOAP requests each. The interval between requests is 1 second. The HTTP link used for testing: http://50.62.13.174/v2/webservices.asmx
    TEST ONE: The test is run from a computer in our office. After JMeter starts running the test, almost immediately, the link above becomes unavailable in a browser. After test completion, the link is not available in a browser for about 5 more minutes. Remote Desktop is working well, so we can connect to Server One remotely. About 5 minutes after test completion, the link becomes available in a browser again.
    TEST TWO: The test is run from Server Two (which is part of our virtual data center). The test works very well, with no visible delays in processing. The link is available in a browser all the time.
    TEST THREE: The test is run from Server One using localhost. The result is the same as in TEST TWO - no issues.
    TEST FOUR: We repeated TEST ONE from other computers that we have located in different countries, all with the same result as TEST ONE.
    CONCLUSION: As the test works well from Server Two but does not work from outside our virtual data center, we feel there are issues with the network or its capacity. The whole behaviour looks like our requests from outside get stuck somewhere before reaching our virtual data center. Has anybody had similar issues in the past? Is there a chance that something is wrong with our server settings?

    Read the article

  • How to create a bootable USB Windows OS using Mac OS X

    - by Ali
    I'm having trouble here because my PC got infected today, and I have tried everything to get it back; the only option left for me now is to do a clean install. What I have is a MacBook Pro and an 8 GB USB stick, which I have now emptied. I've downloaded Windows 7 from my college website [with a license, not pirated] and want to make a bootable USB so I can format my PC and get it running again. I know this can be done with a DVD/CD, but how can I make a bootable USB from Mac OS X to run it on my PC?

    Read the article

  • Software to store frequently used text in PC

    - by user15660
    Hi, I am looking for free software that can run in the task bar (near the system time) where I can store frequently used text, like my full street address, paths of specific deep folders and files on the computer, etc. This way I can just click the icon, which should pop up a screen where I can copy the text/string I am looking for. Any ideas? Thanks in advance

    Read the article

  • How to upgrade a 1.4.3 TortoiseSVN-created repository to 1.6.x?

    - by SiegeX
    A few years ago, TortoiseSVN 1.4.3 was deployed to our software development team and we are now looking at upgrading the client to the latest 1.6.x version. I had hoped this upgrade would be transparent, with the additional features and modifications being client-side. For the most part this was true, except for a very important feature -- merging. When I try to merge a feature branch back into trunk I get a show-stopping "Merge tracking not supported" error. Here are some facts worth noting:
    - When the repo was first created (before I was on board), it was created via the TortoiseSVN client itself. We do not have an 'svn server daemon' per se; rather, the repository folders/database reside on a share folder that is accessible from our workstation machines via file:///. This was actually an eye opener for me; I had always thought there was some SVN server daemon we were talking to.
    - We do not have any access to the underlying machine hosting the SVN share other than the ability to read/write to the share itself. I don't even know what OS the machine is running. This share server was chosen because its drives are backed up nightly by our IT group.
    - In all honesty, we really don't need the merge tracking feature, although it would be nice to have. For the time being it would be sufficient to be able to use a 1.6.x TortoiseSVN client on the 1.4.3 repository and have it merge (sans tracking) without error.
    So now the question becomes: how does one upgrade a client-created 1.4.3 repo to a 1.6.x-compatible version without access to the underlying machine the repo resides on? I was hoping the TortoiseSVN client itself had the ability to do this, but that does not appear to be the case. Will I be forced to copy the entire repo over to my local drive, run some svn commands to upgrade the repo locally, then copy the repo back to the share point? If so, will doing this break any compatibility with the 1.4.3 clients in case we can't upgrade them all at the same time? Thanks for the help.
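    If the copy-it-locally route hinted at above is taken, the format bump itself is one command-line Subversion step. A hedged sketch (the paths are placeholders, a local svnadmin 1.5 or newer is assumed, and the repository on the share should be idle while this runs):

        # Copy the repository directory off the share to a local scratch location
        cp -a /mnt/svnshare/repo /tmp/repo-upgrade
        # Bump the repository format so 1.5+/1.6 features such as merge tracking work
        svnadmin upgrade /tmp/repo-upgrade
        # Sanity-check the result before touching the original
        svnadmin verify /tmp/repo-upgrade

    Whether remaining 1.4.3 clients can still open the upgraded repository over file:// is exactly the compatibility question raised above, so it is worth testing against the upgraded copy before the original on the share is replaced.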

    Read the article
