Search Results

Search found 41582 results on 1664 pages for 'fault tolerance'.

  • Starting dhcpd failed (CentOS)

    - by tike
    I have an issue with DHCP on CentOS: I have configured dhcpd according to my IP layout and so on, but it fails when I try to start the service. While troubleshooting I stopped the firewall (iptables) and disabled SELinux. Is there any other area I need to consider? Is there anything else that could prevent the dhcpd service from starting?
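    A couple of checks usually narrow this down (a minimal sketch; the config file lives at /etc/dhcpd.conf on older CentOS releases and /etc/dhcp/dhcpd.conf on newer ones):

        # validate the configuration before starting the service
        dhcpd -t -cf /etc/dhcpd.conf

        # watch the log while starting; dhcpd logs the reason it refuses to run
        tail -f /var/log/messages &
        service dhcpd start

    A common cause is that no subnet declaration in dhcpd.conf matches the subnet of the interface dhcpd is told to listen on.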

  • SAN with iSCSI-Target Performance Horrendous

    - by Justin
    We have a poor man's SAN set up on a 1U Ubuntu server running iSCSI-Target, with two 300GB drives in RAID-0. We use it for block-level storage for virtual machines. The hypervisor is connected to the SAN via gigabit on a dedicated VLAN and interfaces. We have only a single virtual machine set up and are doing some benchmarks. If we run hdparm -t /dev/sda1 from the virtual machine, we get 'ok' performance of 75MB/s from the virtual machine to the SAN. Then we basically compile a package with ./configure and make. Things start ok, but then all of a sudden the load average on the SAN grows to 7+ and things slow to a crawl. When we SSH into the SAN and run top, the load is indeed 7+, but CPU usage is basically nothing, and the server has 1.5GB of memory available. When we kill the compile on the virtual machine, the load on the SAN slowly drops back below 1. What in the world is causing this? How can we diagnose this further? Two screenshots were taken on the SAN during the high load: the output of iotop, and the output of top.
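    A high load average combined with an idle CPU usually means processes stuck in uninterruptible I/O wait. A quick way to confirm on the SAN (assuming the sysstat package is installed):

        # per-device utilisation and average wait times, refreshed every 2 seconds
        iostat -x 2

    If %iowait dominates the CPU line and %util sits near 100 on the RAID device, the disks rather than the CPU are the bottleneck; a compile generates many small synchronous writes, which is a worst case for two drives in RAID-0 behind iSCSI.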

  • First Request to IIS Express Fails with 503 Service Unavailable, Second Succeeds

    - by Chris Moschini
    Each time I start my ASP.NET MVC 3 app from Visual Studio 2010, IIS Express launches and IE spins waiting. The request fails with HTTP 503 Service Unavailable. I hit Refresh in IE, and the request succeeds. All subsequent requests succeed until I stop debugging. The next time I go to start debugging, the first request fails again. Has anyone else experienced this? In IISExpress\applicationhost.config I have:

        <site name="ProjectName" id="6">
          <application path="/" applicationPool="Clr4IntegratedAppPool">
            <virtualDirectory path="/" physicalPath="c:\users\chris\dropbox\code\2010\SolutionName\ProjectName" />
          </application>
          <bindings>
            <binding protocol="http" bindingInformation="*:80:laptop" />
          </bindings>
        </site>

    I have this in my hosts file:

        127.0.0.1 laptop

    And my project is set to start with IIS Express, with Project Url set to http://laptop. It's very strange that only the first request fails, perhaps as though Visual Studio isn't waiting long enough for IIS Express to start? Is there some way to make it wait? Stopping debugging, making a change, and then starting again is one of the most common tasks I do, so adding another step to get there is pretty annoying.

  • Debian 3.1 (Sarge) init.d boot order

    - by Adam Lewis
    I am using a TS-7800 single-board computer from Technologic Systems that ships with Debian 3.1 (Sarge). I had updated it to Squeeze, but due to various driver issues I have been forced to roll back to Sarge. I am attempting to load the drivers and configuration my application services need before they start. Ideally I would call one init.d script that sets up the drivers / configuration, then call the other init.d scripts (one for each process). I am left scratching my head on how to guarantee the boot sequence. I know that in later versions of Debian I could use the LSB header to achieve this; is there anything comparable to the LSB header in Sarge?
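    Before LSB headers existed, Debian ordered boot scripts purely by the numeric prefix of the /etc/rc?.d symlinks, which update-rc.d lets you set explicitly. A minimal sketch (the script names here are placeholders):

        # drivers/configuration first: S10 in runlevels 2-5
        update-rc.d my-drivers start 10 2 3 4 5 . stop 90 0 1 6 .

        # each application service afterwards: S20 and up
        update-rc.d my-service-a start 20 2 3 4 5 . stop 80 0 1 6 .
        update-rc.d my-service-b start 21 2 3 4 5 . stop 79 0 1 6 .

    Within a runlevel, lower start numbers run first, so S10 completes before anything at S20. For scripts that already have links, remove them first with update-rc.d -f my-service-a remove.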

  • Missing NIC and USB devices

    - by MJ
    Coming into work today, I've found we have a few different computers (different companies/networks/OS versions, all Windows-based) that are all having the same issues. 1) The network NIC cannot be viewed from Network Connections. If you refresh, it says the service is not started, yet Services shows the service as started and running. 2) USB devices are not recognized when plugged in, even after a scan for hardware changes, etc. We have managed AV that is kept up to date, and a managed patch policy that has all these machines at the most recent patch level. I'm just wondering if anyone else has experienced these same symptoms, and what they have done to resolve them.

  • Scalability: when to use a CDN?

    - by ajsie
    I've read about CDNs, but I don't know exactly what they are for. Let's say I've got an international social network (text and image content) whose traffic is growing from different countries; would I benefit from a CDN? The picture I got from the sources I've read is that a CDN copies your content and puts it on many servers spread out over the world, so that users fetch it from the nearest point. Does this mean that every server has a copy of my MySQL database and the image files? Is this the proper way to make your web service available to the whole world? Because how else could you set up servers throughout the world, other than contacting hosting companies in each country?

  • URL Rewriting on GoDaddy Virtual Server

    - by Aristotle
    I migrated a Kohana 2 application from a shared-hosting environment over to a virtual dedicated server. After this migration, I can't seem to get my .htaccess file working again. I apologize up front, but over the years I have never experienced so much frustration with anything else as I do with the dreaded .htaccess file. Presently I have my project installed immediately within a directory in my public folder:

        /var/html/www/info.php         (general information about the server)
        /var/html/www/logo.jpg         (some flat file)
        /var/html/www/somesite.com/    [Kohana site exists here]

    So my .htaccess file is within that directory, and has the following contents:

        # Turn on URL rewriting
        RewriteEngine On

        # Installation directory
        RewriteBase /somesite.com/

        # Protect application and system files from being viewed
        # This is only necessary when these files are inside the webserver document root
        RewriteRule ^(application|modules|system) - [R=404,L]

        # Allow any files or directories that exist to be displayed directly
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d

        # Rewrite all other URLs to index.php/URL
        RewriteRule .* index.php?kohana_uri=$0 [PT,QSA,L]

        # Alternatively, if the rewrite rule above does not work, try this instead:
        #RewriteRule .* index.php?kohana_uri=$0 [PT,QSA,L]

    This doesn't work. The initial controller is loaded, since index.php is called up implicitly when nothing else is in the URL. But if I try to load some other, non-default controller, the site fails. If I place index.php back in the URL, the call to other controllers works just fine. I'm really at my wits' end, and would appreciate some direction here.
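    When URLs work with index.php in them but fail without it, the rewrite rules are often simply not being read at all. On a freshly provisioned server it is worth confirming that mod_rewrite is enabled and that .htaccess files are allowed for the document root; a sketch (the exact <Directory> path depends on the server's Apache config):

        <Directory "/var/html/www">
            AllowOverride All
        </Directory>

    With AllowOverride None, the default in many distributions' configs, Apache silently ignores the .htaccess file, which produces exactly this symptom.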

  • Cannot install JDK 1.5 on Ubuntu 12.04

    - by u123
    I have installed Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-23-generic x86_64). Some info about the machine:

        $ grep --color "model name" /proc/cpuinfo
        model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
        model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
        model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
        model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz

    I need to install JDK 5 to support an old application. I have tried:

        ~$ sudo apt-get install openjdk-5-jdk
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package openjdk-5-jdk

    I have also tried:

        ~$ sudo apt-get install sun-java5-jdk
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package sun-java5-jdk

    So it's not available in the repos. I have tried to follow this guide (adding the Jaunty repos), http://leonardo-pinho.blogspot.dk/2010/11/java-15-no-ubuntu-1010.html, but got the same result. Then I tried downloading jdk-1_5_0_22-linux-i586.bin from http://www.oracle.com/technetwork/java/javasebusiness/downloads/java-archive-downloads-javase5-419410.html#jdk-1.5.0_22-oth-JPR and doing:

        ~$ chmod a+x jdk-1_5_0_22-linux-i586.bin
        ~$ sudo ./jdk-1_5_0_22-linux-i586.bin
        Sun Microsystems, Inc. Binary Code License Agreement
        yes
        Unpacking...
        Checksumming...
        0
        0
        Extracting...
        ./jdk-1_5_0_22-linux-i586.bin: 424: ./jdk-1_5_0_22-linux-i586.bin: ./install.sfx.19556: not found
        ./jdk-1_5_0_22-linux-i586.bin: 1: cd: can't cd to jdk1.5.0_22

    Any suggestions?
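    The "./install.sfx.19556: not found" message is the classic symptom of running a 32-bit (i586) binary on a 64-bit Ubuntu install that lacks 32-bit runtime support. A sketch of the usual remedy on 12.04 (ia32-libs is the package name from that era):

        sudo apt-get install ia32-libs
        sudo ./jdk-1_5_0_22-linux-i586.bin

    Alternatively, Oracle's archive page also lists amd64 builds of JDK 1.5.0_22, which avoid the 32-bit layer entirely.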

  • Apache2 Virtual Host with ScriptAlias returning 403

    - by sissonb
    I am trying to reference my libs directory, which is a sibling of my DocumentRoot, using the following ScriptAlias:

        ScriptAlias /libs/ "../libs"

    But when I go to example.com/libs/ I get the following error:

        Forbidden
        You don't have permission to access /libs/ on this server

    I am able to view the libs directory using the following configuration, so I don't think it's a file-permission error:

        <VirtualHost *>
            ServerName example.com
            ServerAlias www.example.com
            DocumentRoot C:/www/libs
        </VirtualHost>

    The more relevant httpd.conf settings are below:

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
        </Directory>

        <Directory "C:/www">
            Options Indexes FollowSymLinks
            AllowOverride None
            Order Deny,Allow
            Deny from none
            Allow from all
        </Directory>

        NameVirtualHost *

        <VirtualHost *>
            ServerName example.com
            ServerAlias www.example.com
            DocumentRoot C:/www/example
            ScriptAlias /libs/ "../libs"
            <Directory "C:/www/libs">
                Options Indexes FollowSymLinks
                AllowOverride None
                Options +ExecCGI
                Order Deny,Allow
                Deny from none
                Allow from all
            </Directory>
        </VirtualHost>
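    One detail that commonly produces exactly this 403: Alias and ScriptAlias targets must be absolute filesystem paths, and a relative target like "../libs" is not resolved against the DocumentRoot. A sketch using the layout above:

        ScriptAlias /libs/ "C:/www/libs/"

    With a relative target, the URL likely maps to a directory that no permissive <Directory> block covers, so the "Deny from all" in <Directory /> applies and the request is forbidden.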

  • VMWare Server Windows 2008 NAT Problem

    - by David
    At my new job our workstations run Windows Server 2008. However, for the specific task for which I've been hired, I need to set up a couple of Linux VMs. So I grabbed the free VMware Server and created an Ubuntu image and a Slackware image (the former to more closely mimic the production server, the latter because I'm more familiar with it). For desktop security purposes I need to use NAT for network access (I would have preferred bridged, but I'm told that would go against some policy here, and my whole workstation would be sandboxed from the switch). However, I can't seem to get it working right. I can ping out from the VMs to LAN addresses as well as internet addresses, and I can resolve DNS names. However, attempts to use a web browser or perform any kind of higher-level interaction like that just time out. Googling around yesterday led me to various workarounds for similar situations, but none solved my specific one (for example, Norton firewall blocking the connection on the host, or even the Windows firewall). I also saw some forum posts where people said it's a known issue with VMware and Windows Server 2008 (and Windows 7). So far I haven't been able to find a suggestion that gets me past this roadblock. I'm really not very familiar with managing a Windows Server 2008 box, so it's possible there's just some security setting somewhere that I need to modify. Does anybody have any suggestions on where I should look? UPDATE: I'm now looking at the "Network and Sharing Center" on the host workstation, and it shows "VMware Network Adapter VMnet8" (which is what I'm using) as an "Unidentified network" with "No Internet access". It looks like I can't modify ICS under the group policy. Any suggestions on how to allow this connection to have internet access?

  • How to configure Apache and Tomcat with vhosts?

    - by Umar Farooq Khawaja
    I have a server with a static, public IP address, and I have a registered domain name. For the sake of illustration, let's suppose they are:

        IP address:  12.34.56.78
        Domain name: example.com

    On this single machine I am running the following:

        - A website (over IIS7), available locally at localhost:80
        - A JetBrains TeamCity instance (over Tomcat), available locally at localhost:1234
        - A VisualSVN Server instance (over Apache), available locally at localhost:5678/svn

    I have set up an A record for example.com and the following CNAME records: www.example.com, builds.example.com, sources.example.com. I would like to configure Tomcat and Apache such that if I point my browser at builds.example.com I end up at the JetBrains TeamCity instance, and if I point my browser at sources.example.com I end up at the VisualSVN Server instance. I thought I could configure Apache to vhost example.com:5678/svn as sources.example.com, so I added the following lines to the Apache httpd.conf file:

        Listen 5678
        NameVirtualHost *:5678
        <VistualHost *:5678>
            ServerName sources.example.com
            DocumentRoot /svn
        </virtualHost>

    That broke the VisualSVN instance, so I had to revert it to just:

        Listen 5678

    Help!
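    Since all three names resolve to the same IP, the usual approach is name-based virtual hosts on a single port that reverse-proxy to the local services, rather than extra Listen ports. A sketch for Apache (assuming mod_proxy and mod_proxy_http are loaded; note that only one server can own port 80 on the machine, so IIS would have to move off it or Apache would have to listen elsewhere):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName builds.example.com
            ProxyPass        / http://localhost:1234/
            ProxyPassReverse / http://localhost:1234/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName sources.example.com
            ProxyPass        / http://localhost:5678/
            ProxyPassReverse / http://localhost:5678/
        </VirtualHost>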

  • Windows: what is the difference between DEP always on and DEP opt-out with no exceptions?

    - by Peter Mortensen
    What is the difference between DEP always on ("/NoExecute=AlwaysOn" in boot.ini) and DEP opt-out ("/NoExecute=OptOut" in boot.ini) with no exceptions? "No exceptions" means an empty list of programs to which DEP does not apply. DEP = Data Execution Prevention (hardware). One would expect the two modes to work the same way, but it makes a difference for some applications, e.g. for all versions of UltraEdit 14 (14.2): it crashes at startup with DEP always on, at least on Microsoft Windows XP Professional x64 Edition. (2010-03-11: this problem has been fixed in UltraEdit 15.2 and later.)

    Update 1: I think this difference is caused by the backdoors that Microsoft has put into hardware DEP for OptOut, according to Fabrice Roux (see below). In the case of IrfanView, for which Steve Gibson observed the same difference as I did for UltraEdit (see below), the difference is caused by a non-DEP-aware EXE packer (ASPack) that Microsoft coded a backdoor for.

    Is there a difference between Windows XP, Windows Vista, and Windows 7? Is there a difference between the 32-bit and 64-bit versions of Windows?

    Sources:

    From "Hardware DEP has a backdoor" by Fabrice Roux, 2007-02-26 [http://blog.fabriceroux.com/index.php/2007/02/26/hardware_dep_has_a_backdoor?blog=1]: "IrfanView was not using any trick to evade DEP ... Microsoft just coded a backdoor used only in OPTOUT. Basically Microsoft checks the executable header for a section matching one of the 3 strings. If one of these strings is found, DEP will be turned OFF for this application by Windows. ... 'aspack', 'pcle', 'sforce'"

    From Steve Gibson [http://www.grc.com/sn/sn-078.htm]: "I can't find any documentation on Microsoft's site anywhere, because we're seeing a difference between always-on and opt-out. That is, you would imagine that always-on mode would be the same as opting out if you weren't having any opt-out programs. It turns out it's not the case. For example ... the IrfanView file viewer ... runs fine in opt-out mode, even if it has not been opted out. But it won't launch, Windows blocks it from launching ... in always-on mode."

    From Steve Gibson [http://www.grc.com/sn/sn-083.htm]: "... IrfanView ... won't run with DEP turned on. It's because it uses an EXE packer, an executable compression program called ASPack. And it makes sense that it wouldn't, because naturally an executable compressor has got to decompress the executable, so it allocates a bunch of data memory into which it decompresses the compressed executable, and then it runs it. Well, it's running a data allocation, which is exactly what DEP is designed to stop. On the other hand, UPX, which is actually the leading and most popular EXE compressor, is DEP-compatible, because those guys realized, hey, when we allocate this memory, we should mark the pages as executable."

  • Setting WMI permissions remotely

    - by christianlinnell
    I've developed a tool that does a simple retrieval of registered services and installed applications from remote Windows Server 2003 servers via WMI. My problem is, the tool needs to be run on an ad hoc basis by a user who is not an administrator of those servers. I've created a domain user (which the tool will use to run the query) that I'd like to grant remote WMI permission on each server, but given there are about 200 servers, I can't do it manually. Is there a way to grant access to that domain user via WMI, or by distributing a registry change via SMS or Group Policy?
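    On Server 2003 SP1 and later, part of what a non-administrator needs for remote WMI is DCOM access, which membership in each server's local "Distributed COM Users" group provides. A sketch of scripting that across servers with Sysinternals PsExec, run interactively (servers.txt and MYDOMAIN\wmiquery are placeholders; the WMI namespace security itself may still need to be granted separately):

        for /f %s in (servers.txt) do psexec \\%s net localgroup "Distributed COM Users" MYDOMAIN\wmiquery /add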

  • Need help accessing the Vista WampServer localhost from Virtual PC 2007 running an XP VM

    - by Reg
    (I had posted this on Stack Overflow, but it was suggested there that I post it here instead.) I have a Vista laptop on which I'm running WampServer, and I have Virtual PC 2007 set up with Windows XP running in the VM. My goal is to use IE6 in the XP VM to view the localhost site served by WampServer on Vista. I'm not interested in the XP VM having any access to the internet, only to my Vista WampServer's localhost. The Vista WampServer works fine. As suggested in a blog post I read, I installed the loopback adapter on Vista, set the loopback address to 192.168.21.1, and set the XP VM's IP to 192.168.21.2. I am able to successfully ping the Vista loopback adapter from the XP VM. I've turned WampServer to "server online", and I've disabled the firewalls on both the Vista host and the XP VM. But for some reason, I still can't get the XP VM to see localhost on the Vista WampServer. I've tried using the Vista //name, and I've tried the IP 192.168.21.1 directly, with and without the port. For whatever it's worth, I'm not able to see anything under the XP VM's network places (though I don't know if I'm supposed to be able to see anything). So at this point I'm stuck, and I'm still not sure how to get this XP VM to talk to my Vista WampServer localhost. Any advice on how to fix this problem is much appreciated. Thanks in advance for your help. -R
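    A quick check that separates a network problem from an Apache binding problem (using the addresses configured above):

        rem run from the XP VM: does anything answer on port 80?
        telnet 192.168.21.1 80

    If ping works but this connection is refused, Apache is likely bound only to 127.0.0.1; changing WampServer's httpd.conf from "Listen 127.0.0.1:80" to "Listen 80" and restarting Apache makes it accept connections on all interfaces. This is a sketch; the exact Listen line depends on the WampServer version.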

  • git post-receive hook throws "command not found" error but seems to run properly and no errors when run manually

    - by Ben
    I have a post-receive hook that runs on a central git repository set up with gitolite, to trigger a git pull on a staging server. It seems to work properly, but throws a "command not found" error when it runs. I am trying to track down the source of the error, but have not had any luck; running the same commands manually does not produce an error. The error changes depending on what was done in the commit being pushed to the central repository. For instance, if 'git rm' was committed and pushed, the error message is "remote: hooks/post-receive: line 16: Removed: command not found", and if 'git add' was committed and pushed, the error message is "remote: hooks/post-receive: line 16: Merge: command not found". In either case, the 'git pull' run on the staging server works correctly despite the error message. Here is the post-receive script:

        #!/bin/bash
        #
        # This script is triggered by a push to the local git repository. It will
        # ssh into a remote server and perform a git pull.
        #
        # The SSH_USER must be able to log into the remote server with a
        # passphrase-less SSH key *AND* be able to do a git pull without a passphrase.
        #
        # The command to actually perform the pull request on the remote server comes
        # from the ~/.ssh/authorized_keys file on the REMOTE_HOST and is triggered
        # by the ssh login.

        SSH_USER="remoteuser"
        REMOTE_HOST="staging.server.com"

        `ssh $SSH_USER@$REMOTE_HOST` # This is line 16

        echo "Done!"

    The command that does the git pull on the staging server is in the ssh user's ~/.ssh/authorized_keys file, and is:

        command="cd /var/www/staging_site; git pull",no-port-forwarding,no-X11-forwarding,no-agent-forwarding, ssh-rsa AAAAB3NzaC1yc2EAAAABIwAA... (the rest of the public key)

    This is the actual output from removing a file from my local repo, committing it locally, and pushing it to the central git repo:

        ben@tamarack:~/thejibe/testing/web$ git rm ./testing
        rm 'testing'
        ben@tamarack:~/thejibe/testing/web$ git commit -a -m "Remove testing file"
        [master bb96e13] Remove testing file
         1 files changed, 0 insertions(+), 5 deletions(-)
         delete mode 100644 testing
        ben@tamarack:~/thejibe/testing/web$ git push
        Counting objects: 3, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (2/2), done.
        Writing objects: 100% (2/2), 221 bytes, done.
        Total 2 (delta 1), reused 0 (delta 0)
        remote: From [email protected]:testing
        remote:    aa72ad9..bb96e13  master -> origin/master
        remote: hooks/post-receive: line 16: Removed: command not found   # The error msg
        remote: Done!
        To [email protected]:testing
           aa72ad9..bb96e13  master -> master
        ben@tamarack:~/thejibe/testing/web$

    As you can see, the post-receive script gets to the echo "Done!" line, and when I look on the staging server the git pull has been run successfully, but there's still that nagging error message. Any suggestions on where to look for the source of the error message would be greatly appreciated. I'm tempted to redirect stderr to /dev/null, but would prefer to know what the problem is.
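    One observation about line 16: backticks do not just run a command, they substitute its output back into the shell, which then tries to execute that output as a new command. Here the first word of git pull's output ("Removed ..." or "Merge ...") becomes the "command not found". A sketch of the likely fix:

        # run ssh directly instead of executing its output
        ssh "$SSH_USER@$REMOTE_HOST"

    This keeps the pull working exactly as before while silencing the spurious error.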

  • Including hostname in apache logwatch reports

    - by Robert Munteanu
    When hosting multiple domains with Apache, it's useful to see the virtual host name included in logwatch's apache output, but I only get:

        --------------------- httpd Begin ------------------------

        Requests with error response codes
           400 Bad Request
              /: 1 Time(s)
              /robots.txt: 1 Time(s)

    whereas I would like something like:

        --------------------- httpd Begin ------------------------

        Requests with error response codes
           400 Bad Request
              example.com/: 1 Time(s)
              example.org/robots.txt: 1 Time(s)

    How can I achieve this with logwatch?
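    Logwatch can only report what Apache logs, so the first step is recording the vhost in the access log; the conventional way is a %v prefix on the combined format (a sketch of the Apache side; logwatch's http service may additionally need its log-format setting adjusted to match the extra field, and the log path here is an example):

        # log the canonical server name in front of the combined format
        LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
        CustomLog /var/log/apache2/access.log vhost_combined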

  • XenServer and ZFS via NFS

    - by Jeroen Jacobs
    I'm trying to connect an NFS share to XenCenter. The NFS server is a ZFSguru distro (FreeBSD-based). The ZFS volume was exported like this:

        /sbin/zfs set sharenfs="on" temppool/share

    According to showmount, it's available:

        showmount -e
        /temppool/share    Everyone

    However, when I try to connect to it with XenServer (so it can be used as storage for VHDs), I get the following error:

        Internal error:Failure("Storage_access failed with: SR_BACKEND_FAILURE_73: [; NFS mount error[opterr=mount failed with return code 32]; ]")

    Anyone got an idea? Update: this is from the log on the NFS server:

        Sep 3 16:23:10 zfsguru mountd[962]: mount request from 192.168.10.217 for non existent path /temppool/share/7c8d3f2f-e0e0-5263-ccad-1cd32a4139cf
        Sep 3 16:23:10 zfsguru mountd[962]: mount request denied from 192.168.10.217 for /temppool/share/7c8d3f2f-e0e0-5263-ccad-1cd32a4139cf
        Sep 3 16:23:11 zfsguru mountd[962]: mount request from 192.168.10.217 for non existent path /temppool/share/7c8d3f2f-e0e0-5263-ccad-1cd32a4139cf
        Sep 3 16:23:11 zfsguru mountd[962]: mount request denied from 192.168.10.217 for /temppool/share/7c8d3f2f-e0e0-5263-ccad-1cd32a4139cf
        Sep 3 16:28:20 zfsguru mountd[962]: mount request denied from 192.168.10.217 for /temppool/share/17922178-0dfb-edf3-0037-2eddd79b9d02
        Sep 3 16:28:43 zfsguru last message repeated 5 times
        Sep 3 16:35:00 zfsguru mountd[962]: mount request denied from 192.168.10.217 for /temppool/share/b5735ccf-1997-8d77-83a0-2f34e37dda8d
        Sep 3 16:35:33 zfsguru last message repeated 4 times
        Sep 3 16:35:34 zfsguru mountd[962]: mount request denied from 192.168.10.217 for /temppool/share/b5735ccf-1997-8d77-83a0-2f34e37dda8d

    It seems XenServer is able to create the directories, but is unable to mount them afterwards.
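    The log lines point at the cause: XenServer mounts a subdirectory of the export (the SR UUID directory), and FreeBSD's mountd refuses subdirectory mounts unless the export allows them. A sketch of the usual remedy on FreeBSD-backed ZFS, where the sharenfs string is passed through to mountd (exact syntax may vary by ZFSguru version; the network values are taken from the log above):

        /sbin/zfs set sharenfs="-alldirs -maproot=root -network 192.168.10.0 -mask 255.255.255.0" temppool/share

    The -alldirs flag is what permits clients to mount paths below the export root.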

  • Install Glibc2 using Yum

    - by Nerrve
    I'm trying to install glibc version 2.11, which is needed for OpenOffice 3.4 (https://issues.apache.org/ooo/show_bug.cgi?id=119393), but I can't seem to find it with yum. I already have the following packages installed:

        glibc.i686            2.5-49.el5_5.7   installed
        glibc.x86_64          2.5-49.el5_5.7   installed
        glibc-common.x86_64   2.5-49.el5_5.7   installed
        glibc-devel.x86_64    2.5-49.el5_5.7   installed
        glibc-headers.x86_64  2.5-49.el5_5.7   installed
        libc-client.x86_64    2004g-2.2.1      installed

    and these available as updates:

        glibc.i686            2.5-81.el5_8.2   updates
        glibc.x86_64          2.5-81.el5_8.2   updates
        glibc-common.x86_64   2.5-81.el5_8.2   updates
        glibc-devel.i386      2.5-81.el5_8.2   updates
        glibc-devel.x86_64    2.5-81.el5_8.2   updates
        glibc-headers.x86_64  2.5-81.el5_8.2   updates
        glibc-utils.x86_64    2.5-81.el5_8.2   updates

    I ran the following to check the version, but it reports something different:

        [root@***** /]# ./lib64/libc.so.6
        GNU C Library stable release version 2.5, by Roland McGrath et al.

    Can someone please help? Thanks! EDIT: I'm using CentOS, kernel 2.6.18-128.1.10.el5.

  • Configure Squid on Windows 2008

    - by G.a.r.y.
    Hi, my problem is this: I have 3 PCs (192.168.1.2, .3, .4) and a Windows 2008 server (192.168.1.100); the router is 192.168.1.1. I just want the 3 PCs, which use 192.168.1.100 as their gateway, to be filtered by the Squid proxy running on the Windows 2008 server. On the server I've set the proxy 192.168.1.100:3128 in Control Panel, and the browser on the server works, with the connection filtered by the proxy. But on the 3 PCs it doesn't work, so maybe I should route all incoming requests into Squid, but I don't know how... Thanks

  • Assistance in setting up a new APC Smart-UPS RT in a new VMware environment

    - by user38085
    I'm new to the realm of setting up an APC Smart-UPS RT 8000VA UPS with a network management card (AP9618). The project calls for upgrading the card's firmware to the newest and greatest, and for installing the PowerChute Business Edition software with email notifications set up for temperature, shutdown, and low battery. I know I'll have to use the serial cable to flash the firmware, and install the software on one Server 2003 box; also on that server I'll have to set up the GUI (IP address) interface. What confuses me most is the overall process: which steps can be done without taking down the network, which would be very bad? Does flashing the firmware take down the UPS? Do I have to run BOOTP commands to set up the network card? Also, no agents will be used on any of the VMware OSes, and no SNMP traps will be used.

  • How to hide files in Apache 2.2 WebDAV Directory listings

    - by mdornsf
    I use Apache 2.2 as a WebDAV file server for a bunch of Mac and MS Windows clients. Unfortunately, both clutter the filesystem with files like .DS_Store or Thumbs.db. Since these files distract my users, I want to hide them from directory listings. Unfortunately, the standard way of hiding files in Apache (via IndexIgnore) does not seem to work over WebDAV. Is there any other way to hide such files?
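    IndexIgnore only affects mod_autoindex HTML listings, not the PROPFIND responses WebDAV clients use for directory listings. One alternative is to refuse access to those files outright, which at least stops clients from creating or opening them; whether a given client then omits them from its listing varies. A sketch in Apache 2.2 syntax:

        <FilesMatch "^(\.DS_Store|[Tt]humbs\.db|\._.*)$">
            Order allow,deny
            Deny from all
        </FilesMatch>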

  • Backup Exec 11D and Thecus N5200 Pro

    - by JohnyD
    I have a Thecus N5200 Pro integrated into my Windows 2003 AD network. My current backup solution involves Backup Exec 11D, but this requires a running agent service on Windows boxes or a similar daemon on Linux machines. The N5200 runs a custom Linux kernel, but as of yet I have been unable to add it to my backups through Backup Exec. Does anyone know of a method of backing up directly from the N5200 to Backup Exec, without moving the data to an intermediary machine for archiving?
